Re: [openstack-dev] [nova]

2013-12-11 Thread Joe Gordon
On Tue, Dec 10, 2013 at 11:48 PM, Gary Kotton gkot...@vmware.com wrote:



 On 12/11/13 12:43 AM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 
 
 On Tuesday, December 10, 2013 4:17:45 PM, Maithem Munshed 71510 wrote:
  Hello,
 
  I was wondering, what is the reason behind having nova audit resources
  as opposed to using usage stats directly from what is reported by the
  compute driver. The available resources reported from the audit can be
  incorrect in some cases. Also, in many cases the reported usage stats
  from the driver are correct, so auditing periodically while having the
  usage stats from the driver is inefficient. One of the cases which results in
  an incorrect audit is: existing VMs on a hypervisor that were created
  prior to deploying nova. As a result, the scheduler will see more
  available resources than are actually available. I am aware that
  Nova shouldn't be managing VMs that it hasn't created, but the
  reported available resources should be as accurate as possible.



While I agree there are valid use cases for wanting this, I don't think
existing VMs on a hypervisor is one of them. Nova wasn't designed to share
a nova-compute node, and I don't think we want to make it do that either.


 
  I have proposed the following blueprint to provide the option of using
  usage stats directly from the driver :
 
 
 
 https://blueprints.launchpad.net/nova/+spec/use-driver-usage-stats
 
  I would like to know what your thoughts are and would appreciate
 feedback.
 
  Regards,
 
  Maithem
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 One (big) problem is the virt drivers don't follow a standard format
 for the usage diagnostics, which has been discussed before in the
 mailing list [1].
 
 There is a nova blueprint [2] for standard auditing formats like in
 ceilometer which might be related to what you're looking for.

 This is one of the issues that we spoke about at the summit. At the moment
 the virt drivers return their usage statistics (not VM diagnostics). The
 resource tracker just ignores this - well, actually it has a LOG debug for
 the results
 (https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L401),
 and proceeds to calculate the available resources on the compute node.

 The conclusion from that session was that we should add in a configuration
 variable (to ensure backward compatibility) which will enable the resource
 tracker to make use of the virt driver statistics instead of recalculating
 them
 (https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L291).


+1. One of the concerns (which a configuration option addresses) is that
using the virt driver statistics in scheduling makes scheduling much
harder to reproduce and debug, in part because not all virt drivers will
report a VM as consuming the entire RAM allocated to it, in which case a
compute node can mistakenly be oversubscribed.


 Thanks
 Gary

 
 [1]
 
 http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html
 [2]
 
 https://blueprints.launchpad.net/nova/+spec/support-standard-audit-formats
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Gating-Failures] Docs creation is failing

2013-12-11 Thread Wangpan
+1
http://logs.openstack.org/10/61310/2/check/gate-nova-docs/e4ca63f/console.html

2013-12-11



Wangpan



From: Gary Kotton gkot...@vmware.com
Sent: 2013-12-11 15:22
Subject: [openstack-dev] [Gating-Failures] Docs creation is failing
To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org
Cc:

Hi,
An example for this is: 
http://logs.openstack.org/94/59994/10/check/gate-nova-docs/b0f3910/console.html
Any ideas?
Thanks
Gary___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest] Bug triage and monitoring process

2013-12-11 Thread Giulio Fidente

On 12/06/2013 03:18 PM, Adalberto Medeiros wrote:

Hello all!

Yesterday, during the QA meeting, I volunteered to help the team
handle bugs and define a better process to triage them.


which is great!


To accomplish that, I would like to suggest a Bug Triage Day for next
week on Thursday, the 12th (yup, before people leave for the end-of-year
holidays :) ).


see you on #openstack-qa


The second step, after getting a concise and triaged bug list, is to
ensure we have a defined process to constantly revisit the list, to avoid
the issues we have now. I would like to hear suggestions here.

Please, send any thoughts about those steps and any other points you
think we should address for monitoring the bugs. We may as well define
in this thread what is needed for the bug triage day.


a relatively simple thing we could do is to tag the bugs with the 
services they hit, for easier categorization, and set some notifications 
on new tags so that one can revisit the tags on new bugs

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Jaromir Coufal



On 2013/10/12 21:24, Lyle, David wrote:

I would like to nominate Tatiana Mazur to Horizon Core.  Tatiana has been a 
significant code contributor in the last two releases, understands the code 
base well and has been doing a significant number of reviews for the last two 
milestones.

+1


Additionally, I'd like to remove some inactive members of Horizon-core who have 
been inactive since the early Grizzly release at the latest.
Devin Carlen
Jake Dahn
Jesse Andrews
Joe Heck
John Postlethwait
Paul McMillan
Todd Willey
Tres Henry
paul-tashima
sleepsonthefloor

+1 - haven't seen much activity.

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-11 Thread Alexandre Levine
Ok, Russell, if we do go this way, creating a separate project, what would 
be the correct naming for it in OpenStack?
gce-api, gce-compat, gce or some cryptic name like GraCE or GraCEful, 
for example? :)


Alex

10.12.2013 22:14, Russell Bryant wrote:

On 12/10/2013 11:13 AM, Alexandre Levine wrote:

Yes, I understand it perfectly, Cristopher, and cannot agree more. It's
just more work to reach this right now than to use what's present. Still, in
my opinion, even in a mid-run just till the Icehouse release it might be less
work overall.
I'm going to think it over.

So ... if you really do feel that way, I'm not sure it makes a lot of
sense to merge it one way if there's already a plan emerging to re-do
it.  We'd have to go through a painful deprecation cycle of the old code
where we're maintaining it in two places for a while.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-11 Thread Salvatore Orlando
I generally tend to agree that once the distributed router is available,
nobody would probably want to use a centralized one.
Nevertheless, I think it is correct that, at least for the moment, some
advanced services would only work with a centralized router.
There might also be unforeseen scalability/security issues which might
arise from the implementation, so it is worth giving users a chance to
choose what routers they'd like.

In the case of the NSX plugin, this was provided as an extended API
attribute in the Havana release with the aim of making it the default
solution for routing in the future.
One thing that is worth adding is that at the time we explored the
ability to leverage service providers for having a centralized router
provider and a distributed one; we had working code, but then we
reverted to the extended attribute. Perhaps it would be worth exploring
whether this is a feasible solution, and whether it might be even possible
to define flavors which characterise how routers and advanced services
are provided.
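
As a sketch only, such an extended attribute would typically follow
Neutron's standard extension attribute-map convention; the names and
defaults below are assumptions for illustration, not an agreed spec:

    # Hypothetical 'distributed' router attribute as a Neutron extension.
    from neutron.api.v2 import attributes as attr

    DISTRIBUTED = 'distributed'

    EXTENDED_ATTRIBUTES_2_0 = {
        'routers': {
            DISTRIBUTED: {'allow_post': True,
                          'allow_put': False,
                          'convert_to': attr.convert_to_boolean,
                          'default': False,
                          'is_visible': True},
        }
    }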

Salvatore


On 10 December 2013 18:09, Nachi Ueno na...@ntti3.com wrote:

 I'm +1 for 'provider'.

 2013/12/9 Akihiro Motoki mot...@da.jp.nec.com:
  Neutron defines provider attribute and it is/will be used in advanced
 services (LB, FW, VPN).
  Doesn't it fit for a distributed router case? If we can cover all
 services with one concept, it would be nice.
 
  According to this thread, we assume at least two types: edge and
 distributed.
  Though edge and distributed are types of implementation, I think
 they are some kind of provider.
 
  I just would like to add an option. I am open to provider vs
 distributed attributes.
 
  Thanks,
  Akihiro
 
  (2013/12/10 7:01), Vasudevan, Swaminathan (PNB Roseville) wrote:
  Hi Folks,
 
  We are in the process of defining the API for the Neutron Distributed
 Virtual Router, and we have a question.
 
  Just wanted to get the feedback from the community before we implement
 and post for review.
 
  We are planning to use the “distributed” flag for the routers that are
 supposed to be routing traffic locally (both East West and North South).
  This “distributed” flag is already there in the “neutronclient” API,
 but currently only utilized by the “Nicira Plugin”.
  We would like to go ahead and use the same “distributed” flag and add
 an extension to the router table to accommodate the “distributed flag”.
 
  Please let us know your feedback.
 
  Thanks.
 
  Swaminathan Vasudevan
  Systems Software Engineer (TC)
  HP Networking
  Hewlett-Packard
  8000 Foothills Blvd
  M/S 5541
  Roseville, CA - 95747
  tel: 916.785.0937
  fax: 916.785.1815
  email: swaminathan.vasude...@hp.com mailto:
 swaminathan.vasude...@hp.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Third-party testing

2013-12-11 Thread Salvatore Orlando
Hi Yoshihiro,

In my opinion the use of filters on changes is allowed by the smoke-testing
policy we defined.
Notwithstanding that the approach of testing every patch is definitely the
safest, I understand that in some cases the volume of patchsets uploaded to
gerrit might overwhelm the plugin-specific testing system, especially in
cases where not many resources are dedicated to it.

I would suggest testing every patch which has changes in the following
packages:
neutron.db
neutron.api
neutron.extensions
neutron.plugin.your-plugin
neutron.openstack
neutron.agent (if your plugin uses any of the agents)
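
In gerrit-trigger terms, that suggestion would translate to file filters
along these lines (paths illustrative):

   project: plain:neutron
   branch:  plain:master
   file:path:neutron/db/**
   file:path:neutron/api/**
   file:path:neutron/extensions/**
   file:path:neutron/plugins/your-plugin/**
   file:path:neutron/openstack/**
   file:path:neutron/agent/**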

Regards,
Salvatore


On 10 December 2013 06:09, Yoshihiro Kaneko ykaneko0...@gmail.com wrote:

 2013/12/10 Matt Riedemann mrie...@linux.vnet.ibm.com:
 
 
  On Sunday, December 08, 2013 11:32:50 PM, Yoshihiro Kaneko wrote:
 
  Hi Neutron team,
 
  I'm working on building Third-party testing for Neutron Ryu plugin.
  I intend to use Jenkins and gerrit-trigger plugin.
 
   It is required that Third-party testing provides a verify vote for
  all changes to a plugin/driver's code, and all code submissions
  by the jenkins user.
 
 
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements
 
  For this requirements, what kind of filter for the trigger should
  I set?
  It is easy to set a file path of the plugin/driver:
 project: plain:neutron
 branch:  plain:master
 file:path:neutron/plugins/ryu/**
  However, this is not enough because it lacks dependencies.
  It is difficult to judge a patchset which affects the plugin/driver.
  In addition, gerrit trigger has a file path filter, but there is no
   patchset owner filter, so it is not possible to set a trigger for a
   patchset which is submitted by the jenkins user.
 
   Can Third-party testing execute tests for all patchsets, including those
   which may not affect the plugin/driver?
 
  Thanks,
  Kaneko
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  I can't speak for the Neutron team, but in Nova the requirement is to run
  all patches through the vendor plugin third party CI, not just
  vendor-specific patches.

 Thanks for the reply, Matt.
 I believe that it is the right way for smoke testing.

 
  https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan
 
  --
 
  Thanks,
 
  Matt Riedemann
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Recruiting developers for Neutron API tests in Tempest

2013-12-11 Thread Salvatore Orlando
Thanks Miguel!

I will pick up a few tests from the list you put together, and I encourage
every Neutron developer to do the same.
At the end of the day, it's not really different from scripting what you do
every day to test the code you develop.

I am also available to help new contributors get onboard on IRC.

Salvatore


On 10 December 2013 20:10, Miguel Lavalle mig...@mlavalle.com wrote:

 For the Icehouse cycle, the OpenStack community is undertaking a focused
 effort to strengthen the suite of Tempest API tests for Neutron. If you are
 interested in contributing to this effort, please go to
 https://etherpad.openstack.org/p/icehouse-summit-qa-neutron. Please
 scroll down to the API tests gap analysis section and select the topics
 you want to contribute to.

 Helping to develop Tempest tests (particularly API tests) is an excellent
 way for new contributors to learn Neutron. To get you going, we have
 developed a detailed how-to guide that can be found here:
 https://wiki.openstack.org/wiki/Neutron/TempestAPITests

 Should you need further assistance, please contact mlavalle in the
 #openstack-neutron or #openstack-qa channels in irc.freenode.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Third party Neutron plugin testing meeting

2013-12-11 Thread Rossella Sblendido
Thanks Kyle for organizing this. I am attending too!


On Wed, Dec 11, 2013 at 7:50 AM, Irena Berezovsky ire...@mellanox.com wrote:

  Please take guys and girls from Israel into account :).





 *From:* Yongsheng Gong [mailto:gong...@unitedstack.com]
 *Sent:* Wednesday, December 11, 2013 5:20 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] Third party Neutron plugin
 testing meeting



 UTC 22:00+, which is 6 am Beijing time, but if there are guys from Israel
 alike, I can get up one hour earlier, just like what I do for the neutron
 meeting.



 On Wed, Dec 11, 2013 at 11:08 AM, Kyle Mestery mest...@siliconloons.com
 wrote:

 I suspect we'll need another meeting next week, I propose we have it
 at a time friendly to those in Asian timezones. Yong and Akihiro, can
 you guys propose a timeslot which works for you guys and I'll see
 about setting the followup meeting up.

 Thanks,
 Kyle


 On Dec 10, 2013, at 8:14 PM, Yongsheng Gong gong...@unitedstack.com
 wrote:

  It is 1 am Beijing time, so I am afraid I will not join.
 
 
  On Wed, Dec 11, 2013 at 10:10 AM, Akihiro Motoki amot...@gmail.com
 wrote:
  Thanks Kyle for coordinating the meeting.
 
  The time is midnight for me, but it fits everyone except me. I'll try the
 time but am not sure. Anyway, I will follow the log.
 
  On Wednesday, December 11, 2013, Shiv Haris sha...@brocade.com wrote:
 
  +1
 
 
 
  Will join via IRC or voice call
 
 
 
 
 
 
 
  From: Gary Duan [mailto:gd...@varmour.com]
  Sent: Tuesday, December 10, 2013 10:59 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin
 testing meeting
 
 
 
  I will be joining IRC too.
 
 
 
  Thanks,
 
  Gary
 
 
 
  On Tue, Dec 10, 2013 at 10:33 AM, Edgar Magana emag...@plumgrid.com
 wrote:
 
  Also joining!
  Looking forward to hearing your ideas folks!
 
  Edgar
 
 
  On 12/10/13 10:16 AM, Nachi Ueno na...@ntti3.com wrote:
 
  +1 ! I'll join.
   I'm also working on investigating how to use the openstack gating system.
   (This document is still a draft version.)
  
  https://docs.google.com/presentation/d/1WJInaSt_H2kVkjnhtPmiATP1F-0BVbuk1eefQalL5Q0/edit#slide=id.p
  
  2013/12/10 Ivar Lazzaro i...@embrane.com:
   +1 for 1700UTC Thursday on IRC
  
   -Original Message-
   From: Kyle Mestery [mailto:mest...@siliconloons.com]
   Sent: Tuesday, December 10, 2013 9:21 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin
  testing meeting
  
   On Dec 10, 2013, at 10:45 AM, Veiga, Anthony
  anthony_ve...@cable.comcast.com wrote:
   -Original Message-
   From: Kyle Mestery mest...@siliconloons.com
   Reply-To: OpenStack Development Mailing List (not for usage
  questions)
   openstack-dev@lists.openstack.org
   Date: Tuesday, December 10, 2013 10:48
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org
   Subject: [openstack-dev] [neutron] Third party Neutron plugin testing
   meeting
  
   Last week I took an action item to organize a meeting for everyone
   who is doing third-party testing in Neutron for plugins, whether
 this
   is vendor or Open Source based. The idea is to share ideas around
   setups and any issues people hit. I'd like to set this meeting up
 for
   this week, Thursday at 1700UTC. I would also like to propose we make
   this a dial in meeting using the Infrastructure Conferencing bridges
   [1]. If this time works, I'll set something up and reply to this
   thread with the dial in information.
  
   +1 for the meeting time.  Any particular reason for voice over IRC?
  
   We kind of decided that doing this over voice initially would be
  expedient, but I am fine with moving to IRC. If I don't hear
 objections,
   let's assume we will meet at 1700UTC Thursday on #openstack-meeting-alt.
  
  
  
   Also, I've started a etherpad page [2] with information. It would be
   good for people to add information to this etherpad as well. I've
   coupled this pad with information around multi-node gate testing for
   Neutron as well, as I suspect most of the third-party testing will
   require multiple nodes as well.
  
   I'll start filling out our setup.  I have some questions around
   Third-Party Testing in particular, and look forward to this
 discussion.
  
   Awesome, thanks Anthony!
  
  
   Thanks!
   Kyle
  
   [1]
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] Performance Regression in Neutron/Havana compared to Quantum/Grizzly

2013-12-11 Thread Nathani, Sreedhar (APS)
Hello Peter,

Here are the tests I have done. I already have 240 instances active across all 
the 16 compute nodes. To make the tests and data collection easy, 
I have done the tests on a single compute node.
 
First Test - 
*   240 instances already active,  16 instances on the compute node where I 
am going to do the tests
*   deploy 10 instances concurrently using nova boot command with 
num-instances option in single compute node
*   All the instances could get an IP during the instance boot time. 
 
-   Instances are created at  2013-12-10 13:41:01
-   From the compute host, DHCP requests are sent from 13:41:20 but those 
are not reaching the DHCP server
Reply from the DHCP server was received at 13:43:08 (a delay of 108 seconds)
-   DHCP agent updated the host file from 13:41:06 till 13:42:54. Dnsmasq 
process got SIGHUP message every time the hosts file is updated
-   In compute node tap devices are created between 13:41:08 and 13:41:18
Security group rules are received between 13:41:45 and 13:42:56
IP table rules were updated between 13:41:50 and 13:43:04

Second Test - 
*   Deleted the newly created 10 instances.
*   240 instances already active,  16 instances on the compute node where I 
am going to do the tests
*   Deploy 30 instances concurrently using nova boot command with 
num-instances option in single compute node
*   None of the instances could get an IP during the instance boot.
 
 
-   Instances are created at  2013-12-10 14:13:50
 
-   From the compute host, DHCP Requests are sent from  14:14:14 but those 
are not reaching the DHCP Server
(don't see any DHCP requests are reaching the DHCP server 
from the tcpdump on the network node)
 
-   Reply from the DHCP server only got at 14:22:10 ( A delay of 636 
seconds)
 
-   From the strace of the DHCP agent process, it first updated the hosts 
file at 14:14:05; after this there is a gap of close to 60 sec before 
updating the next instance's address, repeating till the 7th 
instance, which was updated at 14:19:50.  The 30th instance was updated at 14:20:00
 
-   During the 30 instance creation, the dnsmasq process got SIGHUP after the 
host file was updated, but at 14:19:52 it got SIGKILL and a new process
   was created - 14:19:52.881088 +++ killed by 
SIGKILL +++
 
-   In the compute node, tap devices are created between 14:14:03 and 
14:14:38
    From the strace of the L2 agent log, I can see security group related 
messages are received from 14:14:27 till 14:20:02
    During this period the L2 agent log shows many RPC timeout messages 
like the one below
    Timeout: Timeout while waiting on RPC response - topic: q-plugin, RPC 
method: security_group_rules_for_devices info: unknown

    Because the security group related messages are received by this compute 
node with a delay, it takes a very long time to update the iptable rules
    (it can be seen updating till 14:20), which causes the DHCP 
packets to be dropped at the compute node itself without reaching the DHCP server
 
 
Here is my understanding based on the tests. 
Instances are created fast, and so are their TAP devices. But there is a considerable 
delay in updating the network port details in the dnsmasq host file and in sending
the security group related info to the compute nodes, due to which the compute nodes 
are not able to update the iptable rules fast enough, which is causing
instances to not get an IP.

I have collected the tcpdump from the controller node and compute nodes, plus 
strace of the dhcp agent, dnsmasq, and OVS L2 agents, in case you are 
interested in looking at it.
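
For reference, data of that kind can be captured with commands along these
lines (illustrative only; interfaces and PIDs are environment-specific):

    # On the network node: capture DHCP traffic reaching the server
    tcpdump -i any -nn -w dhcp-server.pcap port 67 or port 68

    # Timestamp dnsmasq hosts-file reloads (SIGHUPs) as they happen
    strace -f -tt -o dnsmasq.strace -p <dnsmasq-pid>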

Thanks & Regards,
Sreedhar Nathani


-Original Message-
From: Peter Feiner [mailto:pe...@gridcentric.ca] 
Sent: Tuesday, December 10, 2013 10:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Performance Regression in Neutron/Havana compared 
to Quantum/Grizzly

On Tue, Dec 10, 2013 at 7:48 AM, Nathani, Sreedhar (APS) 
sreedhar.nath...@hp.com wrote:
 My setup has 17 L2 agents (16 compute nodes, one Network node). 
 Setting the minimize_polling helped to reduce the CPU utilization by the L2 
 agents but it did not help in instances getting the IP during first boot.

 With minimize_polling enabled, fewer instances could get 
 an IP than without the minimize_polling fix.

 Once we reach a certain number of ports (in my case 120 ports), 
 during subsequent concurrent instance deployment (30 instances), updating the 
 port details in the dnsmasq host file takes a long time, which causes the delay 
 for instances getting an IP address.

To figure out what the next problem is, I recommend that you determine 
precisely what port details in the dnsmasq host [are] taking [a] long time to 
update. Is the DHCPDISCOVER packet from the VM arriving before the dnsmasq 
process's hostsfile is updated and dnsmasq is SIGHUP'd? Is the VM sending the 

Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate as first-class citizen - SSL Termination (Revised)

2013-12-11 Thread Vijay Venkatachalam
Thanks for the detailed write-up Evg. 

What is the use of SSLPolicy.PassInfo?
Managing ciphers individually is a pain. Can we introduce an entity called 
cipher groups? This enables providing built-in cipher groups (HIGH, LOW, DES, 
etc.) as well. At the least we can provide this in the UI+CLI layer.
Also, it will be good to have a built-in DEFAULT SSL policy, which contains a 
default set of SSL protocols, ciphers etc. and is to be used when an sslpolicy 
is not provided.
Is there a need for binding multiple certificates to a VIP, given that SNI is not 
modeled now? We could have vip_id as the key in vip_ssl_certificate_assoc.
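
To make the cipher groups idea concrete, a hypothetical sketch of how the
UI+CLI layer could resolve a named group into a cipher list (group names
and members are invented for illustration):

    # Hypothetical built-in cipher groups, resolved client-side.
    CIPHER_GROUPS = {
        'DEFAULT': ['AES256-SHA', 'AES128-SHA'],
        'HIGH': ['AES256-SHA'],
        'LOW': ['DES-CBC-SHA'],
    }

    def resolve_ciphers(group_or_list):
        # Accept either a named group or an explicit cipher list.
        if isinstance(group_or_list, basestring):
            return CIPHER_GROUPS[group_or_list]
        return group_or_list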

Also, it will be good to have the following nomenclature corrected:
TrustedKey: This entity refers to a CA certificate, so the use of the word key 
can be avoided. My suggestion is to call it CA or cacert.
SSLCertificate.PublicKey: This property contains a server certificate (actually 
a PublicKey + more info). My suggestion is to call the property certificate.

Thanks,
Vijay V.


 -Original Message-
 From: Evgeny Fedoruk [mailto:evge...@radware.com]
 Sent: Sunday, December 08, 2013 10:24 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate
 as first-class citizen - SSL Termination (Revised)
 
 Hi All.
 The wiki page for LBaaS SSL support was updated.
 Please see and comment
 https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL#Vip_SSL_Association
 
 Thank you!
 Evg
 
 -Original Message-
 From: Samuel Bercovici
 Sent: Thursday, December 05, 2013 9:14 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Evgeny Fedoruk; Samuel Bercovici
 Subject: RE: [openstack-dev] [Neutron][LBaaS] Vote required for certificate
 as first-class citizen - SSL Termination (Revised)
 
 Correct.
 
 Evgeny will update the WIKI accordingly.
 We will add a flag in the SSL Certificate to allow specifying that the 
 private key
 can't be persisted. And in this case, the private key could be passed when
 associating the cert_id with the VIP.
 
 Regards,
   -Sam.
 
 -Original Message-
 From: Nachi Ueno [mailto:na...@ntti3.com]
 Sent: Thursday, December 05, 2013 8:21 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for certificate
 as first-class citizen - SSL Termination (Revised)
 
 Hi folks
 
 OK, It looks like we get consensus on
 separate resource way.
 
 Best
 Nachi
 
 2013/12/5 Eugene Nikanorov enikano...@mirantis.com:
  Hi,
 
  My vote is for separate resource (e.g. 'New Model'). Also I'd like to
   see certificate handling as a separate extension/db mixin (in fact, a
   persistence
   driver) similar to the service_type extension.
 
  Thanks,
  Eugene.
 
 
  On Thu, Dec 5, 2013 at 2:13 PM, Stephen Gran
  stephen.g...@theguardian.com
  wrote:
 
  Hi,
 
  Right, sorry, I see that wasn't clear - I blame lack of coffee :)
 
  I would prefer the Revised New Model.  I much prefer the ability to
  restore a loadbalancer from config in the event of node failure, and
  the ability to do basic sharing of certificates between VIPs.
 
  I think that a longer term plan may involve putting the certificates
  in a smarter system if we decide we want to do things like evaluate
  trust models, but just storing them locally for now will do most of
  what I think people want to do with SSL termination.
 
  Cheers,
 
 
  On 05/12/13 09:57, Samuel Bercovici wrote:
 
  Hi Stephen,
 
   To make sure I understand, which model is fine: Basic/Simple or New?
 
  Thanks,
  -Sam.
 
 
  -Original Message-
  From: Stephen Gran [mailto:stephen.g...@theguardian.com]
  Sent: Thursday, December 05, 2013 8:22 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Neutron][LBaaS] Vote required for
  certificate as first-class citizen - SSL Termination (Revised)
 
  Hi,
 
  I would be happy with this model.  Yes, longer term it might be nice
  to have an independent certificate store so that when you need to be
  able to validate ssl you can, but this is a good intermediate step.
 
  Cheers,
 
  On 02/12/13 09:16, Vijay Venkatachalam wrote:
 
 
  LBaaS enthusiasts: Your vote on the revised model for SSL Termination?
 
  Here is a comparison between the original and revised model for SSL
  Termination:
 
  ***
  Original Basic Model that was proposed in summit
  ***
  * Certificate parameters introduced as part of VIP resource.
  * This model is for basic config and there will be a model
  introduced in future for detailed use case.
  * Each certificate is created for one and only one VIP.
  * Certificate params not stored in DB and sent directly to loadbalancer.
  * In case of failures, there is no way to restart the operation
  from details stored in DB.
  ***
  Revised New Model
  ***
  * Certificate parameters will be part of an independent certificate
  resource. A 

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Jiri Tomasek

+1 for Tatiana Mazur to Horizon Core



On 12/10/2013 09:24 PM, Lyle, David wrote:

I would like to nominate Tatiana Mazur to Horizon Core.  Tatiana has been a 
significant code contributor in the last two releases, understands the code 
base well and has been doing a significant number of reviews for the last two 
milestones.


Additionally, I'd like to remove some inactive members of Horizon-core who have 
been inactive since the early Grizzly release at the latest.
Devin Carlen
Jake Dahn
Jesse Andrews
Joe Heck
John Postlethwait
Paul McMillan
Todd Willey
Tres Henry
paul-tashima
sleepsonthefloor


Please respond with a +1/-1 by this Friday.

-David Lyle




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-11 Thread Jaromir Coufal

On 2013/10/12 23:09, Robert Collins wrote:

On 11 December 2013 05:42, Jaromir Coufal jcou...@redhat.com wrote:

On 2013/09/12 23:38, Tzu-Mainn Chen wrote:

The disagreement comes from whether we need manual node assignment or not.
I would argue that we
need to step back and take a look at the real use case: heterogeneous
nodes.  If there are literally
no characteristics that differentiate nodes A and B, then why do we care
which gets used for what?  Why
do we need to manually assign one?



Ideally, we don't. But with this approach we would take away the possibility
for the user to change something or decide something.


So, I think this is where the confusion is. Using the nova scheduler
doesn't prevent change or control. It just ensures the change and
control happen in the right place: the Nova scheduler has had years of
work, of features and facilities being added to support HPC, HA and
other such use cases. It should have everything we need [1], without
going down to manual placement. For clarity: manual placement is when
any of the user, Tuskar, or Heat query Ironic, select a node, and then
use a scheduler hint to bypass the scheduler.
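
One illustrative form such a bypass takes, using Nova's host-forcing
syntax (image, flavor and host names here are hypothetical):

    # Pin an instance to a specific host, skipping scheduler placement
    nova boot --image overcloud-compute --flavor baremetal \
        --availability-zone nova:node-17 compute-node-0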

This is very well written. I am all for things going to right places.


The 'easiest' way is to support bigger companies with huge deployments,
tailored infrastructure, everything connected properly.

But there are tons of companies/users who are running on old heterogeneous
hardware. Very likely even more than the number of companies having already
mentioned large deployments. And giving them only the way of 'setting up
rules' in order to get the service on the node - this type of user is not
gonna use our deployment system.


Thats speculation. We don't know if they will or will not because we
haven't given them a working system to test.
Some part of that is speculation, some part of that is feedback from 
people who are doing deployments (of course it's just a very limited 
audience). Anyway, it is not just pure theory.



Lets break the concern into two halves:
A) Users who could have their needs met, but won't use TripleO because
meeting their needs in this way is too hard/complex/painful.

B) Users who have a need we cannot meet with the current approach.

For category B users, their needs might be specific HA things - like
the oft discussed failure domains angle, where we need to split up HA
clusters across power bars, aircon, switches etc. Clearly long term we
want to support them, and the undercloud Nova scheduler is entirely
capable of being informed about this, and we can evolve to a holistic
statement over time. Lets get a concrete list of the cases we can
think of today that won't be well supported initially, and we can
figure out where to do the work to support them properly.
My question is - can't we help them now? To enable users to use our app 
even when we don't have enough smartness to help them the 'auto' way?



For category A users, I think that we should get concrete examples,
and evolve our design (architecture and UX) to make meeting those
needs pleasant.
+1... I tried to pull some operators into this discussion thread, will 
try to get more.



What we shouldn't do is plan complex work without concrete examples
that people actually need. Jay's example of some shiny new compute
servers with special parts that need to be carved out was a great one
- we can put that in category A, and figure out if it's easy enough,
or obvious enough - and think about whether we document it or make it
a guided workflow or $whatever.


Somebody might argue - why do we care? If user doesn't like TripleO
paradigm, he shouldn't use the UI and should use another tool. But the UI is
not only about TripleO. Yes, it is underlying concept, but we are working on
future *official* OpenStack deployment tool. We should care to enable people
to deploy OpenStack - large/small scale, homo/heterogeneous hardware,
typical or a bit more specific use-cases.


The difficulty I'm having is that the discussion seems to assume that
'heterogeneous implies manual', but I don't agree that that
implication is necessary!
No, I don't agree with this either. Heterogeneous hardware can be very 
well managed automatically as well as homogeneous (classes, node profiles).



As an underlying paradigm of how to install cloud - awesome idea, awesome
concept, it works. But user doesn't care about how it is being deployed for
him. He cares about getting what he wants/needs. And we shouldn't go that
far that we violently force him to treat his infrastructure as cloud. I
believe that possibility to change/control - if needed - is very important
and we should care.


I propose that we make concrete use cases: 'Fred cannot use TripleO
without manual assignment because XYZ'. Then we can assess how
important XYZ is to our early adopters and go from there.
+1, yes. I will try to bug more relevant people, who could contribute in 
this area.



And what is key for us is to *enable* users - not to prevent them from 

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-11 Thread Jaromir Coufal



On 2013/10/12 19:39, Tzu-Mainn Chen wrote:


Ideally, we don't. But with this approach we would take out the
possibility to change something or decide something from the user.

The 'easiest' way is to support bigger companies with huge deployments,
tailored infrastructure, everything connected properly.

But there are tons of companies/users who are running on old
heterogeneous hardware. Very likely even more than the number of
companies having already mentioned large deployments. And giving them
only the way of 'setting up rules' in order to get the service on the
node - this type of user is not gonna use our deployment system.

Somebody might argue - why do we care? If user doesn't like TripleO
paradigm, he shouldn't use the UI and should use another tool. But the
UI is not only about TripleO. Yes, it is underlying concept, but we are
working on future *official* OpenStack deployment tool. We should care
to enable people to deploy OpenStack - large/small scale,
homo/heterogeneous hardware, typical or a bit more specific use-cases.


I think this is a very important clarification, and I'm glad you made it.  It 
sounds
like manual assignment is actually a sub-requirement, and the feature you're 
arguing
for is: supporting non-TripleO deployments.
Mostly, but not only. The other argument is keeping control of the stuff I 
am doing. Note that the undercloud user is different from the overcloud user.



That might be a worthy goal, but I think it's a distraction for the Icehouse 
timeframe.
Each new deployment strategy requires not only a new UI, but different 
deployment
architectures that could have very little in common with each other.  Designing 
them all
to work in the same space is a recipe for disaster, a convoluted gnarl of code 
that
doesn't do any one thing particularly well.  To use an analogy: there's a 
reason why
no one makes a flying boat car.

I'm going to strongly advocate that for Icehouse, we focus exclusively on large 
scale
TripleO deployments, working to make that UI and architecture as sturdy as we 
can.  Future
deployment strategies should be discussed in the future, and if they're not 
TripleO based,
they should be discussed with the proper OpenStack group.
One concern here is - it is quite likely that we get people excited 
about this approach - it will be a new boom - 'wow', there is automagic 
doing everything for me. But then the question would be reality - how 
many of those excited users will actually use TripleO for their real 
deployments (I mean in the early stages)? Would it be only a couple of 
them (because of covered use cases, concerns about maturity, lack of 
control)? Can we assure them that if anything goes wrong, they 
have control over it?


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a 
CLI for managing the deployment providing the same fundamental features 
as UI. With the planned architecture changes (making tuskar-api thinner 
and getting rid of proxying to other services), there's not an obvious 
way to achieve that. We need to figure this out. I present a few options 
and look forward to feedback.


Previously, we had planned Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

This meant that the integration logic of how to use heat, ironic and 
other services to manage an OpenStack deployment lay within 
*tuskar-api*. This gave us an easy way towards having a CLI - just build 
tuskarclient to wrap the abilities of tuskar-api.



Nowadays we talk about using heat and ironic (and neutron? nova? 
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case. 
Dashboard is quite a thin wrapper on top of python-...clients, which 
means there's a natural parity between what the Dashboard and the CLIs 
can do.


We're not wrapping the APIs directly (if wrapping them directly would be 
sufficient, we could just use Dashboard and not build Tuskar API at 
all). We're building a separate UI because we need *additional logic* on 
top of the APIs. E.g. instead of directly working with Heat templates 
and Heat stacks to deploy overcloud, user will get to pick how many 
control/compute/etc. nodes he wants to have, and we'll take care of Heat 
things behind the scenes. This makes Tuskar UI significantly thicker 
than Dashboard is, and the natural parity between CLI and UI vanishes. 
By having this logic in UI, we're effectively preventing its use from 
CLI. (If i were bold i'd also think about integrating Tuskar with other 
software which would be prevented too if we keep the business logic in 
UI, but i'm not absolutely positive about use cases here).


Now this raises a question - how do we get CLI reasonably on par with 
abilities of the UI? (Or am i wrong that Anna the infrastructure 
administrator would want that?)


Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there. 
Make it consume other python-*clients. (This is an unusual approach 
though, i'm not aware of any python-*client that would consume and 
integrate other python-*clients.)
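
To illustrate option 1, a rough sketch of what such a composed client
could look like - all names below are invented:

    # Hypothetical tuskarclient-side business logic wrapping heatclient.
    from heatclient.v1 import client as heat_client

    class OvercloudManager(object):
        def __init__(self, heat_endpoint, token):
            self.heat = heat_client.Client(heat_endpoint, token=token)

        def scale(self, stack_id, template, compute_count):
            # The "I want N compute nodes" logic lives here, not in the
            # UI: pass the count down as a parameter of a stack update.
            self.heat.stacks.update(
                stack_id, template=template,
                parameters={'compute_count': compute_count})

The same method could then back both the CLI and the UI, restoring parity.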


2) Make a thicker tuskar-api and put the business logic there. (This is 
the original approach with consuming other services from tuskar-api. The 
feedback on this approach was mostly negative though.)


3) Keep tuskar-api and python-tuskarclient thin, make another library 
sitting between Tuskar UI and all python-***clients. This new project 
would contain the logic of using undercloud services to provide the 
tuskar experience; it would expose python bindings for Tuskar UI and 
contain a CLI. (Think of it like traditional python-*client but instead 
of consuming a REST API, it would consume other python-*clients. I 
wonder if this is overengineering. We might end up with too many 
projects doing too few things? :) )


4) Keep python-tuskarclient thin, but build a separate CLI app that 
would provide same integration features as Tuskar UI does. (This would 
lead to code duplication. Depends on the actual amount of logic to 
duplicate if this is bearable or not.)



Which of the options you see as best? Did i miss some better option? Am 
i just being crazy and trying to solve a non-issue? Please tell me :)


Please don't consider the time aspect of this, focus rather on what's 
the right approach, where we want to get eventually. (We might want to 
keep a thick Tuskar UI for Icehouse not to set the hell loose, there 
will be enough refactoring already.)



Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Third-party testing

2013-12-11 Thread Yoshihiro Kaneko
Hi Salvatore,

Thank you for your reply.

2013/12/11 Salvatore Orlando sorla...@nicira.com:
 Hi Yoshihiro,

  In my opinion the use of filters on changes is allowed by the smoke-testing
  policy we defined.
  Notwithstanding that the approach of testing every patch is definitely the
  safest, I understand that in some cases the volume of patchsets uploaded to
  gerrit might overwhelm the plugin-specific testing system, especially in
  cases where not many resources are dedicated to it.

Ah, I have not considered a heavy load.


  I would suggest testing every patch which has changes in the following
 packages:
 neutron.db
 neutron.api
 neutron.extensions
 neutron.plugin.your-plugin
 neutron.openstack
 neutron.agent (if your plugin uses any of the agents)

Thank you for the details. It helps me very much.

Many thanks!


 Regards,
 Salvatore


 On 10 December 2013 06:09, Yoshihiro Kaneko ykaneko0...@gmail.com wrote:

 2013/12/10 Matt Riedemann mrie...@linux.vnet.ibm.com:
 
 
  On Sunday, December 08, 2013 11:32:50 PM, Yoshihiro Kaneko wrote:
 
  Hi Neutron team,
 
  I'm working on building Third-party testing for Neutron Ryu plugin.
  I intend to use Jenkins and gerrit-trigger plugin.
 
   It is required that Third-party testing provides a verify vote for
  all changes to a plugin/driver's code, and all code submissions
  by the jenkins user.
 
 
  https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements
 
  For this requirements, what kind of filter for the trigger should
  I set?
  It is easy to set a file path of the plugin/driver:
 project: plain:neutron
 branch:  plain:master
 file:path:neutron/plugins/ryu/**
  However, this is not enough because it lacks dependencies.
  It is difficult to judge a patchset which affects the plugin/driver.
  In addition, gerrit trigger has a file path filter, but there is no
   patchset owner filter, so it is not possible to set a trigger for a
   patchset which is submitted by the jenkins user.
 
   Can Third-party testing execute tests for all patchsets, including those
   which may not affect the plugin/driver?
 
  Thanks,
  Kaneko
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  I can't speak for the Neutron team, but in Nova the requirement is to
  run
  all patches through the vendor plugin third party CI, not just
  vendor-specific patches.

 Thanks for the reply, Matt.
 I believe that it is the right way for smoke testing.

 
  https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan
 
  --
 
  Thanks,
 
  Matt Riedemann
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sphinx 1.2 incompatibility (failing -docs jobs)

2013-12-11 Thread Sean Dague
On 12/10/2013 05:57 PM, James E. Blair wrote:
 Hi,
 
 Sphinx 1.2 was just released and it is incompatible with distutils in
 python 2.7.  See these links for more info:
 
   
 https://bitbucket.org/birkenfeld/sphinx/pull-request/193/builddoc-shouldnt-fail-on-unicode-paths/diff
   http://bugs.python.org/issue19570
 
 This has caused all -docs jobs to fail.  This morning we merged a change
 to openstack/requirements to pin Sphinx to version <1.2:
 
   https://review.openstack.org/#/c/61164/
 
 Sergey Lukjanov, Clark Boylan, and Jeremy Stanley finished up the
 automatic requirements proposal job (Thanks!), and so now updates have
 been automatically proposed to all projects that subscribe:
 
   https://review.openstack.org/#/q/topic:openstack/requirements,n,z
 
 Once those changes merge, -docs jobs for affected projects should start
 working again.
 
 Note that requirements updates for stable branches are proceeding
 separately; you can track their progress here:
 
   
 https://review.openstack.org/#/q/I0487b4eca8f2755b882689289e3cdf429729b1fb,n,z

Actually, it looks like that requirements change isn't good enough
because 1.2b3 < 1.2 (seriously, I hate python package management some
times).
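
A quick check with setuptools' own version parsing confirms the ordering:

    >>> from pkg_resources import parse_version
    >>> parse_version('1.2b3') < parse_version('1.2')
    True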

It seemed to work correctly in the mirror (are we purging betas from the
mirror?), but not locally.

New change is here -
https://review.openstack.org/#/q/I719536a0754bb532b800c52082aeb4e2033e174f,n,z

Though that's currently getting broken by a network issue in rax -
https://bugs.launchpad.net/openstack-ci/+bug/1259911 (possibly fixed
with https://review.openstack.org/61399)

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Tempest blueprints status update and rationale, input demanded

2013-12-11 Thread Giulio Fidente

hi,

I'm attempting to rationalize the status of tempest blueprints. I 
need your help, so I organized questions into a few open points.



* (1) I'm looking for input here on the actual status of the following 
blueprints, which are already approved or in a good progress state:


https://blueprints.launchpad.net/tempest/+spec/add-basic-heat-tests

seems done, shall we close it? (steve baker)

https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors

seems done, shall we close it? (david kranz)

https://blueprints.launchpad.net/tempest/+spec/config-cleanup
https://blueprints.launchpad.net/tempest/+spec/config-verification

seems done, close? (mtreinish)

https://blueprints.launchpad.net/tempest/+spec/fix-gate-tempest-devstack-vm-quantum-full

old but still valid for icehouse, what is the real status here? (mlavalle)

https://blueprints.launchpad.net/tempest/+spec/client-lib-stability

is slow progress appropriate here? (david kranz)

https://blueprints.launchpad.net/tempest/+spec/quantum-basic-api

this was approved but it looks to me quite hard to implement tests for 
the different network topologies, is it even possible given our infra? 
(mlavalle)


https://blueprints.launchpad.net/tempest/+spec/crash-scenario-generator

needs approval, is there any agreement upon this being implemented or 
shall we drop this? (all core and contributors)


https://blueprints.launchpad.net/tempest/+spec/missing-compute-api-extensions

identifying missing tests isn't a blueprint per se, I think, so I'd close 
this unless someone volunteers the work to at least identify the wanted tests



* (2) The following are instead blueprints open for discussion which I 
think should either be approved or closed; again, input is more than 
welcome, as well as assignees if you care about it:


https://blueprints.launchpad.net/tempest/+spec/refactor-rest-client

https://blueprints.launchpad.net/tempest/+spec/tempest-multiple-images

https://blueprints.launchpad.net/tempest/+spec/general-swift-client

https://blueprints.launchpad.net/tempest/+spec/input-scenarios-for-scenario

https://blueprints.launchpad.net/tempest/+spec/neutron-advanced-scenarios

https://blueprints.launchpad.net/tempest/+spec/stress-api-tracking

https://blueprints.launchpad.net/tempest/+spec/test-developer-documentation


* (3) Finally, as a general rule of thumb for the many remaining 
blueprints which only ask for new tests, I think we should keep and 
approve blueprints asking for basic tests around new components but 
*not* (as in close) blueprints asking for additional tests around 
existing components. Does that sound reasonable?

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Ladislav Smola

+1 for Tatiana Mazur to Horizon Core

not sure if only cores should vote, but Tatiana has been very 
active, so it will be well deserved. :-)



On 12/11/2013 01:09 PM, Jiri Tomasek wrote:

+1 for Tatiana Mazur to Horizon Core



On 12/10/2013 09:24 PM, Lyle, David wrote:
I would like to nominate Tatiana Mazur to Horizon Core.  Tatiana has 
been a significant code contributor in the last two releases, 
understands the code base well and has been doing a significant 
number of reviews for the last two milestones.



Additionally, I'd like to remove some inactive members of 
Horizon-core who have been inactive since the early Grizzly release 
at the latest.

Devin Carlen
Jake Dahn
Jesse Andrews
Joe Heck
John Postlethwait
Paul McMillan
Todd Willey
Tres Henry
paul-tashima
sleepsonthefloor


Please respond with a +1/-1 by this Friday.

-David Lyle




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský
A few clarifications added; next time I'll need to triple-read after 
myself :)


On 11.12.2013 13:33, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

This meant that the integration logic of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap the abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all).


Sorry, this should have said not build Tuskar *UI* at all.


We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If i were bold i'd also think about integrating Tuskar with other
software which would be prevented too if we keep the business logic in
UI, but i'm not absolutely positive about use cases here).

Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am i wrong that Anna the infrastructure
administrator would want that?)

Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there.
Make it consume other python-*clients. (This is an unusual approach
though, i'm not aware of any python-*client that would consume and
integrate other python-*clients.)

2) Make a thicker tuskar-api and put the business logic there. (This is
the original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

3) Keep tuskar-api and python-tuskarclient thin, make another library
sitting between Tuskar UI and all python-***clients. This new project
would contain the logic of using undercloud services to provide the
tuskar experience; it would expose python bindings for Tuskar UI and


expose python bindings for Tuskar UI is double-meaning - to be more 
precise: expose python bindings for use within Tuskar UI.



contain a CLI. (Think of it like traditional python-*client but instead
of consuming a REST API, it would consume other python-*clients. I
wonder if this is overengineering. We might end up with too many
projects doing too few things? :) )

4) Keep python-tuskarclient thin, but build a separate CLI app that
would provide the same integration features as Tuskar UI does. (This would
lead to code duplication. Whether this is bearable depends on the actual
amount of logic to duplicate.)
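
To make option 1 a bit more concrete, here's a rough sketch. Names like
OvercloudManager and plan_to_template are made up for illustration; the
heatclient usage follows python-heatclient's v1 API, but treat the
details as illustrative rather than a real implementation:

    # Hypothetical sketch of a thicker python-tuskarclient that consumes
    # python-heatclient (option 1). Not real code.
    from heatclient.client import Client as HeatClient


    def plan_to_template(control, compute):
        # Stub: a real implementation would render the overcloud Heat
        # template from the chosen node counts.
        return {'heat_template_version': '2013-05-23',
                'parameters': {'control_count': control,
                               'compute_count': compute}}


    class OvercloudManager(object):
        """Business logic that both Tuskar UI and a CLI could share."""

        def __init__(self, heat_endpoint, auth_token):
            # python-heatclient's v1 client takes an endpoint and a token.
            self.heat = HeatClient('1', heat_endpoint, token=auth_token)

        def deploy(self, control_count, compute_count):
            # The "Heat things behind the scenes": turn node counts into
            # a template and create the overcloud stack.
            template = plan_to_template(control_count, compute_count)
            return self.heat.stacks.create(stack_name='overcloud',
                                           template=template)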


Which of the options do you see as best? Did i miss some better option? Am
i just being crazy and trying to solve a non-issue? Please tell me :)

Please don't consider the time aspect of this; focus rather on what's
the right approach and where we want to get eventually. (We might want to
keep a thick Tuskar UI for Icehouse so as not to let all hell break loose;
there will be enough refactoring already.)


Thanks

Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Gating-Failures] Docs creation is failing

2013-12-11 Thread Sergey Lukjanov
Hey,

Doc jobs fail because Sphinx 1.2 is used; all projects are now applying the 1.2
rule for Sphinx to fix it.

Here is a thread with additional info:
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021863.html

Thanks.


On Wed, Dec 11, 2013 at 1:30 PM, Wangpan hzwang...@corp.netease.com wrote:

  +1

 http://logs.openstack.org/10/61310/2/check/gate-nova-docs/e4ca63f/console.html

 2013-12-11
  --
  Wangpan
  --
  From: Gary Kotton gkot...@vmware.com
 Sent: 2013-12-11 15:22
 Subject: [openstack-dev] [Gating-Failures] Docs creation is failing
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc:

  Hi,
 An example for this is:
 http://logs.openstack.org/94/59994/10/check/gate-nova-docs/b0f3910/console.html
 Any ideas?
 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Gating-Failures] Docs creation is failing

2013-12-11 Thread Florent Flament
Hi, 

The 1.2 rule for Sphinx doesn't help, as pointed out by Sean Dague here: 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021921.html  

The 1.1.99 rule he proposes works for me (on python-swiftclient): 
https://review.openstack.org/#/c/61378/ 
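
For reference, that rule is presumably a requirements pin along the 
lines of: 

    Sphinx>=1.1.2,<1.2

(i.e. effectively capping at 1.1.99; the exact form is in the review 
above). 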

Regards, 
Florent Flament 

- Original Message -

From: Sergey Lukjanov slukja...@mirantis.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Wednesday, December 11, 2013 2:45:02 PM 
Subject: Re: [openstack-dev] [Gating-Failures] Docs creation is failing 

Hey, 

Doc jobs fail because Sphinx 1.2 is used; all projects are now applying the 1.2 
rule for Sphinx to fix it. 

Here is a thread with additional info: 
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021863.html 

Thanks. 


On Wed, Dec 11, 2013 at 1:30 PM, Wangpan  hzwang...@corp.netease.com  wrote: 



+1 
http://logs.openstack.org/10/61310/2/check/gate-nova-docs/e4ca63f/console.html 
2013-12-11 

Wangpan 

From: Gary Kotton gkot...@vmware.com 
Sent: 2013-12-11 15:22 
Subject: [openstack-dev] [Gating-Failures] Docs creation is failing 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Cc: 
Hi, 
An example for this is: 
http://logs.openstack.org/94/59994/10/check/gate-nova-docs/b0f3910/console.html 
Any ideas? 
Thanks 
Gary 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 







-- 
Sincerely yours, 
Sergey Lukjanov 
Savanna Technical Lead 
Mirantis Inc. 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-11 Thread Andrew Laski

On 12/10/13 at 11:09am, Flavio Percoco wrote:

On 09/12/13 17:37 -0500, Russell Bryant wrote:

On 12/09/2013 05:16 PM, Gordon Sim wrote:

On 12/09/2013 07:15 PM, Russell Bryant wrote:


[...]


One other pattern that can benefit from intermediated message flow is in
load balancing. If the processing entities are effectively 'pulling'
messages, this can more naturally balance the load according to capacity
than when the producer of the workload is trying to determine the best
balance.


Yes, that's another factor.  Today, we rely on the message broker's
behavior to equally distribute messages to a set of consumers.


Sometimes you even _want_ message distribution to be 'unequal', if the
load varies by message or the capacity by consumer. E.g. If one consumer
is particularly slow (or is given a particularly arduous task), it may
not be optimal for it to receive the same portion of subsequent messages
as other less heavily loaded or more powerful consumers.


Indeed.  We haven't tried to do that anywhere, but it would be an
improvement for some cases.


Agreed, this is something that's worth experimenting with.

[...]


I'm very interested in diving deeper into how Dispatch would fit into
the various ways OpenStack is using messaging today.  I'd like to get
a better handle on how the use of Dispatch as an intermediary would
scale out for a deployment that consists of 10s of thousands of
compute nodes, for example.

Is it roughly just that you can have a network of N Dispatch routers
that route messages from point A to point B, and for notifications we
would use a traditional message broker (qpidd or rabbitmq) ?


For scaling the basic idea is that not all connections are made to the
same process and therefore not all messages need to travel through a
single intermediary process.

So for N different routers, each has a portion of the total number of
publishers and consumers connected to it. Though clients can
communicate even if they are not connected to the same router, each
router only needs to handle the messages sent by the publishers directly
attached, or sent to the consumers directly attached. It never needs to
see messages between publishers and consumers that are not directly
attached.

To address your example, the 10s of thousands of compute nodes would be
spread across N routers. Assuming these were all interconnected, a
message from the scheduler would only travel through at most two of
these N routers (the one the scheduler was connected to and the one the
receiving compute node was connected to). No process needs to be able to
handle 10s of thousands of connections itself (as contrasted with full
direct, non-intermediated communication, where the scheduler would need
to manage connections to each of the compute nodes).
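
As a toy illustration of that claim (this is just a model of the
topology assumption above, not Dispatch code; all names are made up):

    # Fully interconnected mesh: a message crosses at most the sender's
    # router and the receiver's router.
    def routers_traversed(attachment, sender, receiver):
        first = attachment[sender]
        last = attachment[receiver]
        # Same router: one hop; otherwise exactly two, since all
        # routers are assumed directly interconnected.
        return [first] if first == last else [first, last]

    # 30,000 compute nodes spread across 4 routers; scheduler on router 0.
    attachment = dict(('compute-%d' % i, 'router-%d' % (i % 4))
                      for i in range(30000))
    attachment['scheduler'] = 'router-0'
    print(routers_traversed(attachment, 'scheduler', 'compute-4242'))
    # -> ['router-0', 'router-2']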

This basic pattern is the same as with networks of brokers, but the Dispatch
router has been designed from the start to focus simply on that problem
(and not deal with all the other broker-related features, such as
transactions, durability, specialised queueing, etc.).


Sounds awesome.  :-)


The other difference is that Dispatch Router does not accept
responsibility for messages, i.e. it does not offer any
store-and-forward behaviour. Any acknowledgement is end-to-end. This
avoids it having to replicate messages. On failure they can, if needed, be
replayed by the original sender.


I think the lack of store-and-forward is OK.

Right now, all of the Nova code is written to assume that the messaging
is unreliable and that any message could get lost.  It may result in an
operation failing, but it should fail gracefully.  Doing end-to-end
acknowledgement may actually be an improvement.


This is interesting and a very important point. I wonder what the
reliability expectations of other services w.r.t. OpenStack messaging
are.

I agree that p2p acknowledgement could be an improvement,
but I'm also wondering how this (if ever) will affect projects - in
terms of requiring changes. One of the goals of this new driver is to
not require any changes to the existing projects.

Also, a bit different but related topic: are there cases where tasks
are re-scheduled in nova? If so, what does nova do in this case? Are
those tasks sent back to `nova-scheduler` for re-scheduling?


Yes, there are certain build failures that can occur which will cause a 
re-schedule.  That's currently accomplished by the compute node sending 
a message back to the scheduler so it can pick a new host.  I'm trying 
to shift that a bit so we're messaging the conductor rather than the 
scheduler, but the basic structure of it is going to remain the same for 
now.


If you mean in progress operations being restarted after a service is 
restarted, then no.  We're working towards making that possible but at 
the moment it doesn't exist.




Cheers,
FF

--
@flaper87
Flavio Percoco





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jay Dobies
Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm 
new to the project. I only mention it again because it's relevant in 
that I missed any of the discussion on why proxying from tuskar API to 
other APIs is looked down upon. Jiri and I had been talking yesterday 
and he mentioned it to me when I started to ask these same sorts of 
questions.


On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the 
individual APIs directly put a lot of knowledge into the clients that 
had to be replicated across clients. At the best case, that's simply 
knowing where to look for data. But I suspect it's bigger than that and 
there are workflows that will be implemented for tuskar needs. If the 
tuskar API can't call out to other APIs, that workflow implementation 
needs to be done at a higher layer, which means in each client.


Something I'm going to talk about later in this e-mail but I'll mention 
here so that the diagrams sit side-by-side is the potential for a facade 
layer that hides away the multiple APIs. Lemme see if I can do this in 
ASCII:


tuskar-ui -+               +-tuskar-api
           |               |
           +-client-facade-+-nova-api
           |               |
tuskar-cli-+               +-heat-api

The facade layer runs client-side and contains the business logic that 
calls across APIs and adds in the tuskar magic. That keeps the tuskar 
API from calling into other APIs* but keeps all of the API call logic 
abstracted away from the UX pieces.


* Again, I'm not 100% up to speed with the API discussion, so I'm going 
off the assumption that we want to avoid API to API calls. If that isn't 
as strict of a design principle as I'm understanding it to be, then the 
above picture probably looks kinda silly, so keep in mind the context 
I'm going from.


For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
           |
           +-tuskar-api-+-nova-api
           |            |
tuskar-cli-+            +-heat-api

Where a tuskar client talked to the tuskar API to do tuskar things. 
Whatever was needed to do anything tuskar-y was hidden away behind the 
tuskar API.



This meant that the integration logic of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.

Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) APIs directly from the UI, similarly to what Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.


When you say python- clients, is there a distinction between the CLI and 
a bindings library that invokes the server-side APIs? In other words, 
the CLI is packaged as CLI+bindings and the UI as GUI+bindings?



We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in UI, we're effectively preventing its use from
CLI. (If i were bold i'd also think about integrating Tuskar with other
software which would be prevented too if we keep the business logic in
UI, but i'm not absolutely positive about use cases here).


I see your point about preventing its use from the CLI, but more 
disconcerting IMO is that it just doesn't belong in the UI. That sort of 
logic, the Heat things behind the scenes, sounds like the jurisdiction 
of the API (if I'm reading into what that entails correctly).



Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am i wrong that Anna the infrastructure
administrator would want that?)


To reiterate my point above, I see the idea of getting the CLI on par, 
but I also see it as striving for a cleaner design as well.



Here are some options i 

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Russell Bryant
On 12/10/2013 05:57 PM, Paul McMillan wrote:
 +1 on Tatiana Mazur, she's been doing a bunch of good work lately.
 
 I'm fine with me being removed from core provided you have someone else 
 qualified to address security issues as they come up. My contributions have 
 lately been reviewing and responding to security issues, vetting fixes for 
 those, and making sure they happen in a timely fashion. Fortunately, we 
 haven't had too many of those lately. Other than that, I've been lurking and 
 reviewing to make sure nothing egregious gets committed.
 
 If you don't have anyone else who is a web security specialist on the core 
 team, I'd like to stay. Since I'm also a member of the Django security team, 
 I offer a significant chunk of knowledge about how the underlying security 
 protections are intended to work.

Security reviews aren't done on gerrit, though.  They are handled in
launchpad bugs.  It seems you could still contribute in this way without
being on the horizon-core team responsible for reviewing normal changes
in gerrit.

The bigger point is that you don't have to be on whatever-core to
contribute productively to reviews.  I think every project has people
that make important review contributions, but aren't necessarily
reviewing regularly enough to be whatever-core.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly subteam meeting at Thursday, 12.12, 14-00 UTC

2013-12-11 Thread Eugene Nikanorov
Hi lbaas folks,

Let's meet as usual at #openstack-meeting on Thursday, 12 at 14-00 UTC.
The primary discussion points should be:
1) Third party testing
2) L7 rules
3) Loadbalancer instance
4) HA for agents and HA for HAProxy

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Third party Neutron plugin testing meeting

2013-12-11 Thread Akihiro Motoki
UTC 22:00 works for me. Really appreciated.

On Wed, Dec 11, 2013 at 12:13 PM, Yongsheng Gong
gong...@unitedstack.com wrote:
 UTC 22:00+, which is 6 am Beijing time, but if there are guys from Israel
 and the like, I can get up one hour earlier, just as I do for the neutron
 meeting.


 On Wed, Dec 11, 2013 at 11:08 AM, Kyle Mestery mest...@siliconloons.com
 wrote:

 I suspect we'll need another meeting next week, I propose we have it
 at a time friendly to those in Asian timezones. Yong and Akihiro, can
 you guys propose a timeslot which works for you guys and I'll see
 about setting the followup meeting up.

 Thanks,
 Kyle

 On Dec 10, 2013, at 8:14 PM, Yongsheng Gong gong...@unitedstack.com
 wrote:

  It is 1 am Beijing time, so I am afraid I will not join.
 
 
  On Wed, Dec 11, 2013 at 10:10 AM, Akihiro Motoki amot...@gmail.com
  wrote:
  Thanks Kyle for coordinating the meeting.
 
  The time is midnight for me, but it fits everyone except me. I'll try to make
  the time but am not sure. Anyway, I will follow the log.
 
  On Wednesday, December 11, 2013, Shiv Haris sha...@brocade.com wrote:
 
  +1
 
 
 
  Will join via IRC or voice call
 
 
 
 
 
 
 
  From: Gary Duan [mailto:gd...@varmour.com]
  Sent: Tuesday, December 10, 2013 10:59 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin
  testingmeeting
 
 
 
  I will be joining IRC too.
 
 
 
  Thanks,
 
  Gary
 
 
 
  On Tue, Dec 10, 2013 at 10:33 AM, Edgar Magana emag...@plumgrid.com
  wrote:
 
  Also joining!
  Looking forward to hearing your ideas folks!
 
  Edgar
 
 
  On 12/10/13 10:16 AM, Nachi Ueno na...@ntti3.com wrote:
 
  +1 ! I'll join.
  I'm also working on investigating how to use openstack gating system.
  (This document is still draft version)
 
   https://docs.google.com/presentation/d/1WJInaSt_H2kVkjnhtPmiATP1F-0BVbuk1eefQalL5Q0/edit#slide=id.p
  
  2013/12/10 Ivar Lazzaro i...@embrane.com:
   +1 for 1700UTC Thursday on IRC
  
   -Original Message-
   From: Kyle Mestery [mailto:mest...@siliconloons.com]
   Sent: Tuesday, December 10, 2013 9:21 AM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [neutron] Third party Neutron plugin
  testing meeting
  
   On Dec 10, 2013, at 10:45 AM, Veiga, Anthony
  anthony_ve...@cable.comcast.com wrote:
   -Original Message-
   From: Kyle Mestery mest...@siliconloons.com
   Reply-To: OpenStack Development Mailing List (not for usage
  questions)
   openstack-dev@lists.openstack.org
   Date: Tuesday, December 10, 2013 10:48
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org
   Subject: [openstack-dev] [neutron] Third party Neutron plugin
   testing
   meeting
  
   Last week I took an action item to organize a meeting for everyone
   who is doing third-party testing in Neutron for plugins, whether
   this
   is vendor or Open Source based. The idea is to share ideas around
   setups and any issues people hit. I'd like to set this meeting up
   for
   this week, Thursday at 1700UTC. I would also like to propose we
   make
   this a dial in meeting using the Infrastructure Conferencing
   bridges
   [1]. If this time works, I'll set something up and reply to this
   thread with the dial in information.
  
   +1 for the meeting time.  Any particular reason for voice over IRC?
  
   We kind of decided that doing this over voice initially would be
  expedient, but I am fine with moving to IRC. If I don't hear
   objections,
  lets assume we will meet at 1700UTC Thursday on
   #openstack-meeting-alt.
  
  
  
   Also, I've started a etherpad page [2] with information. It would
   be
   good for people to add information to this etherpad as well. I've
   coupled this pad with information around multi-node gate testing
   for
   Neutron as well, as I suspect most of the third-party testing will
   require multiple nodes as well.
  
   I'll start filling out our setup.  I have some questions around
   Third-Party Testing in particular, and look forward to this
   discussion.
  
   Awesome, thanks Anthony!
  
  
   Thanks!
   Kyle
  
   [1]
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread James Slagle
On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský ji...@redhat.com wrote:
 Hi all,

 TL;DR: I believe that As an infrastructure administrator, Anna wants a CLI
 for managing the deployment providing the same fundamental features as UI.
 With the planned architecture changes (making tuskar-api thinner and getting
 rid of proxying to other services), there's not an obvious way to achieve
 that. We need to figure this out. I present a few options and look forward
 to feedback.

 Previously, we had planned Tuskar architecture like this:

 tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

To be clear, tuskarclient is just a library right?  So both the UI and
CLI use tuskarclient, at least was that the original plan?

 This meant that the integration logic of how to use heat, ironic and other
 services to manage an OpenStack deployment lay within *tuskar-api*. This
 gave us an easy way towards having a CLI - just build tuskarclient to wrap
 abilities of tuskar-api.


 Nowadays we talk about using heat and ironic (and neutron? nova?
 ceilometer?) APIs directly from the UI, similarly to what Dashboard does.

I think we should do that wherever we can for sure.  For example, to
get the status of a deployment we can do the same API call as heat
stack-status ... does, no need to write a new Tuskar API to do that.

 But our approach cannot be exactly the same as in Dashboard's case.
 Dashboard is quite a thin wrapper on top of python-...clients, which means
 there's a natural parity between what the Dashboard and the CLIs can do.

 We're not wrapping the APIs directly (if wrapping them directly would be
 sufficient, we could just use Dashboard and not build Tuskar API at all).
 We're building a separate UI because we need *additional logic* on top of
 the APIs. E.g. instead of directly working with Heat templates and Heat
 stacks to deploy overcloud, user will get to pick how many
 control/compute/etc. nodes he wants to have, and we'll take care of Heat
 things behind the scenes. This makes Tuskar UI significantly thicker than
 Dashboard is, and the natural parity between CLI and UI vanishes. By having
 this logic in UI, we're effectively preventing its use from CLI. (If i were
 bold i'd also think about integrating Tuskar with other software which would
 be prevented too if we keep the business logic in UI, but i'm not absolutely
 positive about use cases here).

I don't think we want the business logic in the UI.

 Now this raises a question - how do we get CLI reasonably on par with
 abilities of the UI? (Or am i wrong that Anna the infrastructure
 administrator would want that?)

IMO, we want an equivalent CLI and UI.  A big reason is so that it can
be sanely scripted/automated.


 Here are some options i see:

 1) Make a thicker python-tuskarclient and put the business logic there. Make
 it consume other python-*clients. (This is an unusual approach though, i'm
 not aware of any python-*client that would consume and integrate other
 python-*clients.)

python-openstackclient consumes other clients :).  Ok, that's probably
not a great example :).

This approach makes the most sense to me.  python-tuskarclient would
make the decisions about if it can call the heat api directly, or the
tuskar api, or some other api.  The UI and CLI would then both use
python-tuskarclient.

 2) Make a thicker tuskar-api and put the business logic there. (This is the
 original approach with consuming other services from tuskar-api. The
 feedback on this approach was mostly negative though.)

So, typically, I would say this is the right approach.  However given
 what you pointed out above that sometimes we can use other APIs
 directly, we then have a separation where sometimes you have to use
tuskar-api and sometimes you'd use heat/etc api.  By using
python-tuskarclient, you're really just pushing that abstraction into
a library instead of an API, and I think that makes some sense.

 3) Keep tuskar-api and python-tuskarclient thin, make another library
 sitting between Tuskar UI and all python-***clients. This new project would
 contain the logic of using undercloud services to provide the tuskar
 experience it would expose python bindings for Tuskar UI and contain a CLI.
 (Think of it like traditional python-*client but instead of consuming a REST
 API, it would consume other python-*clients. I wonder if this is
 overengineering. We might end up with too many projects doing too few
 things? :) )

I don't follow how this new library would be different from
python-tuskarclient.  Unless I'm just misinterpreting what
python-tuskarclient is meant to be, which may very well be true :).

 4) Keep python-tuskarclient thin, but build a separate CLI app that would
 provide the same integration features as Tuskar UI does. (This would lead to
 code duplication. Whether this is bearable depends on the actual amount of
 logic to duplicate.)

-1



 Which of the options you see as best? Did i miss some better option? Am i
 just being crazy and 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Tzu-Mainn Chen
Thanks for writing this all out!

- Original Message -
 Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
 new to the project. I only mention it again because it's relevant in
 that I missed any of the discussion on why proxying from tuskar API to
 other APIs is looked down upon. Jiri and I had been talking yesterday
 and he mentioned it to me when I started to ask these same sorts of
 questions.
 
 On 12/11/2013 07:33 AM, Jiří Stránský wrote:
  Hi all,
 
  TL;DR: I believe that As an infrastructure administrator, Anna wants a
  CLI for managing the deployment providing the same fundamental features
  as UI. With the planned architecture changes (making tuskar-api thinner
  and getting rid of proxying to other services), there's not an obvious
  way to achieve that. We need to figure this out. I present a few options
  and look forward for feedback.
 
  Previously, we had planned Tuskar architecture like this:
 
  tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.
 
 My biggest concern was that having each client call out to the
 individual APIs directly put a lot of knowledge into the clients that
 had to be replicated across clients. At the best case, that's simply
 knowing where to look for data. But I suspect it's bigger than that and
 there are workflows that will be implemented for tuskar needs. If the
 tuskar API can't call out to other APIs, that workflow implementation
 needs to be done at a higher layer, which means in each client.
 
 Something I'm going to talk about later in this e-mail but I'll mention
 here so that the diagrams sit side-by-side is the potential for a facade
 layer that hides away the multiple APIs. Lemme see if I can do this in
 ASCII:
 
  tuskar-ui -+               +-tuskar-api
             |               |
             +-client-facade-+-nova-api
             |               |
  tuskar-cli-+               +-heat-api
 
 The facade layer runs client-side and contains the business logic that
 calls across APIs and adds in the tuskar magic. That keeps the tuskar
 API from calling into other APIs* but keeps all of the API call logic
 abstracted away from the UX pieces.
 
 * Again, I'm not 100% up to speed with the API discussion, so I'm going
 off the assumption that we want to avoid API to API calls. If that isn't
 as strict of a design principle as I'm understanding it to be, then the
 above picture probably looks kinda silly, so keep in mind the context
 I'm going from.
 
 For completeness, my gut reaction was expecting to see something like:
 
  tuskar-ui -+
             |
             +-tuskar-api-+-nova-api
             |            |
  tuskar-cli-+            +-heat-api
 
 Where a tuskar client talked to the tuskar API to do tuskar things.
 Whatever was needed to do anything tuskar-y was hidden away behind the
 tuskar API.
 
  This meant that the integration logic of how to use heat, ironic and
  other services to manage an OpenStack deployment lay within
  *tuskar-api*. This gave us an easy way towards having a CLI - just build
  tuskarclient to wrap abilities of tuskar-api.
 
  Nowadays we talk about using heat and ironic (and neutron? nova?
  ceilometer?) APIs directly from the UI, similarly to what Dashboard does.
  But our approach cannot be exactly the same as in Dashboard's case.
  Dashboard is quite a thin wrapper on top of python-...clients, which
  means there's a natural parity between what the Dashboard and the CLIs
  can do.

 When you say python- clients, is there a distinction between the CLI and
 a bindings library that invokes the server-side APIs? In other words,
 the CLI is packaged as CLI+bindings and the UI as GUI+bindings?
 
  We're not wrapping the APIs directly (if wrapping them directly would be
  sufficient, we could just use Dashboard and not build Tuskar API at
  all). We're building a separate UI because we need *additional logic* on
  top of the APIs. E.g. instead of directly working with Heat templates
  and Heat stacks to deploy overcloud, user will get to pick how many
  control/compute/etc. nodes he wants to have, and we'll take care of Heat
  things behind the scenes. This makes Tuskar UI significantly thicker
  than Dashboard is, and the natural parity between CLI and UI vanishes.
  By having this logic in UI, we're effectively preventing its use from
  CLI. (If i were bold i'd also think about integrating Tuskar with other
  software which would be prevented too if we keep the business logic in
  UI, but i'm not absolutely positive about use cases here).
 
 I see your point about preventing its use from the CLI, but more
 disconcerting IMO is that it just doesn't belong in the UI. That sort of
 logic, the Heat things behind the scenes, sounds like the jurisdiction
 of the API (if I'm reading into what that entails correctly).
 
  Now this raises a question - how do we get CLI reasonably on par with
  abilities of the UI? (Or am i wrong that Anna the infrastructure
  administrator would want that?)
 
 

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-11 Thread Tzu-Mainn Chen
 On 2013/10/12 19:39, Tzu-Mainn Chen wrote:
 
  Ideally, we don't. But with this approach we would take away the user's
  ability to change or decide things.
 
  The 'easiest' way is to support bigger companies with huge deployments,
  tailored infrastructure, everything connected properly.
 
  But there are tons of companies/users who are running on old,
  heterogeneous hardware - very likely even more than the number of
  companies with the already-mentioned large deployments. And if we give them
  only the option of 'setting up rules' in order to get the service onto a
  node, this type of user is not going to use our deployment system.
 
  Somebody might argue - why do we care? If the user doesn't like the TripleO
  paradigm, he shouldn't use the UI and should use another tool. But the
  UI is not only about TripleO. Yes, it is the underlying concept, but we are
  working on the future *official* OpenStack deployment tool. We should care
  about enabling people to deploy OpenStack - large/small scale,
  homo/heterogeneous hardware, typical or a bit more specific use-cases.
 
  I think this is a very important clarification, and I'm glad you made it.
  It sounds
  like manual assignment is actually a sub-requirement, and the feature
  you're arguing
  for is: supporting non-TripleO deployments.

 Mostly but not only. The other argument is - keeping control on stuff I
 am doing. Note that undercloud user is different from overcloud user.

Sure, but again, that argument seems to me to be a non-TripleO approach.  I'm
not saying that it's not a possible use case, I'm saying that you're advocating
for a deployment strategy that fundamentally diverges from the TripleO
philosophy - and as such, that strategy will likely require a separate UI, 
underlying
architecture, etc, and should not be planned for in the Icehouse timeframe.

  That might be a worthy goal, but I think it's a distraction for the
  Icehouse timeframe.
  Each new deployment strategy requires not only a new UI, but different
  deployment
  architectures that could have very little common with each other.
  Designing them all
  to work in the same space is a recipe for disaster, a convoluted gnarl of
  code that
  doesn't do any one thing particularly well.  To use an analogy: there's a
  reason why
  no one makes a flying boat car.
 
  I'm going to strongly advocate that for Icehouse, we focus exclusively on
  large scale
  TripleO deployments, working to make that UI and architecture as sturdy as
  we can.  Future
  deployment strategies should be discussed in the future, and if they're not
  TripleO based,
  they should be discussed with the proper OpenStack group.
 One concern here is - it is quite likely that we get people excited
 about this approach - it will be a new boom - 'wow, there is automagic
 doing everything for me'. But then the question would be reality - how
 many of those excited users will actually use TripleO for their real
 deployments (I mean in the early stages)? Would it be only a couple of
 them (because of the covered use cases, concerns about maturity, or lack of
 control)? Can we assure them that if anything goes wrong, they
 have control over it?
 -- Jarda
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread James Slagle
On Wed, Dec 11, 2013 at 10:35 AM, James Slagle james.sla...@gmail.com wrote:
 On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský ji...@redhat.com wrote:
 1) Make a thicker python-tuskarclient and put the business logic there. Make
 it consume other python-*clients. (This is an unusual approach though, i'm
 not aware of any python-*client that would consume and integrate other
 python-*clients.)

 python-openstackclient consumes other clients :).  Ok, that's probably
 not a great example :).

 This approach makes the most sense to me.  python-tuskarclient would
 make the decisions about if it can call the heat api directly, or the
 tuskar api, or some other api.  The UI and CLI would then both use
 python-tuskarclient.

Another example:

Each python-*client also uses keystoneclient to do auth and get
endpoints.  So, it's not like each client has reimplemented the code
to make HTTP requests to keystone, they reuse the keystone Client
class object.
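
For example (a sketch against python-keystoneclient's v2.0 API;
credentials and URLs here are placeholders):

    # How a python-*client typically reuses keystoneclient for auth
    # and endpoint lookup instead of re-implementing it.
    from keystoneclient.v2_0 import client as ksclient

    keystone = ksclient.Client(username='demo',
                               password='secret',
                               tenant_name='demo',
                               auth_url='http://keystone:5000/v2.0')

    # Reuse the token and service catalog rather than talking to the
    # keystone HTTP API directly.
    token = keystone.auth_token
    heat_endpoint = keystone.service_catalog.url_for(
        service_type='orchestration', endpoint_type='publicURL')
    # token and heat_endpoint can now be handed to e.g. heatclient.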

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Dean Troyer
On Wed, Dec 11, 2013 at 9:35 AM, James Slagle james.sla...@gmail.comwrote:

 On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský ji...@redhat.com wrote:
  Previously, we had planned Tuskar architecture like this:
 
  tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

 To be clear, tuskarclient is just a library right?  So both the UI and
 CLI use tuskarclient, at least was that the original plan?


I would expect tuskarclient above to be the Python API bindings without the
business logic.


 I don't think we want the business logic in the UI.


+1


  Now this raises a question - how do we get CLI reasonably on par with
  abilities of the UI? (Or am i wrong that Anna the infrastructure
  administrator would want that?)

 IMO, we want an equivalent CLI and UI.  A big reason is so that it can
 be sanely scripted/automated.


At a minimum you need to be sure that all of the atomic operations in your
business logic are exposed via _some_ API.  I.e., to script something, the
script may be where the business logic exists.

Building on that is moving that logic into a library that calls multiple
Python client APIs.  This may or may not be part of tuskarclient.

The next step up is to put your business logic into what we used to call
middleware, the layer between client and backend.  This is server-side and
IMHO where it belongs.  This is really the ONLY way you can ensure that
various clients get the same experience.


 python-openstackclient consumes other clients :).  Ok, that's probably
 not a great example :).


:) No, not really.  But it is also developing some 'value-added' functions
that are cross-project APIs and has a similar problem.  So far that is just
smoke and mirrors hiding the duct tape behind the scenes, but it is not
unlike some of the things that Horizon does for user convenience.


 This approach makes the most sense to me.  python-tuskarclient would
 make the decisions about if it can call the heat api directly, or the
 tuskar api, or some other api.  The UI and CLI would then both use
 python-tuskarclient.


If you do this keep the low-level API bindings separate from the
higher-level logical API.
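
For instance, a hypothetical layout (module names made up):

    # tuskarclient/
    #     v1/            low-level REST bindings for tuskar-api
    #     overcloud.py   higher-level logical API calling other clients
    #     shell.py       CLI entry points built on the logical layer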


  2) Make a thicker tuskar-api and put the business logic there. (This is
 the
  original approach with consuming other services from tuskar-api. The
  feedback on this approach was mostly negative though.)

 So, typically, I would say this is the right approach.  However given
 what you pointed out above that sometimes we can use other API's
 directly, we then have a seperation where sometimes you have to use
 tuskar-api and sometimes you'd use heat/etc api.  By using
 python-tuskarclient, you're really just pushing that abstraction into
 a library instead of an API, and I think that makes some sense.


Consider that pushing that out to the client requires that the client be in sync
with what is deployed.  You'll have to make sure your client logic can
handle the multiple versions of server APIs that it will encounter.
Putting that server-side lets you stay in sync with the other OpenStack
APIs you need to use.


  3) Keep tuskar-api and python-tuskarclient thin, make another library
  sitting between Tuskar UI and all python-***clients. This new project
 would
  contain the logic of using undercloud services to provide the tuskar
  experience it would expose python bindings for Tuskar UI and contain a
 CLI.
  (Think of it like traditional python-*client but instead of consuming a
 REST
  API, it would consume other python-*clients. I wonder if this is
  overengineering. We might end up with too many projects doing too few
  things? :) )

 I don't follow how this new library would be different from
 python-tuskarclient.  Unless I'm just misinterpreting what
 python-tuskarclient is meant to be, which may very well be true :).


This is essentially what I suggested above.  It need not be a separate repo
or installable package, but the internal API should have its own
namespace/modules/whatever.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] domain admin role query

2013-12-11 Thread Dolph Mathews
On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox jamielen...@redhat.comwrote:

  Using the default policies it will simply check for the admin role and not
  care about the domain that the admin is limited to. This is partially a
  leftover from the V2 API, when there weren't domains to worry about.

 A better example of policies are in the file
 etc/policy.v3cloudsample.json. In there you will see the rule for
 create_project is:

 identity:create_project: rule:admin_required and
 domain_id:%(project.domain_id)s,

 as opposed to (in policy.json):

 identity:create_project: rule:admin_required,

 This is what you are looking for to scope the admin role to a domain.


We need to start moving the rules from policy.v3cloudsample.json to the
default policy.json =)



 Jamie

 - Original Message -
  From: Ravi Chunduru ravi...@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, 11 December, 2013 11:23:15 AM
  Subject: [openstack-dev] [keystone] domain admin role query
 
  Hi,
  I am trying out Keystone V3 APIs and domains.
  I created a domain, created a project in that domain, and created a user in
  that domain and project.
  Next, gave an admin role for that user in that domain.
 
  I am assuming that user is now admin to that domain.
  Now, I got a scoped token with that user, domain and project. With that
  token, I tried to create a new project in that domain. It worked.
 
  But, using the same token, I could also create a new project in a
 'default'
  domain too. I expected it should throw authentication error. Is it a bug?
 
  Thanks,
  --
  Ravi
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Ladislav Smola

Hi,

thanks for starting this conversation.
I will take this a little sideways. I think we should be asking why we have 
needed tuskar-api. It has done some more complex logic (e.g. 
building a Heat template) or stored additional info not supported by 
the services we use (like rack associations).

That is a perfectly fine use-case of introducing tuskar-api.

Although now, when everything is shifting to the services themselves, we 
don't need tuskar-api for that kind of stuff. Can you please list what 
complex operations are left that should be done in tuskar? I think 
discussing concrete stuff would be best.


There can be a CLI or API deployment story using OpenStack services, not 
necessarily calling only the tuskar CLI and API as proxies.

E.g. in the documentation you will have:

now create the stack by: heat stack-create params

That's much better than:

You can create a stack by tuskar-deploy params, which actually calls heat 
stack-create params


What is wrong with calling the original services? Why do we want to 
hide them?



Also, as I have been talking with rdopieralsky, there have been some 
problems in the past with tuskar doing more steps in one. Like creating a 
rack and registering new nodes at the same time. As those have been 
separate API calls and there is no transaction handling, we should not 
do this kind of thing in the first place. If we have actions that 
depend on each other, they should go from the UI one by one. Otherwise we 
will be showing messages like: The rack has not been created, but 5 
of 8 nodes have been added. We have tried to delete those added nodes, 
but 2 of the 5 deletions have failed. Please figure this out, then you 
can run this awesome action that calls multiple dependent APIs without 
real rollback again. (Or something like that, depending on what gets 
created first.)


I am not saying we should not have tuskar-api. Just put there the things 
that belong there; don't proxy everything.


btw. the real path of the diagram is

tuskar-ui -> tuskarclient -> tuskar-api -> heatclient -> heat-api|ironic|etc.


My conclusion
--

I say if it can be tuskar-ui -> heatclient -> heat-api, let's keep it 
that way.


If we realize we are putting some business logic into the UI that needs to be 
done also in the CLI, or we need to store some additional data that doesn't 
belong anywhere else, let's put it in Tuskar-API.


Kind Regards,
Ladislav



On 12/11/2013 03:32 PM, Jay Dobies wrote:
Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm 
new to the project. I only mention it again because it's relevant in 
that I missed any of the discussion on why proxying from tuskar API to 
other APIs is looked down upon. Jiri and I had been talking yesterday 
and he mentioned it to me when I started to ask these same sorts of 
questions.


On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the 
individual APIs directly put a lot of knowledge into the clients that 
had to be replicated across clients. At the best case, that's simply 
knowing where to look for data. But I suspect it's bigger than that 
and there are workflows that will be implemented for tuskar needs. If 
the tuskar API can't call out to other APIs, that workflow 
implementation needs to be done at a higher layer, which means in each 
client.


Something I'm going to talk about later in this e-mail but I'll 
mention here so that the diagrams sit side-by-side is the potential 
for a facade layer that hides away the multiple APIs. Lemme see if I 
can do this in ASCII:


tuskar-ui -+               +-tuskar-api
           |               |
           +-client-facade-+-nova-api
           |               |
tuskar-cli-+               +-heat-api

The facade layer runs client-side and contains the business logic that 
calls across APIs and adds in the tuskar magic. That keeps the tuskar 
API from calling into other APIs* but keeps all of the API call logic 
abstracted away from the UX pieces.


* Again, I'm not 100% up to speed with the API discussion, so I'm 
going off the assumption that we want to avoid API to API calls. If 
that isn't as strict of a design principle as I'm understanding it to 
be, then the above picture probably looks kinda silly, so keep in mind 
the context I'm going from.


For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
           |
           +-tuskar-api-+-nova-api
           |            |
tuskar-cli-+            +-heat-api

Re: [openstack-dev] [keystone] domain admin role query

2013-12-11 Thread Lyle, David
+1 on moving the domain admin role rules to the default policy.json

-David Lyle

From: Dolph Mathews [mailto:dolph.math...@gmail.com] 
Sent: Wednesday, December 11, 2013 9:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] domain admin role query


On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox jamielen...@redhat.com wrote:
Using the default policies it will simply check for the admin role and not care 
about the domain that the admin is limited to. This is partially a leftover from 
the V2 API, when there weren't domains to worry about.

A better example of policies are in the file etc/policy.v3cloudsample.json. In 
there you will see the rule for create_project is:

    identity:create_project: rule:admin_required and 
domain_id:%(project.domain_id)s,

as opposed to (in policy.json):

    identity:create_project: rule:admin_required,

This is what you are looking for to scope the admin role to a domain.

We need to start moving the rules from policy.v3cloudsample.json to the default 
policy.json =)
 

Jamie

- Original Message -
 From: Ravi Chunduru ravi...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Wednesday, 11 December, 2013 11:23:15 AM
 Subject: [openstack-dev] [keystone] domain admin role query

 Hi,
 I am trying out Keystone V3 APIs and domains.
 I created a domain, created a project in that domain, and created a user in
 that domain and project.
 Next, gave an admin role for that user in that domain.

 I am assuming that user is now admin to that domain.
 Now, I got a scoped token with that user, domain and project. With that
 token, I tried to create a new project in that domain. It worked.

 But, using the same token, I could also create a new project in a 'default'
 domain too. I expected it should throw authentication error. Is it a bug?

 Thanks,
 --
 Ravi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jay Dobies
 I will take this a little sideways. I think we should be asking why we 
 have needed tuskar-api. It has done some more complex logic (e.g. 
 building a Heat template) or stored additional info not supported 
 by the services we use (like rack associations).

 That is a perfectly fine use-case of introducing tuskar-api.

 Although now, when everything is shifting to the services themselves, 
 we don't need tuskar-api for that kind of stuff. Can you please list 
 what complex operations are left that should be done in tuskar? I 
 think discussing concrete stuff would be best.


This is a good call to circle back on; I'm not sure of it either. 
The wireframes I've seen so far largely revolve around node listing and 
allocation, but I 100% know I'm oversimplifying and missing something 
bigger there.



Also, as I have been talking with rdopieralsky, there have been some
problems in the past with tuskar doing more steps in one. Like creating a
rack and registering new nodes at the same time. As those have been
separate API calls and there is no transaction handling, we should not
do this kind of thing in the first place. If we have actions that
depend on each other, they should go from the UI one by one. Otherwise we
will be showing messages like: The rack has not been created, but 5
of 8 nodes have been added. We have tried to delete those added nodes,
but 2 of the 5 deletions have failed. Please figure this out, then you
can run this awesome action that calls multiple dependent APIs without
real rollback again. (Or something like that, depending on what gets
created first.)


This is what I expected to see as the primary argument against it, the 
lack of a good transactional model for calling the dependent APIs. And 
it's certainly valid.


But what you're describing is the exact same problem regardless if you 
go from the UI or from the Tuskar API. If we're going to do any sort of 
higher level automation of things for the user that spans APIs, we're 
going to run into it. The question is if the client(s) handle it or the 
API. The alternative is to not have the awesome action in the first 
place, in which case we're not really giving the user as much value as 
an application.



I am not saying we should not have tuskar-api. Just put there the things
that belong there; don't proxy everything.



btw. the real path of the diagram is

tuskar-ui -> tuskarclient -> tuskar-api -> heatclient -> heat-api|ironic|etc.

My conclusion
--

I say if it can be tuskar-ui -> heatclient -> heat-api, let's keep it
that way.


I'm still fuzzy on what OpenStack means when it says *client. Is that 
just a bindings library that invokes a remote API or does it also 
contain the CLI bits?



If we realize we are putting some business logic into the UI that needs to be
done also in the CLI, or we need to store some additional data that doesn't
belong anywhere else, let's put it in Tuskar-API.

Kind Regards,
Ladislav


Thanks for the feedback  :)




On 12/11/2013 03:32 PM, Jay Dobies wrote:

Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
new to the project. I only mention it again because it's relevant in
that I missed any of the discussion on why proxying from tuskar API to
other APIs is looked down upon. Jiri and I had been talking yesterday
and he mentioned it to me when I started to ask these same sorts of
questions.

On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI. With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the
individual APIs directly put a lot of knowledge into the clients that
had to be replicated across clients. At the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that
and there are workflows that will be implemented for tuskar needs. If
the tuskar API can't call out to other APIs, that workflow
implementation needs to be done at a higher layer, which means in each
client.

Something I'm going to talk about later in this e-mail but I'll
mention here so that the diagrams sit side-by-side is the potential
for a facade layer that hides away the multiple APIs. Lemme see if I
can do this in ASCII:

tuskar-ui -+               +-tuskar-api
           |               |
           +-client-facade-+-nova-api
           |               |
tuskar-cli-+               +-heat-api

The facade layer runs client-side and contains the business logic that
calls across APIs and adds in the tuskar magic. That keeps the tuskar
API from calling into 

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Ladislav Smola

On 12/11/2013 04:35 PM, James Slagle wrote:

On Wed, Dec 11, 2013 at 7:33 AM, Jiří Stránský ji...@redhat.com wrote:

Hi all,

TL;DR: I believe that As an infrastructure administrator, Anna wants a CLI
for managing the deployment providing the same fundamental features as UI.
With the planned architecture changes (making tuskar-api thinner and getting
rid of proxying to other services), there's not an obvious way to achieve
that. We need to figure this out. I present a few options and look forward
to feedback.

Previously, we had planned Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.

To be clear, tuskarclient is just a library right?  So both the UI and
CLI use tuskarclient, at least was that the original plan?


This meant that the integration logic of how to use heat, ironic and other
services to manage an OpenStack deployment lay within *tuskar-api*. This
gave us an easy way towards having a CLI - just build tuskarclient to wrap
abilities of tuskar-api.


Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) APIs directly from the UI, similarly to what Dashboard does.

I think we should do that wherever we can for sure.  For example, to
get the status of a deployment we can do the same API call as heat
stack-status ... does, no need to write a new Tuskar API to do that.


But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which means
there's a natural parity between what the Dashboard and the CLIs can do.

We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at all).
We're building a separate UI because we need *additional logic* on top of
the APIs. E.g. instead of directly working with Heat templates and Heat
stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker than
Dashboard is, and the natural parity between CLI and UI vanishes. By having
this logic in the UI, we're effectively preventing its use from the CLI. (If
I were bold I'd also think about integrating Tuskar with other software,
which would be prevented too if we keep the business logic in the UI, but
I'm not absolutely positive about use cases here).

I don't think we want the business logic in the UI.


Can you specify what kind of business logic?

We do validations in the UI before we send things to the API (on both the 
server and the client).
We occasionally do some joins. E.g. the list of nodes is a join of nova 
baremetal-list and nova list.


That is considered to be business logic. Though if it is only for UI 
purposes, it should stay in the UI.


Other than this, it's just API calls.




Now this raises a question - how do we get CLI reasonably on par with
abilities of the UI? (Or am I wrong that Anna the infrastructure
administrator would want that?)

IMO, we want an equivalent CLI and UI.  A big reason is so that it can
be sanely scripted/automated.


Sure, we have that. It's just API calls. Though e.g. when you want a massive 
instance delete, you will write a script for that with the CLI. In the UI 
you will filter the list and use checkboxes.

So the equivalence is in API calls, not in the complex operations.


Here are some options i see:

1) Make a thicker python-tuskarclient and put the business logic there. Make
it consume other python-*clients. (This is an unusual approach though, I'm
not aware of any python-*client that would consume and integrate other
python-*clients.)

python-openstackclient consumes other clients :).  Ok, that's probably
not a great example :).

This approach makes the most sense to me.  python-tuskarclient would
make the decisions about if it can call the heat api directly, or the
tuskar api, or some other api.  The UI and CLI would then both use
python-tuskarclient.


Guys, I am not sure about this. I thought python-xxxclient should follow 
the Remote Proxy pattern, being an object wrapper for the service API calls.


Even if you do this, it should call e.g. python-heatclient rather than the 
API directly. Though I haven't seen this done before in OpenStack.




2) Make a thicker tuskar-api and put the business logic there. (This is the
original approach with consuming other services from tuskar-api. The
feedback on this approach was mostly negative though.)

So, typically, I would say this is the right approach.  However given
what you pointed out above that sometimes we can use other API's
directly, we then have a separation where sometimes you have to use
tuskar-api and sometimes you'd use heat/etc api.  By using
python-tuskarclient, you're really just pushing that abstraction into
a library instead of an API, and I think that makes some sense.


Shouldn't general libs live in Oslo, rather than in a client?


3) Keep tuskar-api and python-tuskarclient thin, make another library

Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Jiří Stránský

On 11.12.2013 16:43, Tzu-Mainn Chen wrote:

Thanks for writing this all out!

- Original Message -

Disclaimer: I swear I'll stop posting this sort of thing soon, but I'm
new to the project. I only mention it again because it's relevant in
that I missed any of the discussion on why proxying from tuskar API to
other APIs is looked down upon. Jiri and I had been talking yesterday
and he mentioned it to me when I started to ask these same sorts of
questions.

On 12/11/2013 07:33 AM, Jiří Stránský wrote:

Hi all,

TL;DR: I believe that "As an infrastructure administrator, Anna wants a
CLI for managing the deployment providing the same fundamental features
as UI." With the planned architecture changes (making tuskar-api thinner
and getting rid of proxying to other services), there's not an obvious
way to achieve that. We need to figure this out. I present a few options
and look forward to feedback.

Previously, we had planned the Tuskar architecture like this:

tuskar-ui -> tuskarclient -> tuskar-api -> heat-api|ironic-api|etc.


My biggest concern was that having each client call out to the
individual APIs directly put a lot of knowledge into the clients that
had to be replicated across clients. At the best case, that's simply
knowing where to look for data. But I suspect it's bigger than that and
there are workflows that will be implemented for tuskar needs. If the
tuskar API can't call out to other APIs, that workflow implementation
needs to be done at a higher layer, which means in each client.

Something I'm going to talk about later in this e-mail but I'll mention
here so that the diagrams sit side-by-side is the potential for a facade
layer that hides away the multiple APIs. Lemme see if I can do this in
ASCII:

tuskar-ui -+                   +- tuskar-api
           |                   |
           +-- client-facade --+- nova-api
           |                   |
tuskar-cli-+                   +- heat-api

The facade layer runs client-side and contains the business logic that
calls across APIs and adds in the tuskar magic. That keeps the tuskar
API from calling into other APIs* but keeps all of the API call logic
abstracted away from the UX pieces.

* Again, I'm not 100% up to speed with the API discussion, so I'm going
off the assumption that we want to avoid API to API calls. If that isn't
as strict of a design principle as I'm understanding it to be, then the
above picture probably looks kinda silly, so keep in mind the context
I'm going from.
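
To make the facade idea concrete, here is a minimal client-side sketch
(purely illustrative -- the class, method names and workflow are
assumptions, not an actual Tuskar API, and client construction is elided):

class TuskarFacade(object):
    """Client-side business logic that fans out to several APIs."""

    def __init__(self, tuskar, nova, heat):
        # Each argument is an already-constructed python-*client
        # binding (python-tuskarclient, python-novaclient,
        # python-heatclient).
        self.tuskar = tuskar
        self.nova = nova
        self.heat = heat

    def deploy_overcloud(self, role_counts):
        # Hypothetical workflow: turn "N compute / M control" into a
        # Heat stack, keeping the orchestration out of the UI and CLI.
        template = self.tuskar.build_template(role_counts)
        return self.heat.stacks.create(stack_name='overcloud',
                                       template=template)

Both tuskar-ui and tuskar-cli would then call into TuskarFacade rather
than re-implementing the workflow twice.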

For completeness, my gut reaction was expecting to see something like:

tuskar-ui -+
           |
           +- tuskar-api -+- nova-api
           |              |
tuskar-cli-+              +- heat-api

Where a tuskar client talked to the tuskar API to do tuskar things.
Whatever was needed to do anything tuskar-y was hidden away behind the
tuskar API.


This meant that the integration logic of how to use heat, ironic and
other services to manage an OpenStack deployment lay within
*tuskar-api*. This gave us an easy way towards having a CLI - just build
tuskarclient to wrap abilities of tuskar-api.

Nowadays we talk about using heat and ironic (and neutron? nova?
ceilometer?) apis directly from the UI, similarly as Dashboard does.
But our approach cannot be exactly the same as in Dashboard's case.
Dashboard is quite a thin wrapper on top of python-...clients, which
means there's a natural parity between what the Dashboard and the CLIs
can do.


When you say python-*clients, is there a distinction between the CLI and
a bindings library that invokes the server-side APIs? In other words,
the CLI is packaged as CLI+bindings and the UI as GUI+bindings?


python-tuskarclient = Python bindings to tuskar-api + CLI, in one project

tuskar-ui doesn't have its own bindings; it depends on 
python-tuskarclient for bindings to tuskar-api (and on other clients for 
bindings to other APIs). The UI makes use of just the Python bindings part 
of the clients and doesn't interact with the CLI part. This is the general 
OpenStack way of doing things.





We're not wrapping the APIs directly (if wrapping them directly would be
sufficient, we could just use Dashboard and not build Tuskar API at
all). We're building a separate UI because we need *additional logic* on
top of the APIs. E.g. instead of directly working with Heat templates
and Heat stacks to deploy overcloud, user will get to pick how many
control/compute/etc. nodes he wants to have, and we'll take care of Heat
things behind the scenes. This makes Tuskar UI significantly thicker
than Dashboard is, and the natural parity between CLI and UI vanishes.
By having this logic in the UI, we're effectively preventing its use from
the CLI. (If I were bold I'd also think about integrating Tuskar with other
software, which would be prevented too if we keep the business logic in
the UI, but I'm not absolutely positive about use cases here).


I see your point about preventing its use from the CLI, but more
disconcerting IMO is that it just doesn't belong in the UI. That sort 

[openstack-dev] [trove] configuration groups and datastores type/versions

2013-12-11 Thread Craig Vyvial
Configuration Groups as currently developed associate the datastore
version with a configuration when it is created. If a datastore version is
not provided, the default is used, similar to the way instances are created
now. This amounts to associating the configuration with a datastore,
because an instance has this same association.

Depending on how you set up your datastore types and versions, this might
not be ideal.
Example:
Datastore Type | Version
-
Mysql  | 5.1
Mysql  | 5.5
Percona| 5.5
-

Configuration  | datastore_version
---
mysql-5.5-config   | mysql 5.5
percona-5.5-config | percona 5.5
---

or

Datastore Type | Version
-
Mysql 5.1  | 5.1.12
Mysql 5.1  | 5.1.13
Mysql  | 5.5.32
Percona| 5.5.44
-

Configuration  | datastore_version
---
mysql-5.1-config   | mysql 5.1.12
percona-5.5-config | percona 5.5.44
---


Notice that if you associate the configuration with a datastore version
then in the latter example you will not be able to use the same
configurations that you created with different minor versions of the
datastore.

Something that we should consider is allowing a configuration to be
associated with just a datastore type (e.g. Mysql 5.1) so that any
version of 5.1 would allow the same configuration to be applied.

I do not view this as a change that needs to happen before the current code
is merged but more as an additive feature of configurations.


*snippet from Morris and I talking about this*

Given the nature of how the datastore / types code has been implemented, in
that it is highly configurable, I believe that we need to adjust the way
in which we are associating configuration groups with datastore types and
versions.  The main use case that I am considering here is that as a user
of the API, I want to be able to associate configurations with a specific
datastore type so that I can easily return a list of the configurations
that are valid for that database type (Example: Get me a list of
configurations for MySQL 5.6).   We know that configurations will vary
across types (MySQL vs. Redis) as well as across major versions (MySQL 5.1
vs MySQL 5.6).   Presently, the code only keys off the datastore version,
and consequently, if I were to set up my datastore type as MySQL X.X and
datastore versions as X.X.X, then you would be potentially associating a
configuration with a specific minor version such as MySQL 5.1.63.Given
then, I am thinking that it makes more sense to allow a configuration to be
associated with both a datastore type AND a datastore version, with
precedence given to the datastore type (where both attributes are optional,
but at least one is required).  This would give the most
flexibility to associate configurations with either the type, version, or
both and would allow it to work across providers given that they are likely
to configure types/versions differently.
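
To illustrate the precedence idea, here is a small sketch (the class and
matching rules are assumptions for illustration, not Trove's actual model):

# Illustrative only: a configuration may carry a datastore type, a
# version, or both; a lookup matches whichever attributes are set.
class Configuration(object):
    def __init__(self, name, datastore_type=None, datastore_version=None):
        assert datastore_type or datastore_version, \
            "at least one association is required"
        self.name = name
        self.datastore_type = datastore_type
        self.datastore_version = datastore_version

    def applies_to(self, ds_type, ds_version):
        # The type must match when set; the version must match when set.
        if self.datastore_type and self.datastore_type != ds_type:
            return False
        if self.datastore_version and self.datastore_version != ds_version:
            return False
        return True

# A config bound only to the type "Mysql 5.1" applies to any 5.1.x
# version, while one bound to a version stays minor-version-specific:
cfg = Configuration('mysql-5.1-config', datastore_type='Mysql 5.1')
assert cfg.applies_to('Mysql 5.1', '5.1.12')
assert cfg.applies_to('Mysql 5.1', '5.1.13')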


I'd like to hear how the community views this and bring up the conversation
now rather than later.


Thanks,

-Craig Vyvial
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Tuskar CLI after architecture changes

2013-12-11 Thread Dean Troyer
On Wed, Dec 11, 2013 at 10:41 AM, Jay Dobies jason.dob...@redhat.comwrote:

 I'm still fuzzy on what OpenStack means when it says *client. Is that just
 a bindings library that invokes a remote API or does it also contain the
 CLI bits?


For the older python-*client projects, they are both Python API bindings and
a thin CLI on top of them.  Some of the newer clients may not include a
CLI.  By default I think most people mean the library API when referring to
clients without 'CLI'.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack preview

2013-12-11 Thread Randall Burt
On Dec 10, 2013, at 3:46 PM, Zane Bitter zbit...@redhat.com wrote:

 On 10/12/13 15:10, Randall Burt wrote:
 On Dec 10, 2013, at 1:27 PM, Zane Bitter zbit...@redhat.com
  wrote:
 
 On 10/12/13 12:46, Richard Lee wrote:
 Hey all,
 
 We're working on a blueprint
 https://blueprints.launchpad.net/heat/+spec/preview-stack that adds
 the ability to preview what a given template+parameters would create in
 terms of resources.  We think this would provide significant value for
 blueprint authors and for other heat users that want to see what
 someone's template would create before actually launching resources (and
 possibly having to pay for them).
 
 +1 for this use case.
 
 BTW AWS supports something similar, which we never bothered to implement in 
 the compatibility API. You might want to do some research on that as a 
 starting point:
 
 http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_EstimateTemplateCost.html
 
 However the fact that we have pluggable resource types would make it very 
 difficult for us to do cost calculations inside Heat (and, in fact, 
 CloudFormation doesn't do that either, it just spits out a URL for their 
 separate calculator) - e.g. it's very hard to know which resources will 
 create, say, a Nova server unless they are all annotated in some way.
 
 Are you thinking the API will simply return a list of resource types and 
 counts? e.g.:
 
  {
    "OS::Nova::Server": 2,
    "OS::Cinder::Volume": 1,
    "OS::Neutron::FloatingIP": 1
  }
 
 If so, +1 for that implementation too. Don't forget that you will have to 
 recurse through provider templates, which may not contain what they say on 
 the tin.
 
 That sounds more than reasonable to me. I don't think we could begin to do 
 any sort of meaningful cost calculation without having to mostly punt to 
 the service provider anyway.
 
 Yeah, exactly.
 
 Although it occurs to me that we may want more detail than I originally 
 thought... e.g. knowing the flavor of any Nova servers is probably quite 
 important. Any ideas?
 
 The first thing that comes to mind is that we could annotate resource types 
 with the list of parameters we want to group by. That would enable something 
 like:
 
  {
   "OS::Nova::Server":
     [{"config": {"flavor": "m1.small"}, "count": 1},
      {"config": {"flavor": "m3.large"}, "count": 1}],
   "OS::Cinder::Volume":
     [{"config": {"size": 10}, "count": 1}],
   "OS::Neutron::FloatingIP":
     [{"config": {}, "count": 1}]
  }
 
 - ZB

Yeah, that makes a lot of sense from an "I want to calculate what this stack
is going to cost me" use case. My only concern is that a given service
provider may have different ideas as to what's important WRT a stack's
value, but we could always extend this with something in the global
environment, similar to how we discussed resource support status in those
reviews.

So it sounds to me like we just need to add a field to the property schema
that says "this property is important to the preview call".
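
A sketch of what that might look like (the flag name and schema shape are
assumptions for illustration, not Heat's actual property schema):

# Illustrative only: mark preview-relevant properties in the schema and
# keep only those values when building the preview output.
SERVER_SCHEMA = {
    'flavor': {'Type': 'String', 'ShowInPreview': True},
    'image': {'Type': 'String'},
}

def preview_config(schema, properties):
    """Keep only the properties the schema marks as preview-relevant."""
    return {name: properties.get(name)
            for name, defn in schema.items()
            if defn.get('ShowInPreview')}

# preview_config(SERVER_SCHEMA, {'flavor': 'm1.small',
#                                'image': 'fedora-20'})
# returns {'flavor': 'm1.small'}, which is what the grouped counts
# above would be keyed on.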

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][qa] How will ironic tests run in tempest?

2013-12-11 Thread David Kranz

On 12/10/2013 08:41 PM, Devananda van der Veen wrote:
 On Tue, Dec 10, 2013 at 12:43 PM, David Kranz dkr...@redhat.com wrote:


On 12/09/2013 01:37 PM, Devananda van der Veen wrote:

On Fri, Dec 6, 2013 at 2:13 PM, Clark Boylan clark.boy...@gmail.com wrote:

On Fri, Dec 6, 2013 at 1:53 PM, David Kranz dkr...@redhat.com wrote:
 It's great that tempest tests for ironic have been
submitted! I was
 reviewing https://review.openstack.org/#/c/48109/ and
noticed that the tests
 do not actually run. They are skipped because baremetal is
not enabled. This
 is not terribly surprising but we have had a policy in
tempest to only merge
 code that has demonstrated that it works. For services that
cannot run in
 the single-vm environment of the upstream gate we said
there could be a
 system running somewhere that would run them and report a
result to gerrit.
 Is there a plan for this, or to make an exception for ironic?

  -David

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

There is a change[0] to openstack-infra/config to add experimental
tempest jobs to test ironic. I think that change is close to being
ready, but I need to give it time for a proper review. Once in, that
will allow you to test 48109 (in theory, not sure if all the bits will
just work). I don't think these tests fall under the "cannot run in a
single vm environment" umbrella; we should be able to test the
baremetal code via the pxe booting of VMs within the single VM
environment.

[0] https://review.openstack.org/#/c/53917/


Clark


We can test the ironic services, database, and the driver
interfaces by using our fake driver within a single devstack VM
today (I'm not sure the exercises for all of this have been
written yet, but it's practical to test it). OTOH, I don't
believe we can test a PXE deploy within a single VM today, and
need to resume discussions with infra about this.

There are some other aspects of Ironic (IPMI, SOL access, any
vendor-specific drivers) which we'll need real hardware to test
because they can't effectively be virtualized. TripleO should
cover some (much?) of those needs, once they are able to switch
to using Ironic instead of nova-baremetal.

-Devananda

So it seems that the code in the submitted tempest tests can run
in a regular job if devstack is configured to enable ironic, but
that this cannot be the default. So I propose that we create a
regular devstack+ironic job that will run in the ironic and
tempest gates, and run just the ironic tests. When third-party
bare-metal results can be reported for ironic, tempest can then
accept tests that require bare-metal.  Does anyone have a problem
with this approach?

 -David


As I understand it, the infra/config patch which Clark already linked 
(https://review.openstack.org/#/c/53917), which has gone through 
several iterations, should be enabling Ironic within devstack -- and 
thus causing tempest to run the relevant tests -- within the Ironic 
and Tempest check and gate pipelines. This will exercise Ironic's API 
by performing CRUD actions on resources. It doesn't do any more than 
that yet.
It looks like that patch is adding ironic jobs to the experimental queue 
but I think we want them on check/gate.


David, I'm not sure what you mean by when third-party bare-metal 
results can be reported for ironic -- I don't see any reason why we 
couldn't accept third-party smoke tests right now, except that none of 
the tempest tests are written... Am I missing something?
I was assuming there were some ironic tests that actually need bare 
metal resources to run. Perhaps there are not. Either way, we just want 
to make sure that when tests are submitted to tempest we have evidence 
that they have successfully run. Sounds like the CRUD tests will just 
work the same way as our existing tests once ironic is enabled in devstack.


In the longer term, we are planning to enable tempest testing of 
deployment by ironic within devstack-gate as all the pieces come 
together. This will take a fair bit more work / time, but I'm going to 
start nudging resources in this direction very soon. In fact, we just 
talked about this in #infra for a bit. Here's an attempt to summarize 
what came of it w.r.t. Ironic's testing plans. We will need:


- some changes in devstack-gate to prepare a new environment by...
-- install sshd + 

[openstack-dev] [Ceilometer][Glance][Oslo]Notification issue with Glance

2013-12-11 Thread Nadya Privalova
Hello, guys!

We are facing a Glance notifications issue during the Tempest tests for
Ceilometer. We tried to send a notification ourselves (during the
investigation we found that Glance uses almost the same code):

from oslo.config import cfg
from oslo import messaging

# The original snippet never defined CONF; it needs the global
# config object from oslo.config.
CONF = cfg.CONF

CONF.rabbit_host = 'localhost'
CONF.rabbit_port = 5672
CONF.rabbit_use_ssl = False
CONF.rabbit_userid = 'guest'
CONF.rabbit_password = 'guest'
CONF.rabbit_virtual_host = '/'
CONF.rabbit_notification_exchange = 'glance'
CONF.rabbit_notification_topic = 'notifications'
CONF.rabbit_durable_queues = False
CONF.notification_driver = 'rabbit'

tr = messaging.get_transport(CONF, 'rabbit://')
# Note: Notifier's second positional argument is the publisher_id and
# the third is the notification driver, so 'messaging' is used as the
# publisher here; the two values may be swapped by mistake.
n = messaging.Notifier(tr, 'messaging', 'image.localhost')
n.info({}, 'image.hello_world', 'Hello, World!')

And no messages were detected in Rabbit, and no ERRORs.
Please advise; maybe this issue is known?

Thanks for any help,
Nadya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Monty Taylor


On 12/11/2013 03:51 PM, Russell Bryant wrote:
 On 12/10/2013 05:57 PM, Paul McMillan wrote:
 +1 on Tatiana Mazur, she's been doing a bunch of good work lately.

 I'm fine with me being removed from core provided you have someone else 
 qualified to address security issues as they come up. My contributions have 
 lately been reviewing and responding to security issues, vetting fixes for 
 those, and making sure they happen in a timely fashion. Fortunately, we 
 haven't had too many of those lately. Other than that, I've been lurking and 
 reviewing to make sure nothing egregious gets committed.

 If you don't have anyone else who is a web security specialist on the core 
 team, I'd like to stay. Since I'm also a member of the Django security team, 
 I offer a significant chunk of knowledge about how the underlying security 
 protections are intended to work.
 
 Security reviews aren't done on gerrit, though.  They are handled in
 launchpad bugs.  It seems you could still contribute in this way without
 being on the horizon-core team responsible for reviewing normal changes
 in gerrit.
 
 The bigger point is that you don't have to be on whatever-core to
 contribute productively to reviews.  I think every project has people
 that make important review contributions, but aren't necessarily
 reviewing regularly enough to be whatever-core.

And as a follow up - I betcha the vulnerability-management team would
LOVE to have you!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [Scheduler] about scheduler-as-a-service

2013-12-11 Thread Haiming Yang
It looks to me like this scheduler-as-a-service is not really a service; it is
more like a function with some changeable parameters. It needs many nova and
cinder functions done first.

-Original Message-
From: Qiu Yu unic...@gmail.com
Sent: 2013/12/11 14:41
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Scheduler] about scheduler-as-a-service



On Tue, Dec 10, 2013 at 5:30 PM, Lingxian Kong anlin.k...@gmail.com wrote:

we know that there is a scheduler-as-a-service[1] working in progress now, 
aiming at smart resource placement and also providing the instance group API 
work for nova.


But what I wonder is does it include the feature of DRS(Distributed Resource 
Scheduler, something like that), as it is in vCenter[2], or is there any 
project related to this? or some related bp?


Any hints are appreciated. I apologize if this question was already covered and 
I missed it.




For the smart portion, maybe you should take a look at 
https://blueprints.launchpad.net/nova/+spec/solver-scheduler



And for the DRS feature, I think it's more likely to fit into nova conductor's
role. After all the migration tasks have been moved to conductor, a feature
like DRS could be discussed as the next step.
https://blueprints.launchpad.net/nova/+spec/cold-migrations-to-conductor
https://blueprints.launchpad.net/nova/+spec/unified-migrations



[1]https://etherpad.openstack.org/p/icehouse-external-scheduler
[2]https://www.vmware.com/cn/products/vsphere/features/drs-dpm.html




BTW, the second link you provided seems to be in Chinese. For those who are
interested, please use this one instead.
http://www.vmware.com/pdf/vmware_drs_wp.pdf


--
Qiu Yu___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] VMwareAPI sub-team status update 2013-12-11

2013-12-11 Thread Shawn Hartsock
Greetings Stackers!

The VMwareAPI subteam is still working out a few priorities, but we should
have these mostly ironed out by next week's IRC meeting. If you need to
comment or adjust expectations be sure to show up and let us know. We had a
very active discussion in this week's IRC meeting. I think our plans are
pretty well set for Icehouse-2 now. We'll follow up again in next week's
IRC meeting on #openstack-meeting at
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20131218T1700

We may edit and change this list a bit over the next 7 days, but here's
what we're tracking so far.

== Blueprint priorities ==

Icehouse-2

* https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management
  - approved!

* https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support - approved!

* https://blueprints.launchpad.net/oslo/+spec/pw-keyrings - or an alternate
design... finalize by the 2013-12-18 meeting in IRC

* https://blueprints.launchpad.net/nova/+spec/autowsdl-caching - phase 1,
needs an edit and approval...

Icehouse-3

* https://blueprints.launchpad.net/nova/+spec/autowsdl-caching - phase 2
(might need to be split into 2 BPs?) - needs edit/approval

* https://blueprints.launchpad.net/nova/+spec/config-validation-script -
needs edit/approval; this is the Nova VMware driver half of another BP

I would like to have anything targeted at Icehouse-2 out of Draft status
and ready for approval by next week's IRC meeting! If you have a related BP
that needs help, ping me or show up in IRC to solicit help. Other than
that...

The order of the day is reviews! We need to get more reviews on our own
code, otherwise how can we recommend a change for review by a core
reviewer? We can save everybody time and effort if people with a vested
interest in the VMware drivers step up and really work things out before
they get escalated up.

== Bugs by priority: ==

* High/Critical, needs review : 'vmware driver does not work
with more than one datacenter in vC'
 https://review.openstack.org/52630
 https://review.openstack.org/43270
* High/Critical, needs review : 'nova failures when vCenter
has multiple datacenters'
 https://review.openstack.org/52630
 https://review.openstack.org/43270
* High/High, needs review : 'VMware: spawning large amounts of VMs
concurrently sometimes causes VMDK lock error'
 https://review.openstack.org/58598
* High/High, needs review : 'VMWare: AssertionError: Trying to re-send() an
already-triggered event.'
 https://review.openstack.org/54808
* High/High, needs review : 'VMware: timeouts due to nova-compute stuck at
100% when using deploying 100 VMs'
 https://review.openstack.org/60259
* Medium/High, needs review : 'VMware: instance names can be edited, breaks
nova-driver lookup'
 https://review.openstack.org/59571
* Medium/High, needs revision : '_check_if_folder_file_exists only checks
for metadata file'
 https://review.openstack.org/48544


= Reviews By fitness for core: =


== needs one more +2/approval ==
* https://review.openstack.org/47743
 title: 'VMWare: bug fix for Vim exception handling'
votes: +2:1, +1:7, -1:0, -2:0. +80 days in progress, revision: 11 is 18
days old

== ready for core ==
* https://review.openstack.org/55070
title: 'VMware: fix rescue with disks are not hot-addable'
 votes: +2:0, +1:5, -1:0, -2:0. +38 days in progress, revision: 2 is 17
days old
* https://review.openstack.org/49692
 title: 'VMware: iscsi target discovery fails while attaching volumes'
votes: +2:0, +1:5, -1:0, -2:0. +68 days in progress, revision: 10 is 8 days
old
* https://review.openstack.org/57376
title: 'VMware: delete vm snapshot after nova snapshot'
 votes: +2:0, +1:5, -1:0, -2:0. +21 days in progress, revision: 4 is 16
days old
* https://review.openstack.org/57519
 title: 'VMware: use .get() to access 'summary.accessible''
votes: +2:0, +1:5, -1:0, -2:0. +21 days in progress, revision: 1 is 16 days
old
* https://review.openstack.org/54361
title: 'VMware: fix datastore selection when token is returned'
 votes: +2:0, +1:8, -1:0, -2:0. +43 days in progress, revision: 5 is 42
days old

== needs review ==
* https://review.openstack.org/59571
 title: 'VMware: fix instance lookup against vSphere'
votes: +2:0, +1:1, -1:0, -2:0. +9 days in progress, revision: 8 is 6 days
old
* https://review.openstack.org/52630
title: 'VMware: fix bug when more than one datacenter exists'
 votes: +2:0, +1:3, -1:0, -2:0. +54 days in progress, revision: 20 is 1
days old
* https://review.openstack.org/60010
title: 'VMware: prefer shared datastores over unshared'
 votes: +2:0, +1:3, -1:0, -2:0. +7 days in progress, revision: 1 is 5 days
old
* https://review.openstack.org/55038
 title: 'VMware: bug fix for VM rescue when config drive is config...'
votes: +2:0, +1:2, -1:0, -2:0. +39 days in progress, revision: 4 is 8 days

[openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Tzu-Mainn Chen
Hi,

I'm trying to clarify the terminology being used for Tuskar, which may be 
helpful so that we're sure
that we're all talking about the same thing :)  I'm copying responses from the 
requirements thread
and combining them with current requirements to try and create a unified view.  
Hopefully, we can come
to a reasonably rapid consensus on any desired changes; once that's done, the 
requirements can be
updated.

* NODE - a physical, general-purpose machine capable of running in many roles. 
Some nodes may have a hardware layout that is particularly
   useful for a given role.

 * REGISTRATION - the act of creating a node in Ironic

 * ROLE - a specific workload we want to map onto one or more nodes. 
Examples include 'undercloud control plane', 'overcloud control
   plane', 'overcloud storage', 'overcloud compute' etc.

 * MANAGEMENT NODE - a node that has been mapped with an undercloud role
 * SERVICE NODE - a node that has been mapped with an overcloud role
* COMPUTE NODE - a service node that has been mapped to an 
overcloud compute role
* CONTROLLER NODE - a service node that has been mapped to an 
overcloud controller role
* OBJECT STORAGE NODE - a service node that has been mapped to an 
overcloud object storage role
* BLOCK STORAGE NODE - a service node that has been mapped to an 
overcloud block storage role

 * UNDEPLOYED NODE - a node that has not been mapped with a role
  * another option - UNALLOCATED NODE - a node that has not been 
allocated through nova scheduler (?)
   - (after reading lifeless's explanation, I 
agree that allocation may be a
  misleading term under TripleO, so I 
personally vote for UNDEPLOYED)

 * INSTANCE - A role deployed on a node - this is where work actually 
happens.

* DEPLOYMENT

 * SIZE THE ROLES - the act of deciding how many nodes will need to be 
assigned to each role
   * another option - DISTRIBUTE NODES (?)
 - (I think the former is more accurate, but 
perhaps there's a better way to say it?)

 * SCHEDULING - the process of deciding which role is deployed on which node

 * SERVICE CLASS - a further categorization within a service role for a 
particular deployment.

  * NODE PROFILE - a set of requirements that specify what attributes a 
node must have in order to be mapped to
   a service class



Does this seem accurate?  All feedback is appreciated!

Mainn

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Nomination of Sandy Walsh to core team

2013-12-11 Thread Gordon Chung
 To that end, I would like to nominate Sandy Walsh from Rackspace to
 ceilometer-core. Sandy is one of the original authors of StackTach, and
 spearheaded the original stacktach-ceilometer integration. He has been
 instrumental in many of my code reviews, and has contributed much of the
 existing event storage and querying code.

+1 in support of Sandy.  The Event work he's led in Ceilometer has been an 
important feature and I think he has some valuable ideas.

cheers,
gordon chung
openstack, ibm software standards___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] domain admin role query

2013-12-11 Thread Adam Young

https://blueprints.launchpad.net/keystone/+spec/update-policy-to-cloud

On 12/11/2013 11:18 AM, Lyle, David wrote:

+1 on moving the domain admin role rules to the default policy.json

-David Lyle

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Wednesday, December 11, 2013 9:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] domain admin role query


On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox jamielen...@redhat.com wrote:
Using the default policies, it will simply check for the admin role and not
care about the domain that the admin is limited to. This is partially a
leftover from the V2 API, when there weren't domains to worry about.

A better example of policies are in the file etc/policy.v3cloudsample.json. In 
there you will see the rule for create_project is:

 "identity:create_project": "rule:admin_required and
domain_id:%(project.domain_id)s",

as opposed to (in policy.json):

 "identity:create_project": "rule:admin_required",

This is what you are looking for to scope the admin role to a domain.

We need to start moving the rules from policy.v3cloudsample.json to the default 
policy.json =)
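
Paraphrased in Python (for illustration only; this is not keystone code,
just the effect of the v3cloudsample rule quoted above):

# The domain-scoped rule grants create_project only when the caller has
# the admin role AND the target project lives in the caller's domain.
def create_project_allowed(token, project):
    is_admin = 'admin' in token['roles']
    same_domain = token['domain_id'] == project['domain_id']
    return is_admin and same_domain

# With the default policy.json rule, same_domain is never checked, which
# is why Ravi's domain admin could create a project in the 'default'
# domain as well.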
  


Jamie

- Original Message -

From: Ravi Chunduru ravi...@gmail.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, 11 December, 2013 11:23:15 AM
Subject: [openstack-dev] [keystone] domain admin role query

Hi,
I am trying out Keystone V3 APIs and domains.
I created a domain, created a project in that domain, and created a user in
that domain and project.
Next, gave an admin role for that user in that domain.

I am assuming that user is now admin to that domain.
Now, I got a scoped token with that user, domain and project. With that
token, I tried to create a new project in that domain. It worked.

But, using the same token, I could also create a new project in a 'default'
domain too. I expected it to throw an authentication error. Is it a bug?

Thanks,
--
Ravi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Tempest blueprints status update and rationale, input demanded

2013-12-11 Thread Matthew Treinish
On Wed, Dec 11, 2013 at 01:44:19PM +0100, Giulio Fidente wrote:
 hi,
 
 I'm attempting to rationalize on the status of tempest blueprints. I
 need your help so I organized questions in a few open points.
 
 
 * (1) I'm looking for input here on the actual status of the
 following blueprints, which are already approved or in a good
 progress state:
 
 https://blueprints.launchpad.net/tempest/+spec/add-basic-heat-tests
 
 seems done, shall we close it? (steve baker)
 
 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
 
 seems done, shall we close it? (david kranz)
 
 https://blueprints.launchpad.net/tempest/+spec/config-cleanup
 https://blueprints.launchpad.net/tempest/+spec/config-verification
 
 seems done, close? (mtreinish)

These are both still in progress. The config verification tooling one is
dependent on the config file format being finalized. All that it has right now
is a basic framework to build off of.

 
 https://blueprints.launchpad.net/tempest/+spec/fix-gate-tempest-devstack-vm-quantum-full
 
 old but still valid for icehouse, what is the real status here? (mlavalle)
 
 https://blueprints.launchpad.net/tempest/+spec/client-lib-stability
 
 is slow progress appropriate here? (david kranz)
 
 https://blueprints.launchpad.net/tempest/+spec/quantum-basic-api
 
 this was approved but it looks to me quite hard to implement tests
 for the different network topologies, is it even possible given our
 infra? (mlavalle)
 
 https://blueprints.launchpad.net/tempest/+spec/crash-scenario-generator
 
 needs approval, is there any agreement upon this being implemented
 or shall we drop this? (all core and contributors)
 
 https://blueprints.launchpad.net/tempest/+spec/missing-compute-api-extensions
 
 identifying missing tests isn't a blueprint per se, I think, so I'd
 close this unless someone volunteers the work to at least identify
 the wanted tests
 
 
 * (2) The following are instead blueprints open for discussion which
 I think should either be approved or closed, again input is more
 than welcomed as well as assignees if you care about it:
 
 https://blueprints.launchpad.net/tempest/+spec/refactor-rest-client

I'm going to reject this one. It seems to come up every cycle, but because
of the differences between XML/JSON, having a common client just ends up
being more work. I remember a ML thread on this topic several months ago.

 
 https://blueprints.launchpad.net/tempest/+spec/tempest-multiple-images

This is an old one, we have multi-image test support in tempest. Although I'm
sure it could be better. It's probably safe to close it.

 
 https://blueprints.launchpad.net/tempest/+spec/general-swift-client

Not being very familiar with the swift API, I can't say how much simpler
having a single client for all the resource types would be. You should cycle
back around with Martina Kollarova and get some more details about what she's
planning.

 
 https://blueprints.launchpad.net/tempest/+spec/input-scenarios-for-scenario

This was approved during last week's meeting.

 
 https://blueprints.launchpad.net/tempest/+spec/neutron-advanced-scenarios

This should be approved.

 
 https://blueprints.launchpad.net/tempest/+spec/stress-api-tracking

I'd defer to mkoderer on this one, but it sounds reasonable to me.

 
 https://blueprints.launchpad.net/tempest/+spec/test-developer-documentation

We definitely need more documentation for tempest. But, I think this needs a
better description and a concrete list of work items though. I don't think the
definition of a negative test is really a big issue when it comes to tempest
documentation.

 
 
 * (3) Finally, as a general rule of thumb for the many remaining
 blueprints which only demand for new tests, I think we should keep
 and approve blueprints asking for basic tests around new components
 but *not* (as in close) blueprints demanding for additional tests
 around existing components. Does it look reasonable?

I think this is fine, although we probably need a better story around adding
tests to prevent duplication of effort.

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [OpenStack][Heat] AutoScaling scale down issue

2013-12-11 Thread Clint Byrum
Hi!

This list is for discussion of ongoing bugs and features in Heat. For
user-centric discussions, please use the main openstack mailing list:

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thanks!

Excerpts from Haiming Yang's message of 2013-12-11 09:40:32 -0800:
 I think it might be useful to think about how to integrate Savanna into Heat.
 When auto-scaling down, usually the first-created node will be removed first.
 
 -Original Message-
 From: Jay Lau jay.lau@gmail.com
 Sent: 2013/12/11 23:46
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [OpenStack][Heat] AutoScaling scale down issue
 
 Greetings,
 
 
 Here come a question related to heat auto scale down.
 
 
 The scenario is as following:
 
 I was trying to deploy hadoop cluster with heat Auto Scaling template. 
 
 When scale up a slave node, I can use user-data to do some post work for 
 configuration file on hadoop master node base on the information of slave 
 node (The mainly configuration file is conf/slaves as I need to put slave 
 node to this file);
 
 But when scale down, seems I have no chance to do some configuration for the 
 master node (Remove the scale down node from conf/slaves) as master node do 
 not know which slave node was scale down.
 
 
 Does anyone has some experience on this?
 
 
 
 Thanks,
 
 
 Jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Jordan OMara

On 11/12/13 14:15 -0500, Tzu-Mainn Chen wrote:

Hi,

I'm trying to clarify the terminology being used for Tuskar, which may be 
helpful so that we're sure
that we're all talking about the same thing :)  I'm copying responses from the 
requirements thread
and combining them with current requirements to try and create a unified view.  
Hopefully, we can come
to a reasonably rapid consensus on any desired changes; once that's done, the 
requirements can be
updated.

* NODE a physical general purpose machine capable of running in many roles. 
Some nodes may have hardware layout that is particularly
  useful for a given role.

* REGISTRATION - the act of creating a node in Ironic

* ROLE - a specific workload we want to map onto one or more nodes. 
Examples include 'undercloud control plane', 'overcloud control
  plane', 'overcloud storage', 'overcloud compute' etc.

* MANAGEMENT NODE - a node that has been mapped with an undercloud role
* SERVICE NODE - a node that has been mapped with an overcloud role
   * COMPUTE NODE - a service node that has been mapped to an overcloud 
compute role
   * CONTROLLER NODE - a service node that has been mapped to an 
overcloud controller role
   * OBJECT STORAGE NODE - a service node that has been mapped to an 
overcloud object storage role
   * BLOCK STORAGE NODE - a service node that has been mapped to an 
overcloud block storage role

* UNDEPLOYED NODE - a node that has not been mapped with a role
 * another option - UNALLOCATED NODE - a node that has not been 
allocated through nova scheduler (?)
  - (after reading lifeless's explanation, I agree that 
allocation may be a
 misleading term under TripleO, so I 
personally vote for UNDEPLOYED)

* INSTANCE - A role deployed on a node - this is where work actually 
happens.

* DEPLOYMENT

* SIZE THE ROLES - the act of deciding how many nodes will need to be 
assigned to each role
  * another option - DISTRIBUTE NODES (?)
- (I think the former is more accurate, but 
perhaps there's a better way to say it?)

* SCHEDULING - the process of deciding which role is deployed on which node

* SERVICE CLASS - a further categorization within a service role for a 
particular deployment.

 * NODE PROFILE - a set of requirements that specify what attributes a 
node must have in order to be mapped to
  a service class



Does this seem accurate?  All feedback is appreciated!

Mainn



Thanks for doing this! Presumably this is going to go on a wiki where
we can look at it forever and ever?
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Tzu-Mainn Chen
 Hi,
 
 I'm trying to clarify the terminology being used for Tuskar, which may be
 helpful so that we're sure
 that we're all talking about the same thing :)  I'm copying responses from
 the requirements thread
 and combining them with current requirements to try and create a unified
 view.  Hopefully, we can come
 to a reasonably rapid consensus on any desired changes; once that's done,
 the requirements can be
 updated.
 
 * NODE a physical general purpose machine capable of running in many roles.
 Some nodes may have hardware layout that is particularly
useful for a given role.
 
  * REGISTRATION - the act of creating a node in Ironic
 
  * ROLE - a specific workload we want to map onto one or more nodes.
  Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.
 
  * MANAGEMENT NODE - a node that has been mapped with an undercloud
  role
  * SERVICE NODE - a node that has been mapped with an overcloud role
 * COMPUTE NODE - a service node that has been mapped to an
 overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped to an
 overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been mapped to
 an overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been mapped to an
 overcloud block storage role
 
  * UNDEPLOYED NODE - a node that has not been mapped with a role
   * another option - UNALLOCATED NODE - a node that has not been
   allocated through nova scheduler (?)
- (after reading lifeless's explanation,
I agree that allocation may be a
   misleading term under TripleO, so I
   personally vote for UNDEPLOYED)
 
  * INSTANCE - A role deployed on a node - this is where work actually
  happens.
 
 * DEPLOYMENT
 
  * SIZE THE ROLES - the act of deciding how many nodes will need to be
  assigned to each role
* another option - DISTRIBUTE NODES (?)
  - (I think the former is more accurate, but
  perhaps there's a better way to say it?)
 
  * SCHEDULING - the process of deciding which role is deployed on which
  node
 
  * SERVICE CLASS - a further categorization within a service role for a
  particular deployment.
 
   * NODE PROFILE - a set of requirements that specify what
   attributes a node must have in order to be mapped to
a service class
 
 
 
 Does this seem accurate?  All feedback is appreciated!
 
 Mainn
 
 
 Thanks for doing this! Presumably this is going to go on a wiki where
 we can look at it forever and ever?


Yep, if consensus is reached, I'd replace the current tuskar glossary on the 
wiki with this
(as well as update the requirements).

Mainn


 --
 Jordan O'Mara jomara at redhat.com
 Red Hat Engineering, Raleigh
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Jay Dobies


So glad we're hashing this out now. This will save a bunch of headaches 
in the future. Good call pushing this forward.


On 12/11/2013 02:15 PM, Tzu-Mainn Chen wrote:

Hi,

I'm trying to clarify the terminology being used for Tuskar, which may be 
helpful so that we're sure
that we're all talking about the same thing :)  I'm copying responses from the 
requirements thread
and combining them with current requirements to try and create a unified view.  
Hopefully, we can come
to a reasonably rapid consensus on any desired changes; once that's done, the 
requirements can be
updated.

* NODE a physical general purpose machine capable of running in many roles. 
Some nodes may have hardware layout that is particularly
useful for a given role.


Do we ever need to distinguish between undercloud and overcloud nodes?


  * REGISTRATION - the act of creating a node in Ironic


DISCOVERY - The act of having nodes found auto-magically and added to 
Ironic with minimal user intervention.




  * ROLE - a specific workload we want to map onto one or more nodes. 
Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.

  * MANAGEMENT NODE - a node that has been mapped with an undercloud 
role
  * SERVICE NODE - a node that has been mapped with an overcloud role
 * COMPUTE NODE - a service node that has been mapped to an 
overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped to an 
overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been mapped to an 
overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been mapped to an 
overcloud block storage role

  * UNDEPLOYED NODE - a node that has not been mapped with a role
   * another option - UNALLOCATED NODE - a node that has not been 
allocated through nova scheduler (?)
- (after reading lifeless's explanation, I agree that 
allocation may be a
   misleading term under TripleO, so I 
personally vote for UNDEPLOYED)


Undeployed still sounds a bit odd to me when paired with the word role. 
I could see deploying a workload bundle or something, but a role 
doesn't feel like a tangible thing that is pushed out somewhere.


Unassigned? As in, it hasn't been assigned a role yet.


  * INSTANCE - A role deployed on a node - this is where work actually 
happens.


I'm fine with instance, but the phrasing "a role deployed on a node" feels
odd to me in the same way undeployed does. Maybe a slight change to "a node
that has been assigned a role", but that also may be me being entirely too
nit-picky.


To put it in context, on a scale of 1-10, my objection to this and 
undeployed is around a 2, so don't let me come off as strenuously 
objecting.



* DEPLOYMENT

  * SIZE THE ROLES - the act of deciding how many nodes will need to be 
assigned to each role
* another option - DISTRIBUTE NODES (?)
  - (I think the former is more accurate, but 
perhaps there's a better way to say it?)

  * SCHEDULING - the process of deciding which role is deployed on which 
node


I know this derives from a Nova term, but to me, the idea of 
scheduling carries a time-in-the-future connotation to it. The 
interesting part of what goes on here is the assignment of which roles 
go to which instances.



  * SERVICE CLASS - a further categorization within a service role for a 
particular deployment.


I don't understand this one, can you add a few examples?


   * NODE PROFILE - a set of requirements that specify what attributes 
a node must have in order to be mapped to
a service class


Even without knowing what service class is, I like this one.  :)




Does this seem accurate?  All feedback is appreciated!

Mainn


Thanks again  :D

 ___

OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday December 12th at 22:00 UTC

2013-12-11 Thread Matthew Treinish
The weekly OpenStack QA team IRC meeting will be tomorrow, December 12th at
22:00 UTC in the #openstack-meeting channel. 

The meeting agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda. 


Also, a quick reminder that tomorrow's meeting will be the start of the new
alternating meeting time schedule. The weekly meetings will now be oscillating
between 17:00 UTC and 22:00 UTC on Thursdays. So, next week's meeting will be
on December 19th at 17:00 UTC.


-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread James Slagle
This is really helpful, thanks for pulling it together.

comment inline...

On Wed, Dec 11, 2013 at 2:15 PM, Tzu-Mainn Chen tzuma...@redhat.com wrote:
 * NODE a physical general purpose machine capable of running in many roles. 
 Some nodes may have hardware layout that is particularly
useful for a given role.

  * REGISTRATION - the act of creating a node in Ironic

  * ROLE - a specific workload we want to map onto one or more nodes. 
 Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.

  * MANAGEMENT NODE - a node that has been mapped with an undercloud 
 role
  * SERVICE NODE - a node that has been mapped with an overcloud role
 * COMPUTE NODE - a service node that has been mapped to an 
 overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped to an 
 overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been mapped to an 
 overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been mapped to an 
 overcloud block storage role

  * UNDEPLOYED NODE - a node that has not been mapped with a role

This begs the question (for me anyway), why not call it UNMAPPED NODE?
 If not, can we s/mapped/deployed in the descriptions above instead?

It might make sense then to define mapped and deployed in technical
terms as well.  Is mapped just the act of associating a node with a
role in the UI, or does it mean that bits have actually been
transferred across the wire to the node's disk and it's now running?

   * another option - UNALLOCATED NODE - a node that has not been 
 allocated through nova scheduler (?)
- (after reading lifeless's explanation, I 
 agree that allocation may be a
   misleading term under TripleO, so I 
 personally vote for UNDEPLOYED)

  * INSTANCE - A role deployed on a node - this is where work actually 
 happens.

 * DEPLOYMENT

  * SIZE THE ROLES - the act of deciding how many nodes will need to be 
 assigned to each role
* another option - DISTRIBUTE NODES (?)
  - (I think the former is more accurate, but 
 perhaps there's a better way to say it?)

  * SCHEDULING - the process of deciding which role is deployed on which 
 node

  * SERVICE CLASS - a further categorization within a service role for a 
 particular deployment.

   * NODE PROFILE - a set of requirements that specify what attributes 
 a node must have in order to be mapped to
a service class




-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Heat] AutoScaling scale down issue

2013-12-11 Thread Angus Salkeld

On 11/12/13 23:43 +0800, Jay Lau wrote:

Greetings,

Here come a question related to heat auto scale down.

The scenario is as following:
I was trying to deploy hadoop cluster with heat Auto Scaling template.

When scale up a slave node, I can use user-data to do some post work for
configuration file on hadoop master node base on the information of slave
node (The mainly configuration file is conf/slaves as I need to put slave
node to this file);

But when scale down, seems I have no chance to do some configuration for
the master node (Remove the scale down node from conf/slaves) as master
node do not know which slave node was scale down.

Does anyone has some experience on this?


You should use the metadata rather than the userdata (userdata is
not updatable), as you can retrieve the metadata and act on it
during the life of the server.

-Angus


Thanks,

Jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Ryan Petrello
Hello,

I’ve spent the past week experimenting with using Pecan for Nova’s API, and 
have opened an experimental review:

https://review.openstack.org/#/c/61303/6

…which implements the `versions` v3 endpoint using pecan (and paves the way for 
other extensions to use pecan).  This is a *potential* approach I've considered 
for gradually moving the V3 API, but I’m open to other suggestions (and 
feedback on this approach).  I’ve also got a few open questions/general 
observations:

1.  It looks like the Nova v3 API is composed *entirely* of extensions 
(including “core” API calls), and that extensions and their routes are 
discoverable and extensible via installed software that registers itself via 
stevedore.  This seems to lead to an API that’s composed of installed software, 
which in my opinion, makes it fairly hard to map out the API (as opposed to how 
routes are manually defined in other WSGI frameworks).  I assume at this time, 
this design decision has already been solidified for v3?

2.  The approach in my review would allow us to translate extensions to pecan 
piecemeal.  To me, this seems like a more desirable and manageable approach 
than moving everything to pecan at once, given the scale of Nova’s API.  Do 
others agree/disagree?  Until all v3 extensions are translated, this means the 
v3 API is composed of two separate WSGI apps.

3.  Can somebody explain the purpose of the wsgi.deserializer decorator?  It’s 
something I’ve not accounted for yet in my pecan implementation.  Is the goal 
to deserialize the request *body* from e.g., XML into a usable data structure?  
Is there an equivalent for JSON handling?  How does this relate to the schema 
validation that’s being done in v3?
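
For context, a minimal sketch of the kind of controller the review adds,
assuming pecan's RestController conventions (the class name and payload
shape are illustrative only, not the actual code under review):

    import pecan
    from pecan import rest

    class VersionsController(rest.RestController):
        @pecan.expose('json')
        def get_all(self):
            # RestController maps GET /versions to get_all(); the 'json'
            # template serializes the returned dict automatically.
            return {'versions': [{'id': 'v3.0', 'status': 'EXPERIMENTAL'}]}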

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Matt Wagner
On Wed Dec 11 14:15:22 2013, Tzu-Mainn Chen wrote:
 Hi,
 
 I'm trying to clarify the terminology being used for Tuskar, which
 may be helpful so that we're sure that we're all talking about the
 same thing :)  I'm copying responses from the requirements thread and
 combining them with current requirements to try and create a unified
 view.  Hopefully, we can come to a reasonably rapid consensus on any
 desired changes; once that's done, the requirements can be updated. 

Your mail client seems to wrap lines awkwardly, well past the standard
length. Just seems kinda odd.


 * UNDEPLOYED NODE - a node that has not been mapped with a role
  * another option - UNALLOCATED NODE - a node that has not been allocated through nova scheduler (?)
    - (after reading lifeless's explanation, I agree that allocation may be a misleading term under TripleO, so I personally vote for UNDEPLOYED)

Not to muddy the waters further, but 'undeployed' sounds a little bit to
me like the node was deployed, and then we un-deployed it. I think
'nondeployed' eliminates that, but makes it sound fairly odd.

I like James' unmapped, FWIW. Though I guess it leaves the same
ambiguity about whether it was mapped and then un-mapped, or if it's yet
to be mapped.


 * SIZE THE ROLES - the act of deciding how many nodes will need to be assigned to each role
  * another option - DISTRIBUTE NODES (?) - (I think the former is more accurate, but perhaps there's a better way to say it?)

I don't love 'size the roles', though I'm not sure that 'distribute' has
the same meaning.

If I didn't already know what you meant, and you asked me what 'size the
nodes' meant, I'd assume it was about the size of a given instance --
e.g., does node X need 2GB RAM or should I give it 4GB?

What you're really doing is setting the 'count' of nodes, right? Set
Node Count? (I don't love that particular phrasing either, but it seems
more accurate/precise.)


 Does this seem accurate?  All feedback is appreciated!

Thanks for starting this discussion, Mainn. I agree with the others that
it's a good idea to get this stuff nailed down early on.


-- 
Matt Wagner
Software Engineer, Red Hat



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-11 Thread Stefano Maffulli
On 12/06/2013 02:19 AM, Jaromir Coufal wrote:
 We are growing. At the moment we are 4 core members and others are
 coming in. But honestly, contributors are not coming to specific
 projects - they go to reach UX community in a sense - OK this is awesome
 effort, how can I help? What can I work on? 

It seems to me, from the comments in the thread, that we could direct
these fresh energies at reviewing code from the UX perspective. Do
you think that code reviews across all projects are something in scope
for the UX team? If so, how do you think we can make it easier for the
UX team to discover reviews that may require comments?


/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Doug Hellmann
On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello
ryan.petre...@dreamhost.com wrote:

 Hello,

 I’ve spent the past week experimenting with using Pecan for Nova’s API,
 and have opened an experimental review:

 https://review.openstack.org/#/c/61303/6

 …which implements the `versions` v3 endpoint using pecan (and paves the
 way for other extensions to use pecan).  This is a *potential* approach
 I've considered for gradually moving the V3 API, but I’m open to other
 suggestions (and feedback on this approach).  I’ve also got a few open
 questions/general observations:

 1.  It looks like the Nova v3 API is composed *entirely* of extensions
 (including “core” API calls), and that extensions and their routes are
 discoverable and extensible via installed software that registers itself
 via stevedore.  This seems to lead to an API that’s composed of installed
 software, which in my opinion, makes it fairly hard to map out the API (as
 opposed to how routes are manually defined in other WSGI frameworks).  I
 assume at this time, this design decision has already been solidified for
 v3?


Yeah, I brought this up at the summit. I am still having some trouble
understanding how we are going to express a stable core API for
compatibility testing if the behavior of the API can be varied so
significantly by deployment decisions. Will we just list each required
extension, and forbid any extras for a compliant cloud?

Maybe the issue is caused by me misunderstanding the term extension,
which (to me) implies an optional component but is perhaps reflecting a
technical implementation detail instead?

Doug




 2.  The approach in my review would allow us to translate extensions to
 pecan piecemeal.  To me, this seems like a more desirable and manageable
 approach than moving everything to pecan at once, given the scale of Nova’s
 API.  Do others agree/disagree?  Until all v3 extensions are translated,
 this means the v3 API is composed of two separate WSGI apps.

 3.  Can somebody explain the purpose of the wsgi.deserializer decorator?
  It’s something I’ve not accounted for yet in my pecan implementation.  Is
 the goal to deserialize the request *body* from e.g., XML into a usable
 data structure?  Is there an equivalent for JSON handling?  How does this
 relate to the schema validation that’s being done in v3?

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Plan files and resources

2013-12-11 Thread Clayton Coleman


- Original Message -
 Devdatta,
 
 On Dec 10, 2013, at 12:37 PM, devdatta kulkarni
 devdatta.kulka...@rackspace.com wrote:
 
  Hi Adrian,
  
  Thanks for creating https://etherpad.openstack.org/p/solum-demystified
  
  I am really excited to see the examples. Especially cool is how
  examples 2 and 3 demonstrate using a component (solum_glance_id) created
  as part of example 1.
  
  
  Some questions/comments:
  
  1) Summarizing the sequence of events just to make sure I understand them
  correctly:
a) User selects a language pack and specifies its id in the plan file
 
 They could put the language pack reference into a Plan file, or we could
 generate a Plan file with a CLI command that feeds an auto-generated file to
 the API for the user. That might reduce the user complexity a bit for the
 general case.

It seems like the reasonable M1 and M2 scenarios are to get the bones of an 
integration working that allow a flexible Plan to exist (but not necessarily 
something an average user would edit).  M2 and M3 can focus on the support 
around making Plans that mere mortals can throw together (whether generated or 
precreated by an operator), and a lot of how that evolves depends on the other 
catalog work.  You could argue the resistance from some quarters to the current 
PaaS model is that the Plan equivalent is hardcoded and non-flexible - what 
is being done differently here is to offer the concepts necessary to allow 
other types of plans and application models to coexist in a single system.

 
b) User creates repo with the plan file in it.
 
 We could scan the repo for a Plan file to override the auto-generation step,
 to allow a method for customization.
 
    After this the flow could be:
    c.1) User uses solum cli to 'create' an application by giving reference to the repo uri
 
 I view this as the use of the cli app create command as the first step.
 They can optionally specify a Plan file to use for either the build
 sequence, or the app deployment sequence, or both (for a total of TWO Plan
 files). We could also allow plan files to be placed in the Git repo, and
 picked up there in the event that none are specified on the command line.
 
 Note that they may also put a HOT file in their repo, and bypass HOT file
 generation/catalog-lookup and cause Solum to use the supplied template. This
 would be useful for power users who want the ability to further influence
 the arrangement of the Heat stack.
 
    c.1.1) Solum creates a plan resource
    c.1.2) Solum model interpreter creates a Heat stack and does the rest of the things needed to create an assembly.
    (The created plan resource does not play any part in assembly creation as such.
     Its only role is being a 'trackback' to track the plan from which the assembly was created.)
 
 It's also a way to find out what services the given requirements were mapped
 to. In a Plan file, the services array contains ServiceSpecifications (see
 the EX[1-3] YAML examples under the services node for an example of what
 those look like). In a Plan resource, the services array includes a list of
 service resources so you can see what Solum's model interpreter mapped your
 requirements to.
 
    or,
    c.2) User uses solum cli to 'create/register' a plan by providing reference to the repo uri.
 c.2.1) Solum creates the plan resource.
    c.2.2) User uses solum cli to 'create' an application by specifying the created plan resource uri
    (In this flow, the plan is actively used).
 
 Yes, this would be another option. I expect that this approach may be used by
 users who want to create multitudes of Assemblies from a given Plan
 resource.
 
  2) Addition of new solum specific attributes in a plan specification is
  interesting.
I imagine those can be contributed back as Solum profile to CAMP spec?
 
 If we want, that input would certainly be welcomed.
 
  3) Model interpreter for generating Heat stack from a plan is a nice idea.
For all: Are there any recommended libraries for this?
 
 Good question. There are a number of orchestration systems that we could look
 at as case studies. Anything that has a declarative DSL is likely to have
 implementations that are relevant to our need for a model interpreter. This
 includes Heat.
 
  4) Just to confirm, I assume that the api-spec-review etherpad
  (https://etherpad.openstack.org/p/solum-api-spec-review),
is for FYI purposes only. If someone wants to know the current
thinking about the API, they should
just look at the solum-demystified etherpad
(https://etherpad.openstack.org/p/solum-demystified)
 
 I just updated the solum-api-spec-review, as that's actually still WIP. I
 labeled it as such.
 
 Thanks,
 
 Adrian
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] [glance] Please stop +Aing glance changes until your doc job is working

2013-12-11 Thread Sean Dague
Dear Glance core,

Until this review is sorted - https://review.openstack.org/#/c/60971/2

You won't be able to merge any changes, because of the docs issue with
sphinx.
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021863.html

Which means right now every glance patch that goes into the gate will
100% fail, and will cause 45-60 minute delay to every other project in
the gate as your change has to fail out of the queue.

Thanks,

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Running multiple neutron-servers

2013-12-11 Thread Mike Wilson
Hi Neutron team,

I haven't been involved in neutron meetings for quite some time so I'm not
sure where we are on this at this point. It is often recommended in
OpenStack guides and other operational materials to run multiple
neutron-servers to deal with the API load from Nova. Things like the
_heal_instance_info_caches periodic task as well as just normal create
requests are pretty heavy. Those issues aside, I think we can all agree that
it would be good for the neutron-server to be horizontally scalable. I don't
have a handle on the all the issues surrounding this. However, I did report
a bug a few months ago about concurrency and updates to the
IpAvailabilityRanges[1]. There was a fix proposed by Zhang Hua [2] that
seems like it needs more discussion.

Essentially, Salvatore has concerns about patching up a design flaw from
what I gather. At the same time, we still have had this issue since the
initial release of neutron(quantum) and it is still a really big deal for
deployers. I would like to propose that we pick up the conversation where
it left off on the proposed fix and _also_ consider any possible redesign
going forward.
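
For readers unfamiliar with the bug, the shape of the race and the classic
(and contested) mitigation look roughly like this - schematic SQLAlchemy,
not Neutron's actual code:

    # Two neutron-server processes can both read the same availability
    # range row, both pick an address, and both write back, corrupting
    # the range. A row lock makes the writers serialize instead:
    def allocate_from_range(session, range_model, subnet_id):
        ip_range = (session.query(range_model)
                    .filter_by(subnet_id=subnet_id)
                    .with_lockmode('update')  # SELECT ... FOR UPDATE
                    .first())
        # ... choose an address and shrink/split ip_range before commit ...
        return ip_range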

Could I get some feedback from Salvatore specifically and other members of
the team on this? I would also be happy to pitch in towards whatever
solution is decided on provided we can rescue the poor deployers :-).

-Mike Wilson


[1] https://bugs.launchpad.net/neutron/+bug/1214115
[2] https://review.openstack.org/43275
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] getting back on the train post-summit

2013-12-11 Thread Robert Collins
Hey everyone!

So the summit was a month back, but on the TripleO side we've kind of
slumped: there is great design work on the console side happening
(yay), but the CD side - where Nova rebuild support is the current
blocker - is basically stalled.

Folk are doing good work on related bits of plumbing, bringing up more
services and improving the quality of what we have, but we're not really
*moving forward*.

So - I want to suggest that we strap back on the 'collaborate rather
than partition' mindset, and all get stuck into the rebuild
preserving ephemeral partition blueprint - it is literally the single
most important thing for TripleO right now, and with a need for
patches in four projects (Nova, python-novaclient, the API docs and
finally in heat to expose the use of it) there is still plenty of
room for folk to make sure they don't tread on each other's toes.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Christopher Yeoh
On Thu, Dec 12, 2013 at 7:11 AM, Ryan Petrello
ryan.petre...@dreamhost.com wrote:

 Hello,

 I’ve spent the past week experimenting with using Pecan for Nova’s API,
 and have opened an experimental review:

 https://review.openstack.org/#/c/61303/6

 …which implements the `versions` v3 endpoint using pecan (and paves the
 way for other extensions to use pecan).  This is a *potential* approach
 I've considered for gradually moving the V3 API, but I’m open to other
 suggestions (and feedback on this approach).  I’ve also got a few open
 questions/general observations:

 1.  It looks like the Nova v3 API is composed *entirely* of extensions
 (including “core” API calls), and that extensions and their routes are
 discoverable and extensible via installed software that registers itself
 via stevedore.  This seems to lead to an API that’s composed of installed
 software, which in my opinion, makes it fairly hard to map out the API (as
 opposed to how routes are manually defined in other WSGI frameworks).  I
 assume at this time, this design decision has already been solidified for
 v3?


Yes, from an implementation view everything is an extension, even core
functionality. One issue with the V2 API is that because core is hard coded
and separate from the plugin framework there were things you could do in
core API code that you couldn't do in extensions and other things which you
could do in both, but had to do in different ways. Which is bad from a
maintainability/readability point of view. And inevitably we ended up with
extension specific code sitting in what should have been only core code. So
we ended up deciding to make everything a plugin, to ensure consistency in
how API code is written, and also ensured that the framework didn't treat
core API code in any special way.



 2.  The approach in my review would allow us to translate extensions to
 pecan piecemeal.  To me, this seems like a more desirable and manageable
 approach than moving everything to pecan at once, given the scale of Nova’s
 API.  Do others agree/disagree?  Until all v3 extensions are translated,
 this means the v3 API is composed of two separate WSGI apps.


Yes, I think this is the way to go. Attempting to get a big-bang patch
merged would be rather challenging.



 3.  Can somebody explain the purpose of the wsgi.deserializer decorator?
  It’s something I’ve not accounted for yet in my pecan implementation.  Is
 the goal to deserialize the request *body* from e.g., XML into a usable
 data structure?  Is there an equivalent for JSON handling?  How does this
 relate to the schema validation that’s being done in v3?


Yes, the deserializer decorator specifies an XML deserializer which is used
when the default one is not suitable. Schema validation is done on the
deserialized output, so it essentially covers both JSON and XML (e.g. XML is not
directly validated, but what we end up interpreting in the api code is).
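
To make that concrete, a rough sketch of the pattern, assuming the
nova.api.openstack.wsgi helpers of this era (check the tree for the exact
class and decorator names; the resource itself is made up):

    from xml.dom import minidom

    from nova.api.openstack import wsgi

    class WidgetXMLDeserializer(wsgi.XMLDeserializer):
        def default(self, string):
            dom = minidom.parseString(string)
            # Produce the same dict shape the JSON path yields, so schema
            # validation downstream sees one canonical structure.
            return {'body': {'widget': {
                'name': dom.documentElement.getAttribute('name')}}}

    class WidgetController(wsgi.Controller):
        @wsgi.deserializers(xml=WidgetXMLDeserializer)
        def create(self, req, body):
            # Here ``body`` is a plain dict whether the client sent
            # JSON or XML.
            pass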

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Dec 11 2013

2013-12-11 Thread Anne Gentle
Thanks to new doc patchers Shilla Saebi and Thomas Herve for their cleanup
work, especially for the Identity API docs and Heat install!

Be ready for 12/20/13 Doc Bug Day! Looking forward to it.

1. In review and merged this past week:

The Install Guide is still the most worked-on document. I'd like to shift
from that now that we're past milestone-1 and work harder on the Cloud
Administrator Guide and Operations Guide, to separate out items from the
Configuration Reference that are how-to content and move them to the Cloud
Administrator Guide. Nermina made progress on the Cloud Administrator Guide
additions from the Configuration Reference; her analysis is here:
http://lists.openstack.org/pipermail/openstack-docs/2013-December/003459.html

Lana Brindley was in Texas this week, and she's going to take on
maintenance of the User Guide and Admin User Guide with some additions of
scripts that are common user/admin user tasks with CLIs. She's also looking
into our processes and reviews and making good strides towards bringing in
developer teams for reviews after they use the DocImpact flag. Thanks Lana!

2. High priority doc work:

To me, one high priority is to get the config options done for milestone 1
release.

Another priority is for me to write a content spec for each book that we
currently maintain, so that incoming projects can easily plug in to each
deliverable. If I had to write a short description of each book, it'd be:

Installation Guide - Describes a manual install process for multiple
distributions including CentOS, Debian, Fedora, OpenSUSE, RedHat Enterprise
Linux, SUSE Enterprise Linux, and Ubuntu.

Configuration Reference - Contains a reference listing of all configuration
options for core and integrated OpenStack services.

Cloud Administrator Guide - Contains how-to information for managing an
OpenStack cloud as needed for your use cases, described in this document.

High Availability Guide - Describes potential strategies for making your
OpenStack services and related controllers and data stores highly available.

Operations Guide - Offers information for designing and operating OpenStack
private or public clouds plus best practices for day-to-day operations.

Security Guide - Provides best practices and conceptual information about
securing an OpenStack cloud.

Virtual Machine Image Guide - Shows you how to obtain, create, and modify
virtual machine images that are compatible with OpenStack.

End User Guide - Shows OpenStack end users how to create and manage
resources in an OpenStack cloud with the OpenStack dashboard and OpenStack
client commands.

Admin User Guide - Shows OpenStack admin users how to create and manage
resources in an OpenStack cloud with the OpenStack dashboard and OpenStack
client commands.

API Quick Start - A brief overview of how to send requests to endpoints for
OpenStack services.

I'd like for projects to understand what goes where and be able to write
sections that fit into these titles. I'm not recommending that you create
your own title, but understand where your section can go in the wider
picture of OpenStack docs.

3. Doc work going on that I know of:

See below for the developmental edit of the O'Reilly edition of the
Operations Guide.

Shaun's working on the autodoc for configuration option tables for the
icehouse-1 milestone.

4. New incoming doc requests:

The Triple-O team would like to talk about how to plug into the existing
OpenStack docs. Feel free to reach out on details; while we are going to
stick with the manual install for this release (Icehouse), perhaps the
Triple-O team can get a head start. Open to ideas after hearing from them
in the weekly project meeting.

5. Doc tools updates:

The infra team is working on ways to version and deploy the
clouddocs-maven-plugin (wow, that's verbing a noun).

6. Other doc news:

The development edit for the Operations Guide from O'Reilly has begun, and
by tomorrow we should have input on a few more chapters. Here are some
overarching comments from Brian Anderson you may like to help us dig into:

---

- an expanded Preface which situates the Operations Guide in relation to
other, related guides (like the Admin guide you mentioned). A taxonomy of
user roles would be helpful here.
- an introductory chapter about OpenStack, showing, in particular, a
high-level view of an OpenStack deployment and the different sorts of
components. There should be an emphasis on how the components work
together. Some institution-level suggestions would perhaps be helpful here
(what your organization should bear in mind when using OpenStack)
- a sneak peek of Icehouse (in the preface? or as an appendix?)
- more content focusing on how to upgrade from one version to another
(maybe this would be a new chapter?)

As I mentioned, the book is quite clean. However, we can aim it at our
audience a bit more. As it stands, the book contains a lot of tactics but
not a lot of strategy. What are some broad considerations that our audience
should 

Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Christopher Yeoh
On Thu, Dec 12, 2013 at 8:59 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:




 On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello 
 ryan.petre...@dreamhost.com wrote:

 Hello,

 I’ve spent the past week experimenting with using Pecan for Nova’s API,
 and have opened an experimental review:

 https://review.openstack.org/#/c/61303/6

 …which implements the `versions` v3 endpoint using pecan (and paves the
 way for other extensions to use pecan).  This is a *potential* approach
 I've considered for gradually moving the V3 API, but I’m open to other
 suggestions (and feedback on this approach).  I’ve also got a few open
 questions/general observations:

 1.  It looks like the Nova v3 API is composed *entirely* of extensions
 (including “core” API calls), and that extensions and their routes are
 discoverable and extensible via installed software that registers itself
 via stevedore.  This seems to lead to an API that’s composed of installed
 software, which in my opinion, makes it fairly hard to map out the API (as
 opposed to how routes are manually defined in other WSGI frameworks).  I
 assume at this time, this design decision has already been solidified for
 v3?


 Yeah, I brought this up at the summit. I am still having some trouble
 understanding how we are going to express a stable core API for
 compatibility testing if the behavior of the API can be varied so
 significantly by deployment decisions. Will we just list each required
 extension, and forbid any extras for a compliant cloud?


 Maybe the issue is caused by me misunderstanding the term extension,
 which (to me) implies an optional component but is perhaps reflecting a
 technical implementation detail instead?


Yes and no :-) As Ryan mentions, all API code is a plugin in the V3 API.
However, some must be loaded or the V3 API
refuses to start up. In nova/api/openstack/__init__.py we have
API_V3_CORE_EXTENSIONS which hard codes
which extensions must be loaded and there is no config option to override
this (blacklisting a core plugin will result in the
V3 API not starting up).

So for compatibility testing I think what will probably happen is that
we'll be defining a minimum set (API_V3_CORE_EXTENSIONS)
that must be implemented and clients can rely on that always being present
on a compliant cloud. But clients can also then query through /extensions
what other functionality (which is backwards compatible with respect to
core) may also be present on that specific cloud.
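
As a concrete illustration of that discovery step, a client could probe
the extensions resource directly (endpoint, token, and alias below are
placeholders, not a real deployment):

    import requests

    resp = requests.get('http://nova.example.com:8774/v3/extensions',
                        headers={'X-Auth-Token': 'TOKEN'})
    aliases = set(ext['alias'] for ext in resp.json()['extensions'])

    if 'os-some-optional-feature' in aliases:
        # Safe to use the optional API on this particular cloud.
        pass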

Chris



 Doug




 2.  The approach in my review would allow us to translate extensions to
 pecan piecemeal.  To me, this seems like a more desirable and manageable
 approach than moving everything to pecan at once, given the scale of Nova’s
 API.  Do others agree/disagree?  Until all v3 extensions are translated,
 this means the v3 API is composed of two separate WSGI apps.

 3.  Can somebody explain the purpose of the wsgi.deserializer decorator?
  It’s something I’ve not accounted for yet in my pecan implementation.  Is
 the goal to deserialize the request *body* from e.g., XML into a usable
 data structure?  Is there an equivalent for JSON handling?  How does this
 relate to the schema validation that’s being done in v3?

 ---
 Ryan Petrello
 Senior Developer, DreamHost
 ryan.petre...@dreamhost.com
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-11 Thread Georgy Okrokvertskhov
Hi,

To keep this thread alive, I would like to share the small screencast I've
recorded for the Murano Metadata repository. I would like to share with you
what we have in Murano and start a conversation about metadata repository
development in OpenStack. Here is a link to the screencast:
http://www.youtube.com/watch?v=Yi4gC4ZhvPg and here is a link
(https://wiki.openstack.org/wiki/Murano/SimplifiedMetadataRepository)
to a detailed specification of the PoC for the metadata repository
currently implemented in Murano.

There is an etherpad (https://etherpad.openstack.org/p/MuranoMetadata)
for the new MetadataRepository design, which we started to write after the
lessons-learned phase of the PoC. This is the future version of the
repository we want to have. This proposal can be used as an initial basis
for the metadata repository design conversation.

It would be great if we could start a conversation with the Glance team to
understand how this work can be organized. As was revealed in this thread,
the most probable candidate for the metadata repository service
implementation is the Glance program.

Thanks,
Georgy


On Mon, Dec 9, 2013 at 3:24 AM, Thierry Carrez thie...@openstack.org wrote:

 Vishvananda Ishaya wrote:
  On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov
  gokrokvertsk...@mirantis.com mailto:gokrokvertsk...@mirantis.com
 wrote:
 
  I am really inspired by this thread. Frankly saying, Glance for Murano
  was a kind of sacred entity, as it is a service with a long history in
  OpenStack.  We even did not think in the direction of changing Glance.
  Spending a night with these ideas, I am kind of having a dream about
  unified catalog where the full range of different entities are
  presented. Just imagine that we have everything as  first class
  citizens of catalog treated equally: single VM (image), Heat template
  (fixed number of VMs\ autoscaling groups), Murano Application
  (generated Heat templates), Solum assemblies
 
  Projects like Solum will highly benefit from this catalog as it can
  use all varieties of VM configurations talking with one service.
  This catalog will be able not just list all possible deployable
  entities but can be also a registry for already deployed
  configurations. This is perfectly aligned with the goal for catalog to
  be a kind of market place which provides billing information too.
 
  OpenStack users also will benefit from this as they will have the
  unified approach for manage deployments and deployable entities.
 
  I doubt that it could be done by a single team. But if all teams join
  this effort we can do this. From my perspective, this could be a part
  of Glance program and it is not necessary to add a new program for
  that. As it was mentioned earlier in this thread an idea of market
  place for images in Glance was here for some time. I think we can
  extend it to the idea of creating a marketplace for a deployable
  entity regardless of the way of deployment. As Glance is a core
  project, which means it always exists in an OpenStack deployment, it makes
  sense as a central catalog for everything.
 
  +1

 +1 too.

 I don't think that Glance is collapsing under its current complexity
 yet, so extending Glance to a general catalog service that can serve
 more than just reference VM images makes sense IMHO.

 --
 Thierry Carrez (ttx)


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-11 Thread Randall Burt
On Dec 11, 2013, at 5:44 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
 wrote:

 Hi,
 
 To keep this thread alive I would like to share the small screencast I've 
 recorded for Murano Metadata repository. I would like to share with you what 
 we have in Murano and start a conversation about metadata repository 
 development in OpenStack. Here is a link to screencast 
 http://www.youtube.com/watch?v=Yi4gC4ZhvPg Here is a link  to a detailed 
 specification of PoC for metadata repository currently implemented in Murano.
 
 There is an etherpad (here) for new MetadataRepository design we started to 
 write after lesson learn phase of PoC. This is a future version of repository 
 we want to have. This proposal can be used as an initial basis for metadata 
 repository design conversation.
 
 It will be great if we start conversation with Glance team to understand how 
 this work can be organized. As it was revealed in this thread, the most 
 probable candidate for metadata repository service implementation is Glance 
 program. 
 
 Thanks,
 Georgy

Thanks for the link and info. I think the general consensus is that this 
belongs in Glance; however, I think details are being deferred until the 
mid-summit meet-up in Washington D.C. (I could be totally wrong about this). 
In any case, I think I'll also start converting the existing HeatR blueprints 
to Glance ones. Perhaps it would be a good idea at this point to propose 
specific blueprints and have further ML discussions focused on specific 
changes?

 On Mon, Dec 9, 2013 at 3:24 AM, Thierry Carrez thie...@openstack.org wrote:
 Vishvananda Ishaya wrote:
  On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov
  gokrokvertsk...@mirantis.com mailto:gokrokvertsk...@mirantis.com wrote:
 
  I am really inspired by this thread. Frankly saying, Glance for Murano
  was a kind of sacred entity, as it is a service with a long history in
  OpenStack.  We even did not think in the direction of changing Glance.
  Spending a night with these ideas, I am kind of having a dream about
  unified catalog where the full range of different entities are
  presented. Just imagine that we have everything as  first class
  citizens of catalog treated equally: single VM (image), Heat template
  (fixed number of VMs\ autoscaling groups), Murano Application
  (generated Heat templates), Solum assemblies
 
  Projects like Solum will highly benefit from this catalog as it can
  use all varieties of VM configurations talking with one service.
  This catalog will be able not just list all possible deployable
  entities but can be also a registry for already deployed
  configurations. This is perfectly aligned with the goal for catalog to
  be a kind of market place which provides billing information too.
 
  OpenStack users also will benefit from this as they will have the
  unified approach for manage deployments and deployable entities.
 
  I doubt that it could be done by a single team. But if all teams join
  this effort we can do this. From my perspective, this could be a part
  of Glance program and it is not necessary to add a new program for
  that. As it was mentioned earlier in this thread an idea of market
  place for images in Glance was here for some time. I think we can
  extend it to the idea of creating a marketplace for a deployable
  entity regardless of the way of deployment. As Glance is a core
  project, which means it always exists in an OpenStack deployment, it makes
  sense as a central catalog for everything.
 
  +1
 
 +1 too.
 
 I don't think that Glance is collapsing under its current complexity
 yet, so extending Glance to a general catalog service that can serve
 more than just reference VM images makes sense IMHO.
 
 --
 Thierry Carrez (ttx)
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Lyle, David

 -Original Message-
 From: Monty Taylor [mailto:mord...@inaugust.com]
 Sent: Wednesday, December 11, 2013 10:28 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Horizon] Nominations to Horizon Core
 
 
 
 On 12/11/2013 03:51 PM, Russell Bryant wrote:
  On 12/10/2013 05:57 PM, Paul McMillan wrote:
  +1 on Tatiana Mazur, she's been doing a bunch of good work lately.
 
  I'm fine with me being removed from core provided you have someone
 else qualified to address security issues as they come up. My contributions
 have lately been reviewing and responding to security issues, vetting fixes
 for those, and making sure they happen in a timely fashion. Fortunately, we
 haven't had too many of those lately. Other than that, I've been lurking and
 reviewing to make sure nothing egregious gets committed.
 
  If you don't have anyone else who is a web security specialist on the core
 team, I'd like to stay. Since I'm also a member of the Django security team, I
 offer a significant chunk of knowledge about how the underlying security
 protections are intended to work.
 
  Security reviews aren't done on gerrit, though.  They are handled in
  launchpad bugs.  It seems you could still contribute in this way without
  being on the horizon-core team responsible for reviewing normal changes
  in gerrit.
 
  The bigger point is that you don't have to be on whatever-core to
  contribute productively to reviews.  I think every project has people
  that make important review contributions, but aren't necessarily
  reviewing regularly enough to be whatever-core.
 
 And as a follow up - I betcha the vulnerability-management team would
 LOVE to have you!
 

Your reviews are still valued and carry no less weight in or out of 
Horizon-core.  It really just boils down to engagement.

I agree with Monty, that vulnerability-management seems like a natural fit for 
the concerns you raise, plus it has a broader focus.

David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-11 Thread Georgy Okrokvertskhov
Hi,

I think a BP is the right way to organize this. I will submit a BP for the
metadata service from our side too.

Thanks
Georgy


On Wed, Dec 11, 2013 at 3:53 PM, Randall Burt randall.b...@rackspace.com wrote:

 On Dec 11, 2013, at 5:44 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com
  wrote:

  Hi,
 
  To keep this thread alive I would like to share the small screencast
 I've recorded for Murano Metadata repository. I would like to share with
 you what we have in Murano and start a conversation about metadata
 repository development in OpenStack. Here is a link to screencast
 http://www.youtube.com/watch?v=Yi4gC4ZhvPg Here is a link  to a detailed
 specification of PoC for metadata repository currently implemented in
 Murano.
 
  There is an etherpad (here) for new MetadataRepository design we started
 to write after lesson learn phase of PoC. This is a future version of
 repository we want to have. This proposal can be used as an initial basis
 for metadata repository design conversation.
 
  It will be great if we start conversation with Glance team to understand
 how this work can be organized. As it was revealed in this thread, the most
 probable candidate for metadata repository service implementation is Glance
 program.
 
  Thanks,
  Georgy

 Thanks for the link and info. I think the general consensus is this
 belongs in Glance, however I think details are being deferred until the
 mid-summit meet up in Washington D.C. (I could be totally wrong about
 this). In any case, I think I'll also start converting the existing HeatR
 blueprints to Glance ones. Perhaps it would be a good idea at this point to
 propose specific blueprints and have further ML discussions focused on
 specific changes?

  On Mon, Dec 9, 2013 at 3:24 AM, Thierry Carrez thie...@openstack.org
 wrote:
  Vishvananda Ishaya wrote:
   On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov
   gokrokvertsk...@mirantis.com mailto:gokrokvertsk...@mirantis.com
 wrote:
  
   I am really inspired by this thread. Frankly saying, Glance for Murano
   was a kind of sacred entity, as it is a service with a long history in
   OpenStack.  We even did not think in the direction of changing Glance.
   Spending a night with these ideas, I am kind of having a dream about
   unified catalog where the full range of different entities are
   presented. Just imagine that we have everything as  first class
   citizens of catalog treated equally: single VM (image), Heat template
   (fixed number of VMs\ autoscaling groups), Murano Application
   (generated Heat templates), Solum assemblies
  
   Projects like Solum will highly benefit from this catalog as it can
   use all varieties of VM configurations talking with one service.
   This catalog will be able not just list all possible deployable
   entities but can be also a registry for already deployed
   configurations. This is perfectly aligned with the goal for catalog to
   be a kind of market place which provides billing information too.
  
   OpenStack users also will benefit from this as they will have the
   unified approach for manage deployments and deployable entities.
  
   I doubt that it could be done by a single team. But if all teams join
   this effort we can do this. From my perspective, this could be a part
   of Glance program and it is not necessary to add a new program for
   that. As it was mentioned earlier in this thread an idea of market
   place for images in Glance was here for some time. I think we can
   extend it to the idea of creating a marketplace for a deployable
   entity regardless of the way of deployment. As Glance is a core
   project, which means it always exists in an OpenStack deployment, it makes
   sense as a central catalog for everything.
  
   +1
 
  +1 too.
 
  I don't think that Glance is collapsing under its current complexity
  yet, so extending Glance to a general catalog service that can serve
  more than just reference VM images makes sense IMHO.
 
  --
  Thierry Carrez (ttx)
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  --
  Georgy Okrokvertskhov
  Technical Program Manager,
  Cloud and Infrastructure Services,
  Mirantis
  http://www.mirantis.com
  Tel. +1 650 963 9828
  Mob. +1 650 996 3284
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [TripleO][Tuskar] Terminology

2013-12-11 Thread Robert Collins
On 12 December 2013 08:15, Tzu-Mainn Chen tzuma...@redhat.com wrote:
 Hi,

 I'm trying to clarify the terminology being used for Tuskar, which may be 
 helpful so that we're sure
 that we're all talking about the same thing :)  I'm copying responses from 
 the requirements thread
 and combining them with current requirements to try and create a unified 
 view.  Hopefully, we can come
 to a reasonably rapid consensus on any desired changes; once that's done, the 
 requirements can be
 updated.

 * NODE - a physical, general-purpose machine capable of running in many roles. 
 Some nodes may have a hardware layout that is particularly useful for a given role.

  * REGISTRATION - the act of creating a node in Ironic

  * ROLE - a specific workload we want to map onto one or more nodes. 
 Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.

  * MANAGEMENT NODE - a node that has been mapped with an undercloud 
 role

Pedantically, this is 'A node with an instance of a management role
running on it'. I think calling it 'management node' is too sticky.
What if we cold migrate it to another machine when a disk fails and we
want to avoid data loss if another disk were to fail?

Management instance?

  * SERVICE NODE - a node that has been mapped with an overcloud role

Again, the binding to node is too sticky IMNSHO.

Service instance? Cloud instance?

 * COMPUTE NODE - a service node that has been mapped to an 
 overcloud compute role
 * CONTROLLER NODE - a service node that has been mapped to an 
 overcloud controller role
 * OBJECT STORAGE NODE - a service node that has been mapped to an 
 overcloud object storage role
 * BLOCK STORAGE NODE - a service node that has been mapped to an 
 overcloud block storage role

s/Node/instance/ ?

  * UNDEPLOYED NODE - a node that has not been mapped with a role
   * another option - UNALLOCATED NODE - a node that has not been 
 allocated through nova scheduler (?)
- (after reading lifeless's explanation, I 
 agree that allocation may be a
   misleading term under TripleO, so I 
 personally vote for UNDEPLOYED)

I like 'available' because it is a direct statement that doesn't
depend on how things are utilised - mapping or allocation or
deployment or whatever. It is available for us to do something with
it.
'Available nodes'.


  * INSTANCE - A role deployed on a node - this is where work actually 
 happens.

 * DEPLOYMENT

  * SIZE THE ROLES - the act of deciding how many nodes will need to be 
 assigned to each role
* another option - DISTRIBUTE NODES (?)
  - (I think the former is more accurate, but 
 perhaps there's a better way to say it?)

Perhaps 'Size the cloud'? How big do you want your cloud to be?

  * SCHEDULING - the process of deciding which role is deployed on which 
 node

This possibly should be a sub-step of deployment.

  * SERVICE CLASS - a further categorization within a service role for a 
 particular deployment.

See the other thread where I suggested perhaps bringing the image +
config aspects all the way up - I think that renames 'service class'
to 'Role configuration'. KVM Compute is a role configuration. KVM
compute(GPU) might be another.

   * NODE PROFILE - a set of requirements that specify what attributes 
 a node must have in order to be mapped to
a service class

Today the implementation at the plumbing layer can only do 'flavour',
though Heat is open to letting us 'find an instance from any of X
flavors' in future. Let's not be -too- generic:
'Flavor': The Nova description of a particular machine configuration,
and choosing one is part of setting up the 'role configuration'.

Thanks for drafting this!

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Please stop +Aing glance changes until your doc job is working

2013-12-11 Thread Mark Washenberger
On Wed, Dec 11, 2013 at 3:05 PM, Sean Dague s...@dague.net wrote:

 Dear Glance core,

 Until this review is sorted - https://review.openstack.org/#/c/60971/2


Or this one https://review.openstack.org/#/c/61600/ rather




 You won't be able to merge any changes, because of the docs issue with
 sphinx.

 http://lists.openstack.org/pipermail/openstack-dev/2013-December/021863.html

 Which means right now every glance patch that goes into the gate will
 100% fail, and will cause 45-60 minute delay to every other project in
 the gate as your change has to fail out of the queue.

 Thanks,

 -Sean


Thanks for the alert.



 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Alex Xu

On 2013-12-12 04:41, Ryan Petrello wrote:

Hello,

I’ve spent the past week experimenting with using Pecan for Nova’s API, and 
have opened an experimental review:

https://review.openstack.org/#/c/61303/6

…which implements the `versions` v3 endpoint using pecan (and paves the way for 
other extensions to use pecan).  This is a *potential* approach I've considered 
for gradually moving the V3 API, but I’m open to other suggestions (and 
feedback on this approach).  I’ve also got a few open questions/general 
observations:

1.  It looks like the Nova v3 API is composed *entirely* of extensions 
(including “core” API calls), and that extensions and their routes are 
discoverable and extensible via installed software that registers itself via 
stevedore.  This seems to lead to an API that’s composed of installed software, 
which in my opinion, makes it fairly hard to map out the API (as opposed to how 
routes are manually defined in other WSGI frameworks).  I assume at this time, 
this design decision has already been solidified for v3?

2.  The approach in my review would allow us to translate extensions to pecan 
piecemeal.  To me, this seems like a more desirable and manageable approach 
than moving everything to pecan at once, given the scale of Nova’s API.  Do 
others agree/disagree?  Until all v3 extensions are translated, this means the 
v3 API is composed of two separate WSGI apps.

+1 for this too.

3.  Can somebody explain the purpose of the wsgi.deserializer decorator?  It’s 
something I’ve not accounted for yet in my pecan implementation.  Is the goal 
to deserialize the request *body* from e.g., XML into a usable data structure?  
Is there an equivalent for JSON handling?  How does this relate to the schema 
validation that’s being done in v3?

---
Ryan Petrello
Senior Developer, DreamHost
ryan.petre...@dreamhost.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-11 Thread Sean Dague
On 12/11/2013 04:17 PM, Chris Buccella wrote:
 On 12/02/2013 10:18 AM, Joe Gordon wrote:

  


 Thanks for bringing this up, and I'd welcome a patch in Swift that
 would use a common library to generate the transaction id, if it
 were installed. I can see that there would be huge advantage to
 operators to trace requests through multiple systems.

 Another option would be for each system that calls an another
 OpenStack system to expect and log the transaction ID for the
 request that was given. This would be looser coupling and be more
 forgiving for a heterogeneous cluster. Eg when Glance makes a call
 to Swift, Glance could log the transaction id that Swift used
 (from the Swift response). Likewise, when Swift makes a call to
 Keystone, Swift could log the Keystone transaction id. This
 wouldn't result in a single transaction id across all systems, but
 it would provide markers so an admin could trace the request.


 There was a session on this at the summit, and although the notes are
 a little scarce this was the conclusion we came up with.  Every time a
 cross service call is made, we will log and send a notification for
 ceilometer to consume, with the request-ids of both requests.  One
 of the benefits of this approach is that we can easily generate a tree
 of all the API calls that are made (and clearly show when multiple
 calls are made to the same service), something that just a cross
 service request id would have trouble with.

 https://etherpad.openstack.org/p/icehouse-summit-qa-gate-debugability 


 With that in mind I think having a standard x-openstack-request-id
 makes things a little more uniform, and means that adding new services
 doesn't require new logic to handle new request ids.
 
 Two questions here:
 
 1) The APIChangeGuidelines state that changing a header is frowned upon.
 So I suppose that means we'll need to add x-openstack-request-id to nova
 and cinder, keeping around x-compute-request-id for the time being?
 
 2) The deadline for blueprints for icehouse-2 is next week. This
 blueprint [1] is still marked as next; should we move that up to
 icehouse-2?

x-compute-request-id would need to go through the normal deprecation
path. So deprecate for icehouse, remove in J. Adding
x-openstack-request-id could happen right away, just mirror the ids
across to it.
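
A sketch of that mirroring as WSGI middleware (schematic only; Nova's
real request-id handling lives in its context/middleware plumbing and
differs in detail):

    import uuid

    import webob.dec

    class RequestIdMiddleware(object):
        def __init__(self, application):
            self.application = application

        @webob.dec.wsgify
        def __call__(self, req):
            req_id = 'req-' + str(uuid.uuid4())
            resp = req.get_response(self.application)
            # Mirror the same id across the deprecated and new headers
            # during the deprecation window.
            resp.headers['x-compute-request-id'] = req_id
            resp.headers['x-openstack-request-id'] = req_id
            return resp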

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Akihiro Motoki
+1 for both Tatiana and cleaning up the core list.


On Wed, Dec 11, 2013 at 5:24 AM, Lyle, David david.l...@hp.com wrote:
 I would like to nominate Tatiana Mazur to Horizon Core.  Tatiana has been a 
 significant code contributor in the last two releases, understands the code 
 base well and has been doing a significant number of reviews for the last two 
 milestones.


 Additionally, I'd like to remove some inactive members of Horizon-core who 
 have been inactive since the early Grizzly release at the latest.
 Devin Carlen
 Jake Dahn
 Jesse Andrews
 Joe Heck
 John Postlethwait
 Paul McMillan
 Todd Willey
 Tres Henry
 paul-tashima
 sleepsonthefloor


 Please respond with a +1/-1 by this Friday.

 -David Lyle




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Kieran Spear
+1 for Tatiana and the clean-up.

On 11 December 2013 07:24, Lyle, David david.l...@hp.com wrote:
 I would like to nominate Tatiana Mazur to Horizon Core.  Tatiana has been a 
 significant code contributor in the last two releases, understands the code 
 base well and has been doing a significant number of reviews for the last two 
 milestones.


 Additionally, I'd like to remove some inactive members of Horizon-core who 
 have been inactive since the early Grizzly release at the latest.
 Devin Carlen
 Jake Dahn
 Jesse Andrews
 Joe Heck
 John Postlethwait
 Paul McMillan
 Todd Willey
 Tres Henry
 paul-tashima
 sleepsonthefloor


 Please respond with a +1/-1 by this Friday.

 -David Lyle




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Jeremy Stanley
On 2013-12-11 18:28:14 +0100 (+0100), Monty Taylor wrote:
 On 12/11/2013 03:51 PM, Russell Bryant wrote:
  On 12/10/2013 05:57 PM, Paul McMillan wrote:
  [...]
   If you don't have anyone else who is a web security specialist
   on the core team, I'd like to stay. Since I'm also a member of
   the Django security team, I offer a significant chunk of
   knowledge about how the underlying security protections are
   intended to work.
  
  Security reviews aren't done on gerrit, though.  They are
  handled in launchpad bugs.  It seems you could still contribute
  in this way without being on the horizon-core team responsible
  for reviewing normal changes in gerrit.
  [...]
 
 And as a follow up - I betcha the vulnerability-management team
 would LOVE to have you!

In particular, there are plenty of open public vulnerabilities
throughout OpenStack in various states of being addressed which you
can pitch in on even with fairly limited levels of commitment.
Anything which needs an advisory, or which we think might need one
but are not yet sure, is listed at https://bugs.launchpad.net/ossa
(with privately-reported and still embargoed issues being the
exception). Whatever you see there which piques your interest,
whether it needs testing/confirmation, a patch or even just an
expert opinion on exploitability/risk would be a welcome
contribution.

Any help we get dealing with already public vulnerabilities frees up
more of our time to focus on embargoed items while still keeping the
core group small (minimizing risk of premature disclosure). More
info at...

https://wiki.openstack.org/wiki/Vulnerability_Management

/end_public_service_announcement

-- 
Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Project status update

2013-12-11 Thread Devananda van der Veen
Hi all!

I realize it's been a while since I've posted an update about the project
-- it's high time I do so! And there are several things to report...

We tagged an Icehouse-1 milestone, though we did not publish a tarball just
yet. That should happen at the Icehouse-2 milestone.
  http://git.openstack.org/cgit/openstack/ironic/tag/?id=2014.1.b1

We've had a functioning python client (library and CLI) for a while now,
and I finally got around to tagging a release and pushing it up to PyPI.
I'll be issuing another release once the deployment API is implemented
(patches are up, but may take a few iterations).
  https://pypi.python.org/pypi/python-ironicclient
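
For anyone who wants to poke at the library, a minimal sketch follows
(the credentials and Keystone endpoint are placeholders; double-check
the keyword names against the client's README):

    from ironicclient import client

    # placeholder credentials; point these at your own cloud
    ironic = client.get_client(1,
                               os_username='admin',
                               os_password='secret',
                               os_tenant_name='admin',
                               os_auth_url='http://127.0.0.1:5000/v2.0')
    for node in ironic.node.list():
        print(node.uuid, node.power_state)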

Speaking of APIs, we're auto-generating our API docs now. Thanks,
pecan/wsme! Note that our v1 API is not yet stabilized - but at least the
docs are going to stay up-to-date as we hammer out issues and add missing
components.
  http://docs.openstack.org/developer/ironic/webapi/v1.html

We have a patchset up for a Nova ironic driver; it is not
feature-complete and still a WIP, but I thought it would be good to list it
here in case anyone is interested in tracking its parity with the baremetal
driver.
  https://review.openstack.org/#/c/51328/

As of late October, Ironic was integrated with devstack. Though it is
currently disabled by default, it is easy to enable in your localrc.
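
For reference, enabling it amounts to two lines in localrc (assuming the
current devstack service names; check devstack's docs if these change):

    enable_service ir-api
    enable_service ir-cond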

We also have a diskimage-builder element and can use TripleO to deploy an
ironic-based undercloud. Even though it can't deploy an overcloud yet, I
find it very useful for development.


That's all for now,
-Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-11 Thread Ian Wells
Are these NSX routers *functionally* different?

What we're talking about here is a router which, whether it's distributed
or not, behaves *exactly the same*.  So as I say, maybe it's an SLA thing,
but 'distributed' isn't really meaningful to the user if the user can't
actually prove he's received a distributed router by using the APIs or
seeing traffic flow differently.
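
(To make that concrete: with the proposed flag, the only API-visible
difference would be the attribute itself. A sketch via the Python
neutronclient, with placeholder credentials and the attribute name taken
from the proposal quoted below:)

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')
    # request a distributed router, then read the flag back
    router = neutron.create_router(
        {'router': {'name': 'r1', 'distributed': True}})
    print(router['router'].get('distributed'))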

I think, by the names you're referring to, the NSX routers actually have
different user-visible behaviour, and that's a different matter entirely:
obviously you want, as a user, to choose one or the other.
-- 
Ian.


On 10 December 2013 23:21, Vasudevan, Swaminathan (PNB Roseville) 
swaminathan.vasude...@hp.com wrote:

  Hi Nachi/Akihiro motoki,

 I am not clear.

 Today the L3 Service Plugin does not support the “service_type” attribute to 
 define the provider option.



 Are we suggesting that we need to include the service_type for the L3 Service 
 Plugin and then we can make use of the “service_type” attribute to 
 distinguish between the “edge” and “distributed”.





 So if I understand correctly, a “provider” router will be an Edge router and 
 a non-provider router will be a “distributed router”.



 Thanks

 Swami



 I'm +1 for 'provider'.



 2013/12/9 Akihiro Motoki mot...@da.jp.nec.com:

  Neutron defines provider attribute and it is/will be used in advanced

  services (LB, FW, VPN).

  Doesn't it fit for a distributed router case? If we can cover all services

  with one concept, it would be nice.

 

  According to this thread, we assumes at least two types edge and

  distributed.

  Though edge and distributed are types of implementations, I think they

  are both kinds of provider.

 

  I just would like to add an option. I am open to provider vs distributed

  attributes.

 

  Thanks,

  Akihiro

 

  (2013/12/10 7:01), Vasudevan, Swaminathan (PNB Roseville) wrote:

  Hi Folks,

 

  We are in the process of defining the API for the Neutron Distributed

  Virtual Router, and we have a question.

 

  Just wanted to get the feedback from the community before we implement and

  post for review.

 

  We are planning to use the “distributed” flag for the routers that are

  supposed to be routing traffic locally (both East West and North South).

  This “distributed” flag is already there in the “neutronclient” API, but

  currently only utilized by the “Nicira Plugin”.

  We would like to go ahead and use the same “distributed” flag and add an

  extension to the router table to accommodate the “distributed flag”.

 

  Please let us know your feedback.

 

  Thanks.

 

  Swaminathan Vasudevan

  Systems Software Engineer (TC)

  HP Networking

  Hewlett-Packard

  8000 Foothills Blvd

  M/S 5541

  Roseville, CA - 95747

  tel: 916.785.0937

  fax: 916.785.1815

  email: swaminathan.vasude...@hp.com mailto:swaminathan.vasude...@hp.com 
  swaminathan.vasude...@hp.com





 Swaminathan Vasudevan

 Systems Software Engineer (TC)





 HP Networking

 Hewlett-Packard

 8000 Foothills Blvd

 M/S 5541

 Roseville, CA - 95747

 tel: 916.785.0937

 fax: 916.785.1815

 email: swaminathan.vasude...@hp.com





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Bryan D. Payne
Re: Removing Paul McMillan from core

I would argue that it is critical that each project have 1-2 people on core
that are security experts.  The VMT is an intentionally small team.  They
are moving to having specifically appointed security sub-teams on each
project (I believe this is what I heard at the last summit).  These teams
would be a subset of the core devs that can handle security reviews.  The
idea is that these people would then be able to +1 / -1 embargoed security
patches.  So having someone like Paul on Horizon core would be very
valuable for such things.

In addition, I think that gerrit is exactly where security reviews *should*
be happening.  Much better to catch things before they are merged, rather
than as bugs after-the-fact.  Would we rather have a -1 on a code review
than a CVE?

My 2 cents,
-bryan (from OSSG)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Metrics] Communicating how to interpret community data

2013-12-11 Thread Stefano Maffulli
Hello folks

I wrote a blog post today after noticing again that reporters take the
data published on Activity Board and Stackalytics as true, without
asking the protagonists any questions. The problem is that at the moment
none of the systems we have can guarantee that the data they present at
any given time is true. In particular, the data about companies'
involvement in OpenStack is most likely to be more wrong than true
(except possibly for the manually curated reports done at release time).

I knew there was a risk that people may not understand/misinterpret the
data and I'd like to find ways to mitigate it.

One fast way would be to put clearer warnings on pages like
http://activity.openstack.org/dash/newbrowser/browser/scm-companies.html
and http://activity.openstack.org/data/display/OPNSTK2/Organizations
Stackalytics would need also to put a clear warning on
http://stackalytics.com/. It may not solve the root cause, but at least
it may prompt reporters to ask for confirmation of the data shown
before making assumptions about companies and their involvement in
OpenStack.

For the medium term, I am hoping that we can build a service that can
export data from the OpenStack membership database to reporting
services, stackalytics and the others. It's not a simple task though
because of members' privacy.

Two new bugs filed on this topic, let's discuss there how to solve this
issue:

 https://bugs.launchpad.net/stackalytics/+bug/1260135
 https://bugs.launchpad.net/openstack-community/+bug/1260140

Thanks,
/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Bug list maintenance

2013-12-11 Thread Devananda van der Veen
So, I've dug into the bug list in the past few days, and want to share what
I've observed.

Over the Havana cycle, we all used the bug list as a way to earmark work we
needed to come back to. Some of those earmarks are stale. Perhaps the
status is incorrect, or we fixed it but didn't close the bug, or the
description no longer reflects the current codebase.

I'd like to ask that, if you have any bugs assigned to you, please take a
few minutes to review them. If you're still working on them, please make
sure the status and priority fields are accurate, and target a reasonable
milestone (i2 is Jan 23, i3 is March 6). Oh, and let me know you've
reviewed your bugs, otherwise I'm going to nag you :)

Also, if you aren't able to work that bug right now, don't sweat it - just
unassign yourself. This is about keeping the bug list accurate, not about
guilting anyone into working more.

Thanks!
-D
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for tomorrow meeting at 2000 UTC

2013-12-11 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is tomorrow, 
2013-12-12!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss any current integration work (or problems) or help needed.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, questions and answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Russell Bryant
On 12/11/2013 08:14 PM, Bryan D. Payne wrote:
 Re: Removing Paul McMillan from core
 
 I would argue that it is critical that each project have 1-2 people on
 core that are security experts.  The VMT is an intentionally small team.
  They are moving to having specifically appointed security sub-teams on
 each project (I believe this is what I heard at the last summit).  These
 teams would be a subset of the core devs that can handle security
  reviews.  The idea is that these people would then be able to +1 / -1
 embargoed security patches.  So having someone like Paul on Horizon core
 would be very valuable for such things.

We can involve people in security reviews without having them on the
core review team.  They are separate concerns.

 In addition, I think that gerrit is exactly where security reviews
 *should* be happening.  Much better to catch things before they are
 merged, rather than as bugs after-the-fact.  Would we rather have a -1
 on a code review than a CVE?

This has been discussed quite a bit.  We can't handle security patches
on gerrit right now while they are embargoed because we can't completely
hide them.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread ZG Niu
+1


On Thu, Dec 12, 2013 at 9:14 AM, Bryan D. Payne bdpa...@acm.org wrote:

 Re: Removing Paul McMillan from core

 I would argue that it is critical that each project have 1-2 people on
 core that are security experts.  The VMT is an intentionally small team.
  They are moving to having specifically appointed security sub-teams on
 each project (I believe this is what I heard at the last summit).  These
 teams would be a subset of the core devs that can handle security reviews.
  The idea is that these people would then be able to +1 / -1 embargoed
 security patches.  So having someone like Paul on Horizon core would be
 very valuable for such things.

 In addition, I think that gerrit is exactly where security reviews
 *should* be happening.  Much better to catch things before they are merged,
 rather than as bugs after-the-fact.  Would we rather have a -1 on a code
 review than a CVE?

 My 2 cents,
 -bryan (from OSSG)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards,
NiuZG
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] domain admin role query

2013-12-11 Thread Paul Belanger

On 13-12-11 11:18 AM, Lyle, David wrote:

+1 on moving the domain admin role rules to the default policy.json

-David Lyle

From: Dolph Mathews [mailto:dolph.math...@gmail.com]
Sent: Wednesday, December 11, 2013 9:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] domain admin role query


On Tue, Dec 10, 2013 at 10:49 PM, Jamie Lennox jamielen...@redhat.com wrote:
Using the default policies it will simply check for the admin role and not care 
about the domain that admin is limited to. This is partially a leftover from the 
v2 API, when there weren't domains to worry about.

A better example of policies are in the file etc/policy.v3cloudsample.json. In 
there you will see the rule for create_project is:

 identity:create_project: rule:admin_required and 
domain_id:%(project.domain_id)s,

as opposed to (in policy.json):

 identity:create_project: rule:admin_required,

This is what you are looking for to scope the admin role to a domain.
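
(A quick way to see the scoping in action is to get a domain-scoped v3
token and try create_project under each policy file. A minimal sketch
using python-requests; the endpoint, user and domain names are
placeholders:)

    import json
    import requests

    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": "domain_admin",          # placeholder user
                    "domain": {"name": "mydomain"},  # placeholder domain
                    "password": "secret"}}},
            "scope": {"domain": {"name": "mydomain"}}}}

    resp = requests.post("http://127.0.0.1:5000/v3/auth/tokens",
                         data=json.dumps(body),
                         headers={"Content-Type": "application/json"})
    # v3 returns the token id in a response header
    print(resp.headers["X-Subject-Token"])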

We need to start moving the rules from policy.v3cloudsample.json to the default 
policy.json =)


Jamie

- Original Message -

From: Ravi Chunduru ravi...@gmail.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, 11 December, 2013 11:23:15 AM
Subject: [openstack-dev] [keystone] domain admin role query

Hi,
I am trying out Keystone V3 APIs and domains.
I created a domain, created a project in that domain, created a user in
that domain and project.
Next, gave an admin role for that user in that domain.

I am assuming that user is now admin to that domain.
Now, I got a scoped token with that user, domain and project. With that
token, I tried to create a new project in that domain. It worked.

But, using the same token, I could also create a new project in a 'default'
domain too. I expected it to throw an authentication error. Is it a bug?

Thanks,
--
Ravi



One of the issues I had this week while using the 
policy.v3cloudsample.json was that I had no easy way of creating a domain 
with the id of 'admin_domain_id'.  I basically had to modify the SQL 
directly to do it.


Any chance we can create a 2nd domain using 'admin_domain_id' via 
keystone-manage sync_db?


--
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: 
https://twitter.com/pabelanger


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSG][OSSN] Glance allows sharing of images between projects without consumer project approval

2013-12-11 Thread Nathan Kinder

Glance allows sharing of images between projects without consumer
project approval
- ---

### Summary ###
Glance allows images to be shared between projects. In certain API
versions, images can be shared without the consumer project's
approval. This allows potentially malicious images to show up in a
project's image list.

### Affected Services / Software ###
Glance, Image Service, Diablo, Essex, Folsom, Grizzly, Havana

### Discussion ###
Since the OpenStack Diablo release, Glance allows images to be shared
between projects. To share an image, the producer of the image adds
the consumer project as a member of the image. When using the Image
Service API v1, the image producer is able to share an image with a
consumer project without their approval. This results in the shared
image showing up in the image list for the consumer project. This can
mislead users with roles in the consumer project into running a
potentially malicious image.

The Image Service API v2.0 does not allow image sharing between
projects, so a project is not susceptible to running unauthorized
images shared by other projects. The Image Service API v2.1 allows
image sharing using a two-step process. An image producer must add a
consumer as a member of the image, and the consumer must accept the
shared image before it shows up in their image list. This additional
approval process allows a consumer to control what images show up in
their image list, thus preventing potentially malicious images from
being used without the consumer's knowledge.
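
For illustration, the two-step v2.1 flow looks roughly like the
following sketch using python-requests; the endpoint, tokens and IDs
are placeholders:

    import json
    import requests

    GLANCE = 'http://127.0.0.1:9292'             # placeholder endpoint
    image_id = 'IMAGE_ID'                        # placeholder
    consumer_project_id = 'CONSUMER_PROJECT_ID'  # placeholder
    producer_token = 'PRODUCER_TOKEN'            # placeholder
    consumer_token = 'CONSUMER_TOKEN'            # placeholder

    # step 1: the producer adds the consumer project as an image member
    requests.post('%s/v2/images/%s/members' % (GLANCE, image_id),
                  headers={'X-Auth-Token': producer_token,
                           'Content-Type': 'application/json'},
                  data=json.dumps({'member': consumer_project_id}))

    # step 2: the consumer accepts; only now does the image appear in
    # the consumer project's image list
    requests.put('%s/v2/images/%s/members/%s'
                 % (GLANCE, image_id, consumer_project_id),
                 headers={'X-Auth-Token': consumer_token,
                          'Content-Type': 'application/json'},
                 data=json.dumps({'status': 'accepted'}))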

### Recommended Actions ###
In the OpenStack Diablo, Essex, and Folsom releases, Glance supports
image sharing using the Image Service API v1. There is no way to
require approval of a shared image by consumer projects. Users should
be cautioned to be careful when using images from their image list, as
they may be using an image that was shared with them without their
knowledge.

In the OpenStack Grizzly and Havana releases, Glance supports the
Image Service API v2.1 or later. Support is still provided for Image
Service API v1, which allows image sharing between projects without
consumer project approval. It is recommended to disable v1 of the
Image Service API if possible. This can be done by setting the
following directive in the glance-api.conf configuration file:

-  begin example glance-api.conf snippet 
enable_v1_api = False
-  end example glance-api.conf snippet 

### Contacts / References ###
This OSSN : https://bugs.launchpad.net/ossn/+bug/1226078
Original LaunchPad Bug : https://bugs.launchpad.net/glance/+bug/1226078
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
CVE: CVE-2013-4354

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Fix to agents race condition creates another issue

2013-12-11 Thread Edgar Magana
In commit:
https://review.openstack.org/#/c/58814/

There is an assumption that all plugins create the agents table (here,
plumgrid_neutron.agents), which is not the case. I just tested Big Switch
and PLUMgrid and they are failing:

INFO  [alembic.migration] Running upgrade havana -> e197124d4b9, add unique
constraint to members
INFO  [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4, Add
a unique constraint on (agent_type, host) columns to prevent a race
condition when an agent entry is 'upserted'.
Traceback (most recent call last):
  File /usr/local/bin/neutron-db-manage, line 10, in module
sys.exit(main())
  File /opt/stack/neutron/neutron/db/migration/cli.py, line 143, in main
CONF.command.func(config, CONF.command.name)
  File /opt/stack/neutron/neutron/db/migration/cli.py, line 80, in
do_upgrade_downgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File /opt/stack/neutron/neutron/db/migration/cli.py, line 59, in
do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/alembic/command.py, line
124, in upgrade
script.run_env()
  File /usr/local/lib/python2.7/dist-packages/alembic/script.py, line
193, in run_env
util.load_python_file(self.dir, 'env.py')
  File /usr/local/lib/python2.7/dist-packages/alembic/util.py, line 177,
in load_python_file
module = load_module(module_id, path)
  File /usr/local/lib/python2.7/dist-packages/alembic/compat.py, line 39,
in load_module
return imp.load_source(module_id, path, fp)
  File /opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py,
line 105, in module
run_migrations_online()
  File /opt/stack/neutron/neutron/db/migration/alembic_migrations/env.py,
line 89, in run_migrations_online
options=build_options())
  File string, line 7, in run_migrations
  File /usr/local/lib/python2.7/dist-packages/alembic/environment.py,
line 652, in run_migrations
self.get_context().run_migrations(**kw)
  File /usr/local/lib/python2.7/dist-packages/alembic/migration.py, line
224, in run_migrations
change(**kw)
  File
/opt/stack/neutron/neutron/db/migration/alembic_migrations/versions/1fcfc149aca4_agents_unique_by_type_and_host.py,
line 50, in upgrade
local_cols=['agent_type', 'host']
  File string, line 7, in create_unique_constraint
  File /usr/local/lib/python2.7/dist-packages/alembic/operations.py, line
539, in create_unique_constraint
schema=schema, **kw)
  File /usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line
135, in add_constraint
self._exec(schema.AddConstraint(const))
  File /usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py, line
76, in _exec
conn.execute(construct, *multiparams, **params)
  File /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py,
line 1449, in execute
params)
  File /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py,
line 1542, in _execute_ddl
compiled
  File /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py,
line 1698, in _execute_context
context)
  File /usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py,
line 1691, in _execute_context
context)
  File
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line
331, in do_execute
cursor.execute(statement, parameters)
  File /usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in
execute
self.errorhandler(self, exc, value)
  File /usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36,
in defaulterrorhandler
raise errorclass, errorvalue
sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, Table
'plumgrid_neutron.agents' doesn't exist) 'ALTER TABLE agents ADD
CONSTRAINT uniq_agents0agent_type0host UNIQUE (agent_type, host)' ()
++ failed
++ local r=1
+++ jobs -p
++ kill
++ set +o xtrace



Is this a known issue? If not, let me know and I will properly report it.
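
For what it's worth, a defensive version of the migration could simply
skip the constraint when the table is absent. A sketch only (not the
actual fix, and the guard may belong in neutron's own migration
helpers):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # some plugins never create the agents table, so bail out early
        inspector = sa.inspect(op.get_bind())
        if 'agents' not in inspector.get_table_names():
            return
        op.create_unique_constraint(
            name='uniq_agents0agent_type0host',
            source='agents',
            local_cols=['agent_type', 'host'])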

Thanks,

Edgar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Plan files and resources

2013-12-11 Thread Adrian Otto

 On Dec 11, 2013, at 4:45 PM, Clayton Coleman ccole...@redhat.com wrote:
 - Original Message -
 Devdatta,
 
 On Dec 10, 2013, at 12:37 PM, devdatta kulkarni
 devdatta.kulka...@rackspace.com wrote:
 
 Hi Adrian,
 
 Thanks for creating https://etherpad.openstack.org/p/solum-demystified
 
 I am really excited to see the examples. Especially cool is how
 examples 2 and 3 demonstrate using a component (solum_glance_id) created
 as part of example 1.
 
 
 Some questions/comments:
 
 1) Summarizing the sequence of events just to make sure I understand them
 correctly:
  a) User selects a language pack and specifies its id in the plan file
 
 They could put the language pack reference into a Plan file, or we could
 generate a Plan file with a CLI command that feeds an auto-generated file to
 the API for the user. That might reduce the user complexity a bit for the
 general case.
 
 It seems like the reasonable M1 and M2 scenarios are to get the bones of an 
 integration working that allow a flexible Plan to exist (but not necessarily 
 something an average user would edit).  

To be clear, are you suggesting that we ask users to place stock plan files in 
their code repos as a first step? This would certainly minimize work for us to 
get to milestone-1.

 M2 and M3 can focus on the support around making Plans that mere mortals can 
 throw together (whether generated or precreated by an operator), and a lot of 
 how that evolves depends on the other catalog work.  

This would mean revisiting the simplicity of the plan file, documenting lots of 
examples of them so they are well understood. At that point we could demonstrate 
ways to tweak them to accommodate a variety of workload types with Solum, not 
just deploy simple web apps fitting a single system architecture.

 You could argue the resistance from some quarters to the current PaaS model 
 is that the Plan equivalent is hardcoded and non-flexible - what is being 
 done differently here is to offer the concepts necessary to allow other types 
 of plans and application models to coexist in a single system.

Agreed 100%. 

  b) User creates repo with the plan file in it.
 
 We could scan the repo for a Plan file to override the auto-generation step,
 to allow a method for customization.
 
  After this the flow could be:
  c.1) User uses solum cli to 'create' an application by giving reference
  to
 the repo uri
 
 I view this as the use of the cli app create command as the first step.
 They can optionally specify a Plan file to use for either the build
 sequence, or the app deployment sequence, or both (for a total of TWO Plan
 files). We could also allow plan files to be placed in the Git repo, and
 picked up there in the event that none are specified on the command line.
 
 Note that they may also put a HOT file in their repo, and bypass HOT file
 generation/catalog-lookup and cause Solum to use the supplied template. This
 would be useful for power users who want the ability to further influence
 the arrangement of the Heat stack.
 
  c.1.1) Solum creates a plan resource
  c.1.2) Solum model interpreter creates a Heat stack and does the rest
  of the
   things needed to create a assembly.
  (The created plan resource does not play any part in assembly
  creation as such.
   Its only role is being a 'trackback' to track the plan from which
   the assembly was created.)
 
 It's also a way to find out what services the given requirements were mapped
  to. In a Plan file, the services array contains ServiceSpecifications (see
 the EX[1-3] YAML examples under the services node for an example of what
 those look like. In a Plan resource, the services array includes a list of
 service resources so you can see what Solum's model interpreter mapped your
 requirements to.
 
  or,
  c.2) User uses solum cli to 'create/register' a plan by providing
  reference to the repo uri.
   c.2.1) Solum creates the plan resource.
  c.2) User uses solum cli to 'create' an application by specifying the
  created plan
   resource uri
   (In this flow, the plan is actively used).
 
 Yes, this would be another option. I expect that this approach may be used by
 users who want to create multitudes of Assemblies from a given Plan
 resource.
 
 2) Addition of new solum specific attributes in a plan specification is
 interesting.
   I imagine those can be contributed back as a Solum profile to the CAMP spec?
 
 If we want, that input would certainly be welcomed.
 
  3) A model interpreter for generating a Heat stack from a plan is a nice idea.
  For all: Are there any recommended libraries for this?
 
 Good question. There are a number of orchestration systems that we could look
 at as case studies. Anything that has a declarative DSL is likely to have
 implementations that are relevant to our need for a model interpreter. This
 includes Heat.
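
(To make the idea concrete: at its simplest, a model interpreter can
build the template as plain data and serialize it. The resource layout
below is illustrative only, not Solum's actual output:)

    import yaml

    def plan_to_hot(image_id, flavor='m1.small'):
        # emit a minimal HOT document from plan-derived values
        template = {
            'heat_template_version': '2013-05-23',
            'resources': {
                'app_server': {
                    'type': 'OS::Nova::Server',
                    'properties': {'image': image_id, 'flavor': flavor},
                },
            },
        }
        return yaml.safe_dump(template)

    print(plan_to_hot('SOLUM_GLANCE_ID'))  # placeholder image id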
 
 4) Just to confirm, I assume that the api-spec-review etherpad
 

Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Bryan D. Payne

 We can involve people in security reviews without having them on the
  core review team.  They are separate concerns.


Yes, but those people can't ultimately approve the patch.  So you'd need to
have a security reviewer do their review, and then someone who isn't a
security person be able to offer the +1/+2 based on the opinion of the
security reviewer.  This doesn't make any sense to me.  You're involving an
extra person needlessly, and creating extra work.



 This has been discussed quite a bit.  We can't handle security patches
 on gerrit right now while they are embargoed because we can't completely
 hide them.


I think that you're confusing security reviews of new code changes with
reviews of fixes to security problems.  In this part of my email, I'm
talking about the former.  These are not embargoed.  They are just the
everyday improvements to the system.  That is the best time to identify and
gate on security issues.  Without someone on core that can give a -2 when
there's a problem, this will basically never happen.  Then we'll be back to
fixing a greater number of things as bugs.

-bryan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Nathan Kinder
On 12/11/2013 08:08 PM, Bryan D. Payne wrote:
 We can involve people in security reviews without having them on the
 core review team.  They are separate concerns.
 
 
 Yes, but those people can't ultimately approve the patch.  So you'd need
 to have a security reviewer do their review, and then someone who isn't
 a security person be able to offer the +1/+2 based on the opinion of the
 security reviewer.  This doesn't make any sense to me.  You're involving
 an extra person needlessly, and creating extra work.
 
  
 
 This has been discussed quite a bit.  We can't handle security patches
 on gerrit right now while they are embargoed because we can't completely
 hide them.
 
 
 I think that you're confusing security reviews of new code changes with
 reviews of fixes to security problems.  In this part of my email, I'm
 talking about the former.  These are not embargoed.  They are just the
 everyday improvements to the system.  That is the best time to identify
 and gate on security issues.  Without someone on core that can give a -2
 when there's a problem, this will basically never happen.  Then we'll be
 back to fixing a greater number of things as bugs.

+1.  I'd really like to see at least one security representative per
project on core who makes sure that incoming code and blueprints are
following security best practices.  These best practices still need to
be clearly defined, but it's going to be impossible to uphold them once
they are established unless someone with review power is involved.  We
want security to be more proactive instead of reactive.

-NGK

 
 -bryan
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Support for Pecan in Nova

2013-12-11 Thread Mike Perez
On 10:06 Thu 12 Dec , Christopher Yeoh wrote:
 On Thu, Dec 12, 2013 at 8:59 AM, Doug Hellmann
  doug.hellm...@dreamhost.com wrote:

 
 
 
  On Wed, Dec 11, 2013 at 3:41 PM, Ryan Petrello 
  ryan.petre...@dreamhost.com wrote:
 
  Hello,
 
  I’ve spent the past week experimenting with using Pecan for Nova’s API,
  and have opened an experimental review:
 
  https://review.openstack.org/#/c/61303/6
 
  …which implements the `versions` v3 endpoint using pecan (and paves the
  way for other extensions to use pecan).  This is a *potential* approach
  I've considered for gradually moving the V3 API, but I’m open to other
  suggestions (and feedback on this approach).  I’ve also got a few open
  questions/general observations:
 
  1.  It looks like the Nova v3 API is composed *entirely* of extensions
  (including “core” API calls), and that extensions and their routes are
  discoverable and extensible via installed software that registers itself
  via stevedore.  This seems to lead to an API that’s composed of installed
  software, which in my opinion makes it fairly hard to map out the API (as
  opposed to how routes are manually defined in other WSGI frameworks).  I
  assume at this time this design decision has already been solidified for
  v3?
 
 
  Yeah, I brought this up at the summit. I am still having some trouble
  understanding how we are going to express a stable core API for
  compatibility testing if the behavior of the API can be varied so
  significantly by deployment decisions. Will we just list each required
  extension, and forbid any extras for a compliant cloud?
 

  Maybe the issue is caused by me misunderstanding the term extension,
  which (to me) implies an optional component but is perhaps reflecting a
  technical implementation detail instead?
 
 
 Yes and no :-) As Ryan mentions, all API code is a plugin in the V3 API.
 However, some must be loaded or the V3 API
 refuses to start up. In nova/api/openstack/__init__.py we have
 API_V3_CORE_EXTENSIONS which hard codes
 which extensions must be loaded and there is no config option to override
 this (blacklisting a core plugin will result in the
 V3 API not starting up).

 So for compatibility testing I think what will probably happen is that
 we'll be defining a minimum set (API_V3_CORE_EXTENSIONS)
 that must be implemented and clients can rely on that always being present
 on a compliant cloud. But clients can also then query through /extensions
 what other functionality (which is backwards compatible with respect to
 core) may also be present on that specific cloud.

This really seems similar to the idea of having a router class and some
controllers that you map together. From my observation at the summit, calling
everything an extension creates confusion. An extension extends something.
For example, Chrome has extensions, and they extend the core features of a
browser. If you want to do more than back/forward, go to an address, stop,
etc., that's an extension. If you want it to play an audio clip of "stop,
hammer time" after clicking the stop button, that's an example of an
extension.

In OpenStack, we use extensions to extend core. Core are the essential
feature(s) of the project. In Cinder for example, core is volume. In core
you
can create a volume, delete a volume, attach a volume, detach a volume,
etc. If
you want to go beyond that, that's an extension. If you want to do volume
encryption, that's an example of an extension.

I'm worried by the discrepancies this will create among the programs. You
mentioned maintainability being a plus for this. I don't think it'll be
great from the deployer's perspective when one program treats everything
as an extension, some of which are mandatory and which the deployer has
to be mindful of, while the rest of the programs consider all extensions
to be optional.


Thanks,
Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominations to Horizon Core

2013-12-11 Thread Lyle, David
So again, nothing prevents a non-core security reviewer from reviewing 
blueprints and doing code reviews.  Believe me, any security-minded input is 
always welcome and weighed carefully.

Although the principle of having a minimum number of security reviewers in core 
is certainly a fair point of debate, in this particular case, the participation 
level does not warrant the outcry.  

Per http://russellbryant.net/openstack-stats/horizon-reviewers-365.txt

Reviews for the last 365 days in horizon
** -- horizon-core team member
+------------------+------------------------------------------+----------------+
|     Reviewer     | Reviews   -2   -1   +1   +2   +A   +/- %  | Disagreements* |
+------------------+------------------------------------------+----------------+
| paul-mcmillan ** |       2    0    1    0    1    1   50.0%  |     0 (  0.0%) |
+------------------+------------------------------------------+----------------+

As with other projects in OpenStack, removing a person from core merely implies 
that they are not actively reviewing enough to remain current with the code 
base and provide informed reviews with regards to the architecture and project 
direction.  Also in line with other OpenStack projects, reviewers removed from 
core who begin providing regular and meaningful reviews will have a reduced 
period of time to be re-added to core, which I would be very happy to see.

David 

 -Original Message-
 From: Nathan Kinder [mailto:nkin...@redhat.com]
 Sent: Wednesday, December 11, 2013 9:33 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Horizon] Nominations to Horizon Core
 
 On 12/11/2013 08:08 PM, Bryan D. Payne wrote:
  We can involve people in security reviews without having them on the
  core review team.  They are separate concerns.
 
 
  Yes, but those people can't ultimately approve the patch.  So you'd need
  to have a security reviewer do their review, and then someone who isn't
  a security person be able to offer the +1/+2 based on the opinion of the
  security reviewer.  This doesn't make any sense to me.  You're involving
  an extra person needlessly, and creating extra work.
 
 
 
  This has been discussed quite a bit.  We can't handle security patches
  on gerrit right now while they are embargoed because we can't
 completely
  hide them.
 
 
  I think that you're confusing security reviews of new code changes with
  reviews of fixes to security problems.  In this part of my email, I'm
  talking about the former.  These are not embargoed.  They are just the
  everyday improvements to the system.  That is the best time to identify
  and gate on security issues.  Without someone on core that can give a -2
  when there's a problem, this will basically never happen.  Then we'll be
  back to fixing a greater number of things as bugs.
 
 +1.  I'd really like to see at least one security representative per
  project on core who makes sure that incoming code and blueprints are
 following security best practices.  These best practices still need to
 be clearly defined, but it's going to be impossible to uphold them once
 they are established unless someone with review power is involved.  We
 want security to be more proactive instead of reactive.
 
 -NGK
 
 
  -bryan
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev