Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Jay Dobies



On 12/06/2013 09:39 PM, Tzu-Mainn Chen wrote:

Thanks for the comments and questions!  I fully expect that this list of 
requirements
will need to be fleshed out, refined, and heavily modified, so the more the 
merrier.

Comments inline:



*** Requirements are assumed to be targeted for Icehouse, unless marked
otherwise:
(M) - Maybe Icehouse, dependency on other in-development features
(F) - Future requirement, after Icehouse

* NODES


Note that everything in this section should be Ironic API calls.


* Creation
   * Manual registration
  * hardware specs from Ironic based on mac address (M)


Ironic today will want IPMI address + MAC for each NIC + disk/cpu/memory
stats
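
To make that concrete, manual registration against the Ironic REST API would
look roughly like the sketch below (the endpoint, credentials and driver name
are assumptions here, not Tuskar or TripleO code):

import json

import requests

IRONIC = "http://ironic.example.com:6385"          # assumed endpoint
HEADERS = {"X-Auth-Token": "keystone-token",       # assumed auth token
           "Content-Type": "application/json"}

# IPMI address/credentials plus disk/cpu/memory stats for one node.
node = {
    "driver": "pxe_ipmitool",
    "driver_info": {"ipmi_address": "10.0.0.21",
                    "ipmi_username": "admin",
                    "ipmi_password": "secret"},
    "properties": {"cpus": 8, "memory_mb": 32768, "local_gb": 500},
}
resp = requests.post(IRONIC + "/v1/nodes", headers=HEADERS,
                     data=json.dumps(node))
node_uuid = resp.json()["uuid"]

# The MAC for each NIC is registered as a port attached to the node.
for mac in ("52:54:00:12:34:56", "52:54:00:12:34:57"):
    requests.post(IRONIC + "/v1/ports", headers=HEADERS,
                  data=json.dumps({"node_uuid": node_uuid, "address": mac}))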


  * IP auto populated from Neutron (F)


Do you mean IPMI IP ? I'd say IPMI address managed by Neutron here.


   * Auto-discovery during undercloud install process (M)
* Monitoring
* assignment, availability, status
* capacity, historical statistics (M)


Why is this under 'nodes'? I challenge the idea that it should be
there. We will need to surface some stuff about nodes, but the
underlying idea is to take a cloud approach here - so we're monitoring
services, that happen to be on nodes. There is room to monitor nodes,
as an undercloud feature set, but lets be very very specific about
what is sitting at what layer.


That's a fair point.  At the same time, the UI does want to monitor both
services and the nodes that the services are running on, correct?  I would
think that a user would want this.

Would it be better to explicitly split this up into two separate requirements?


That was my understanding as well, that Tuskar would not only care about 
the services of the undercloud but the health of the actual hardware on 
which it's running. As I write that I think you're correct, two separate 
requirements feels much more explicit in how that's different from 
elsewhere in OpenStack.



* Management node (where triple-o is installed)


This should be plural :) - TripleO isn't a single service to be
installed - We've got Tuskar, Ironic, Nova, Glance, Keystone, Neutron,
etc.


I misspoke here - this should be where the undercloud is installed.  My
current understanding is that our initial release will only support the 
undercloud
being installed onto a single node, but my understanding could very well be 
flawed.


* created as part of undercloud install process
* can create additional management nodes (F)
 * Resource nodes


 ^ nodes is again confusing layers - nodes are
what things are deployed to, but they aren't the entry point


 * searchable by status, name, cpu, memory, and all attributes from
 ironic
 * can be allocated as one of four node types


Not by users though. We need to stop thinking of this as 'what we do
to nodes' - Nova/Ironic operate on nodes, we operate on Heat
templates.


Right, I didn't mean to imply that users would be doing this allocation.  But 
once Nova
does this allocation, the UI does want to be aware of how the allocation is 
done, right?
That's what this requirement meant.


 * compute
 * controller
 * object storage
 * block storage
 * Resource class - allows for further categorization of a node type
 * each node type specifies a single default resource class
 * allow multiple resource classes per node type (M)


Whats a node type?


Compute/controller/object storage/block storage.  Is another term besides node 
type
more accurate?




 * optional node profile for a resource class (M)
 * acts as filter for nodes that can be allocated to that
 class (M)


I'm not clear on this - you can list the nodes that have had a
particular thing deployed on them; we probably can get a good answer
to being able to see what nodes a particular flavor can deploy to, but
we don't want to be second guessing the scheduler..


Correct; the goal here is to provide a way through the UI to send additional 
filtering
requirements that will eventually be passed into the scheduler, allowing the 
scheduler
to apply additional filters.
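
As a sketch of one way those filtering requirements could reach the scheduler -
flavor extra_specs that a capabilities filter compares against node properties -
something like the following (names and credentials are made up; this is not
Tuskar code):

from novaclient.v1_1 import client  # novaclient API of this era; an assumption

nova = client.Client("admin", "password", "admin",
                     "http://undercloud.example.com:5000/v2.0")

# Hypothetical flavor standing in for a "controller" resource class profile.
flavor = nova.flavors.create(name="rc-controller",
                             ram=65536, vcpus=16, disk=500)

# Extra specs become additional criteria the scheduler's filters can apply
# when picking nodes for this class.
flavor.set_keys({"cpu_arch": "x86_64"})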


 * nodes can be viewed by node types
 * additional group by status, hardware specification


*Instances* - e.g. hypervisors, storage, block storage etc.


 * controller node type


Again, need to get away from node type here.


* each controller node will run all openstack services
   * allow each node to run specified service (F)
* breakdown by workload (percentage of cpu used per node) (M)
 * Unallocated nodes


This implies an 'allocation' step, that we don't have - how about
'Idle nodes' or something.


Is it imprecise to say that nodes are allocated by the scheduler?  Would 
something like
'active/idle' be better?


 * Archived nodes (F)
 * Will be 

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread mar...@redhat.com
On 09/12/13 18:01, Jay Dobies wrote:
 I believe we are still 'fighting' here with two approaches and I believe
 we need both. We can't only provide a way 'give us resources we will do
 a magic'. Yes this is preferred way - especially for large deployments,
 but we also need a fallback so that user can say - no, this node doesn't
 belong to the class, I don't want it there - unassign. Or I need to have
 this node there - assign.
 
 +1 to this. I think there are still a significant amount of admins out
 there that are really opposed to magic and want that fine-grained
 control. Even if they don't use it that frequently, in my experience
 they want to know it's there in the event they need it (and will often
 dream up a case that they'll need it).

+1 to the responses to the 'automagic' vs 'manual' discussion. The
latter is in fact only really possible in small deployments. But that's
not to say it is not a valid use case. Perhaps we need to split it
altogether into two use cases.

At least we should have a level of agreement here and register
blueprints for both: for Icehouse the auto selection of which services
go onto which nodes (i.e. allocation of services to nodes is entirely
transparent). For post Icehouse allow manual allocation of services to
nodes. This last bit may also coincide with any work being done in
Ironic/Nova scheduler which will make this allocation prettier than the
current force_nodes situation.
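
For reference, the current force_nodes situation boils down to an admin pinning
an instance to a specific host/node by overloading the availability zone
string, roughly as in this sketch (names are made up; not TripleO code):

from novaclient.v1_1 import client  # novaclient API of this era; an assumption

nova = client.Client("admin", "password", "admin",
                     "http://undercloud.example.com:5000/v2.0")

# The ZONE:HOST:NODE form - empty host, explicit node - is turned into
# force_hosts/force_nodes in the request spec, so the scheduler bypasses
# its normal filtering for this request.
nova.servers.create(
    name="overcloud-controller-0",
    image=nova.images.find(name="overcloud-control"),
    flavor=nova.flavors.find(name="baremetal"),
    availability_zone="nova::node-0e8f1a",
)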


 
 I'm absolutely for pushing the magic approach as the preferred use. And
 in large deployments that's where people are going to see the biggest
 gain. The fine-grained approach can even be pushed off as a future
 feature. But I wouldn't be surprised to see people asking for it and I'd
 like to at least be able to say it's been talked about.
 
 - As an infrastructure administrator, Anna wants to be able to view
 the history of nodes that have been in a deployment.
 Why? This is super generic and could mean anything.
 I believe this has something to do with 'archived nodes'. But correct me
 if I am wrong.

 -- Jarda




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron] grenade fails due to iso8601 version mismatch

2013-12-09 Thread Sean Dague
On 12/09/2013 05:16 AM, Akihiro Motoki wrote:
 Hi,
 
 A patch in neutronclient [1] which updates iso8601 requirement to 0.1.8
 fails with grenade due to iso8601 version mismatch [2].
 (It blocks the patch for a month.)
 
 The error occurs in old devstack (i.e., grizzly devstack).
 I see both iso8601 requirements of 0.1.4 and 0.1.8 in the devstack log.
 It seems it is similar to an issue addressed in testing infra during Havana 
 cycle.
 
 What is the way to address it?
 
 2013-12-06 23:44:46.313 | 2013-12-06 23:44:46 Installed /opt/stack/old/nova
 2013-12-06 23:44:46.313 | 2013-12-06 23:44:46 Processing dependencies for 
 nova==2013.1.5.a17.g4655df1
 2013-12-06 23:44:46.314 | 2013-12-06 23:44:46 error: Installed distribution 
 iso8601 0.1.4 conflicts with requirement iso8601>=0.1.8
 
 [1] https://review.openstack.org/#/c/56654/
 [2] http://logs.openstack.org/54/56654/1/check/check-grenade-dsvm/116b47e/
 [3] 
 http://logs.openstack.org/54/56654/1/check/check-grenade-dsvm/116b47e/console.html.gz#_2013-12-06_23_44_46_313

We can't get past this until we are actually doing the stable side of
grenade from havana instead of grizzly, and thus get global requirements
sync to step us across the boundary.
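
The conflict is a plain setuptools-level one; assuming iso8601 0.1.4 is what is
installed, it can be reproduced in isolation like this:

import pkg_resources

try:
    # With iso8601 0.1.4 installed, this raises the same kind of
    # VersionConflict that the grenade run hits while processing
    # nova's dependencies.
    pkg_resources.require("iso8601>=0.1.8")
except pkg_resources.VersionConflict as exc:
    print("Conflict: %s" % (exc,))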

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Tzu-Mainn Chen
  - As an infrastructure administrator, Anna wants to be able to unallocate a
  node from a deployment.
 
  Why? What's her motivation? One plausible one for me is 'a machine
  needs to be serviced so Anna wants to remove it from the deployment to
  avoid causing user visible downtime.'  So let's say that: Anna needs to
  be able to take machines out of service so they can be maintained or
  disposed of.

 Node being serviced is a different user story for me.

 I believe we are still 'fighting' here with two approaches and I believe we
 need both. We can't only provide a way 'give us resources we will do a
 magic'. Yes this is preferred way - especially for large deployments, but we
 also need a fallback so that user can say - no, this node doesn't belong to
 the class, I don't want it there - unassign. Or I need to have this node
 there - assign.
Just for clarification - the wireframes don't cover individual nodes being 
manually assigned, do they? I thought the concession to manual control was 
entirely through resource classes and node profiles, which are still parameters 
to be passed through to the nova-scheduler filter. To me, that's very different 
from manual assignment. 

Mainn 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][neutron] Post-mortem debugging for tests?

2013-12-09 Thread Ben Nemec

On 2013-12-09 08:02, Maru Newby wrote:

Are any other projects interested in adding back the post-mortem
debugging support we lost in the move away from nose?  I have a patch
in review for neutron and salv-orlando asked whether oslo might be the
better place for it.

https://review.openstack.org/#/c/43776/


Seems like a reasonable thing to add to Oslo.  Glancing through the code 
it doesn't look like much of it is Neutron-specific, so it shouldn't be 
too painful to do that.


Also, adding it to Oslo doesn't necessarily mean you can't merge the 
current patch either.  We actually like to see code in use in a project 
before it gets to Oslo, and you can always switch Neutron to use the 
Oslo version once it's merged there.
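
For anyone unfamiliar with the feature, post-mortem debugging support boils
down to something like the sketch below (not the code under review): drop into
pdb on the failing frame instead of only recording the traceback.

import pdb
import sys
import traceback


def run_with_post_mortem(test_callable):
    """Run a test function; on failure, open pdb at the failing frame."""
    try:
        test_callable()
    except Exception:
        exc_type, exc_value, tb = sys.exc_info()
        traceback.print_exception(exc_type, exc_value, tb)
        pdb.post_mortem(tb)
        raise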


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync

2013-12-09 Thread Sean Dague
On 12/06/2013 05:40 PM, Ben Nemec wrote:
 On 2013-12-06 16:30, Clint Byrum wrote:
 Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:


 On 2013-12-06 15:14, Yuriy Taraday wrote:

  Hello, Sean.
 
  I get the issue with upgrade path. User doesn't want to update
 config unless one is forced to do so.
  But introducing code that weakens security and let it stay is an
 unconditionally bad idea.
  It looks like we have to weigh two evils: having troubles upgrading
 and lessening security. That's obvious.
 
  Here are my thoughts on what we can do with it:
  1. I think we should definitely force user to do appropriate
 configuration to let us use secure ways to do locking.
  2. We can wait one release to do so, e.g. issue a deprecation
 warning now and force user to do it the right way later.
  3. If we are going to do 2. we should do it in the service that is
 affected not in the library because library shouldn't track releases
 of an application that uses it. It should do its thing and do it
 right (secure).
 
  So I would suggest to deal with it in Cinder by importing
 'lock_path' option after parsing configs and issuing a deprecation
 warning and setting it to tempfile.gettempdir() if it is still None.

 This is what Sean's change is doing, but setting lock_path to
 tempfile.gettempdir() is the security concern.

 Yuriy's suggestion is that we should let Cinder override the config
 variable's default with something insecure. Basically only deprecate
 it in Cinder's world, not oslo's. That makes more sense from a library
 standpoint as it keeps the library's expected interface stable.
 
 Ah, I see the distinction now.  If we get this split off into
 oslo.lockutils (which I believe is the plan), that's probably what we'd
 have to do.
 

 Since there seems to be plenty of resistance to using /tmp by default,
 here is my proposal:

 1) We make Sean's change to open files in append mode. I think we can
 all agree this is a good thing regardless of any config changes.

 2) Leave lockutils broken in Icehouse if lock_path is not set, as I
 believe Mark suggested earlier. Log an error if we find that
 configuration. Users will be no worse off than they are today, and if
 they're paying attention they can get the fixed lockutils behavior
 immediately.

 Broken how? Broken in that it raises an exception, or broken in that it
 carries a security risk?
 
 Broken as in external locks don't actually lock.  If we fall back to
 using a local semaphore it might actually be a little better because
 then at least the locks work within a single process, whereas before
 there was no locking whatsoever.

Right, so broken as in doesn't actually lock, potentially completely
scrambles the user's data, breaking them forever.
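
For context, what an 'external' lock under lock_path amounts to is roughly the
sketch below, which also shows why opening the lock file in append mode matters
('a' never truncates an existing lock file, while 'w' would). This is an
illustration, not the oslo code:

import fcntl
import os


def external_lock(lock_path, name):
    if not lock_path:
        # The "broken" case under discussion: with no shared lock_path there
        # is nothing for separate processes to flock on.
        raise RuntimeError("lock_path is not set; cross-process locking "
                           "cannot work")
    handle = open(os.path.join(lock_path, name), "a")  # append: never clobber
    fcntl.flock(handle, fcntl.LOCK_EX)                 # blocks until acquired
    return handle                                      # released on close()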

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread David Chadwick
Ack

On 09/12/2013 15:54, Tiwari, Arvind wrote:
 Thanks David,
 
 Let me update the etherpad with this proposal.
 
 Arvind
 
 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
 Sent: Friday, December 06, 2013 2:44 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List (not for 
 usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 Another alternative is to change role name into role display name,
 indicating that the string is only to be used in GUIs, is not guaranteed
 to be unique, is set by the role creator, can be any string in any
 character set, and is not used by the system anywhere. Only role ID is
 used by the system, in policy evaluation, in user-role assignments, in
 permission-role assignments etc.
 
 regards
 
 David
 
 On 05/12/2013 16:21, Tiwari, Arvind wrote:
 Hi David,

 Let me capture these details in ether pad. I will drop an email after adding 
 these details in etherpad.

 Thanks,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
 Sent: Thursday, December 05, 2013 4:15 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List (not for 
 usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 we are making good progress, but what I dont like about your proposal
 below is that the role name is not unique. There can be multiple roles
 with the same name, but different IDs, and different scopes. I dont like
 this, and I think it would be confusing to users/administrators. I think
 the role names should be different as well. This is not difficult to
 engineer if the names are hierarchically structured based on the name of
 the role creator. The creator might be owner of the resource that is
 being scoped, but it need not necessarily be. Assuming it was, then in
 your examples below we might have role names of NovaEast.admin and
 NovaWest.admin. Since these are strings, policies can be easily adapted
 to match on NovaWest.admin instead of admin.
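
As a concrete illustration, policy rules match role names as plain strings, so
a deployment's rules could be scoped along these lines (rule names below are
examples, not taken from a real policy file):

# Policy rules for the NovaWest endpoint match the hierarchical role name
# directly instead of the generic "admin".
nova_west_policy = {
    "compute:delete": "role:NovaWest.admin",
    "compute:get_all_tenants": "role:NovaWest.admin",
}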

 regards

 david

 On 04/12/2013 17:21, Tiwari, Arvind wrote:
 Hi Adam,

 I have added my comments in line. 

 As per my request yesterday and David's proposal, following role-def data 
 model is looks generic enough and seems innovative to accommodate future 
 extensions.

 {
   "role": {
     "id": "76e72a",
     "name": "admin",    (you can give whatever name you like)
     "scope": {
       "id": "---id--",    (the ID should be 1-to-1 mapped with the resource in
       "type" and must be an immutable value)
       "type": "service | file | domain etc.",    (the type can be any type of
       resource which explains the scoping context)
       "interface": "--interface--"    (we are still working on this field; the
       idea of this optional field is to indicate the interface of the resource
       (endpoint for a service, path for a file) for which the role-def is
       created, and it can be empty)
     }
   }
 }

 Based on the above data model, two admin roles for Nova for two separate 
 regions would be as below:

 {
   "role": {
     "id": "76e71a",
     "name": "admin",
     "scope": {
       "id": "110",            (suppose 110 is the Nova serviceId)
       "interface": "1101",    (suppose 1101 is the Nova region East endpointId)
       "type": "service"
     }
   }
 }

 {
   "role": {
     "id": "76e72a",
     "name": "admin",
     "scope": {
       "id": "110",
       "interface": "1102",    (suppose 1102 is the Nova region West endpointId)
       "type": "service"
     }
   }
 }

 This way we can keep role-assignments abstracted from the resource on which the 
 assignment is created. This also opens the door to having service- and/or 
 endpoint-scoped tokens, as I mentioned in https://etherpad.openstack.org/p/1Uiwcbfpxq.

 David, I have updated 
 https://etherpad.openstack.org/p/service-scoped-role-definition line #118 
 explaining the rationale behind the field.
 I would also appreciate your vision on 
 https://etherpad.openstack.org/p/1Uiwcbfpxq too, which supports the 
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.


 Thanks,
 Arvind

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Tuesday, December 03, 2013 6:52 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
 questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 I've been thinking about your comment that nested roles are confusing
 AT: Thanks for considering my comment about nested role-def.

 What if we backed off and said the following:


 Some role-definitions are owned by services.  If a role definition is 
 owned by a service, then in the role assignment lists in tokens, those roles 
 will be prefixed by the service name.  '/' is a reserved character and will 
 be used as the divider between segments of the role definition 

 That drops arbitrary 

Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-09 Thread Herman Narkaytis
Hi All,
  For the last couple of months the Mirantis team has been working on a new
 scalable scheduler architecture. The main concept was proposed by Boris
 Pavlovic in the following blueprint
 https://blueprints.launchpad.net/nova/+spec/no-db-scheduler and Alexey
 Ovchinnikov prepared a bunch of patches
https://review.openstack.org/#/c/45867/9
  This patch set was intensively reviewed by community and there was a call
for some kind of documentation that describes overall architecture and
details of implementation. Here is an etherpad document
https://etherpad.openstack.org/p/scheduler-design-proposal (a copy in
google doc
https://docs.google.com/a/mirantis.com/document/d/1irmDDYWWKWAGWECX8bozu8AAmzgQxMCAAdjhk53L9aM/edit
).
  Comments and criticism are highly welcome.

HHN.



On Wed, Dec 4, 2013 at 12:12 AM, Russell Bryant rbry...@redhat.com wrote:

 On 12/03/2013 03:17 AM, Robert Collins wrote:
  The team size was a minimum, not a maximum - please add your names.
 
  We're currently waiting on the prerequisite blueprint to land before
  work starts in earnest; and for the blueprint to be approved (he says,
  without having checked to see if it has been now:))

 I approved it.

 https://blueprints.launchpad.net/nova/+spec/forklift-scheduler-breakout

 Once this is moving, please keep me in the loop on progress.

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Herman Narkaytis
DoO Ru, PhD
Tel.: +7 (8452) 674-555, +7 (8452) 431-555
Tel.: +7 (495) 640-4904
Tel.: +7 (812) 640-5904
Tel.: +38(057)728-4215
Tel.: +1 (408) 715-7897
ext 2002
http://www.mirantis.com

This email (including any attachments) is confidential. If you are not the
intended recipient you must not copy, use, disclose, distribute or rely on
the information contained in it. If you have received this email in error,
please notify the sender immediately by reply email and delete the email
from your system. Confidentiality and legal privilege attached to this
communication are not waived or lost by reason of mistaken delivery to you.
Mirantis does not guarantee (that this email or the attachment's) are
unaffected by computer virus, corruption or other defects. Mirantis may
monitor incoming and outgoing emails for compliance with its Email Policy.
Please note that our servers may not be located in your country.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron] grenade fails due to iso8601 version mismatch

2013-12-09 Thread Akihiro Motoki
Thanks for the clarification.
After I posted the mail, I found the bug [1] due to the same cause and
ongoing work.

[1] https://bugs.launchpad.net/openstack-ci/+bug/1255419
jenkins tests fails for neutron/grizzly duo to iso8601 version
requirement conflict

On Tue, Dec 10, 2013 at 1:14 AM, Sean Dague s...@dague.net wrote:
 On 12/09/2013 05:16 AM, Akihiro Motoki wrote:
 Hi,

 A patch in neutronclient [1] which updates iso8601 requirement to 0.1.8
 fails with grenade due to iso8601 version mismatch [2].
 (It blocks the patch for a month.)

 The error occurs in old devstack (i.e., grizzly devstack).
  I see both iso8601 requirements of 0.1.4 and 0.1.8 in the devstack log.
 It seems it is similar to an issue addressed in testing infra during Havana 
 cycle.

 What is the way to address it?

 2013-12-06 23:44:46.313 | 2013-12-06 23:44:46 Installed /opt/stack/old/nova
 2013-12-06 23:44:46.313 | 2013-12-06 23:44:46 Processing dependencies for 
 nova==2013.1.5.a17.g4655df1
 2013-12-06 23:44:46.314 | 2013-12-06 23:44:46 error: Installed distribution 
  iso8601 0.1.4 conflicts with requirement iso8601>=0.1.8

 [1] https://review.openstack.org/#/c/56654/
 [2] http://logs.openstack.org/54/56654/1/check/check-grenade-dsvm/116b47e/
 [3] 
 http://logs.openstack.org/54/56654/1/check/check-grenade-dsvm/116b47e/console.html.gz#_2013-12-06_23_44_46_313

 We can't get past this until we are actually doing the stable side of
 grenade from havana instead of grizzly, and thus get global requirements
 sync to step us across the boundary.

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Second working group meeting on language packs today

2013-12-09 Thread Clayton Coleman
Hi,

We will hold our second Git Integration working group meeting on IRC in #solum 
on Monday, December 9, 2013 1700 UTC / 0900 PST.

Agenda for today's meeting:
* Administrative:
* Decide whether to continue this meeting at the same time in 
January
* Topics:
* Determine minimal set of milestone-1 functionality
* Get volunteers for milestone-1 example language packs
* Discuss names
* General discussion

- Original Message -
 Meeting 1 was conducted on Monday and consisted mostly of freeform
 discussions, clarification, and Q&A:
 http://irclogs.solum.io/2013/solum.2013-12-02-17.05.html.  The next meeting
 is Monday, December 9, 2013 1700 UTC.
 
 Wiki:
 https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/BuildingSourceIntoDeploymentArtifacts
 
 Action and open topics:
 
 * Updated proposal wiki with personas
 * Discussed whether a single command or multiple commands are needed to
 transform an image (one for build+test, or two)
 ** Believe 1 is sufficient to encapsulate the entire transformation, use
 cases are needed to justify extras.
 * The parent blueprint and wiki page represents the general design and
 philosophy, and specific child blueprints will target exact scenarios for
 milestones
 * [NEXT] Helpful discussion on the injection of artifacts such that the
 transformation script can operate on them.  No clear design, but lots of
 suggestions
 * Strong consensus that the transformation script should not depend on Solum
 or OpenStack concepts - a developer should be able to debug a transformation
 by using the image themselves, or run it outside of Solum.
 * Should anticipate that the transformation environment and execution
 environment might have different resource needs - i.e. Java maven may
 require more memory than Java execution
 * [NEXT] Should the transformation happen inside the environment the app is
 destined for (i.e. able to reach existing deployed resources) or in a
 special build env?
 ** No consensus, arguments for and against
 ** Transformation environments may need external network access, production
 environments may not allow it
 * Transformation should remove or disable debugging or compile tools prior to
 execution for security - this is a recommendation to image authors
 * The final execution environment may dynamically define exposed ports that
 Solum must be aware of
 ** For M1 the recommendation was to focus on upfront definition via metadata
 * [AGREED] No external metadata service would be part of Solum, any metadata
 that needs to be provided to the image during transformation would be done
 via injection and be an argument to prepare
 ** That might be via userdata or docker environment variables and bind
 mounted files

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync

2013-12-09 Thread Clint Byrum
Excerpts from Sean Dague's message of 2013-12-09 08:17:45 -0800:
 On 12/06/2013 05:40 PM, Ben Nemec wrote:
  On 2013-12-06 16:30, Clint Byrum wrote:
  Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:
 
 
  On 2013-12-06 15:14, Yuriy Taraday wrote:
 
   Hello, Sean.
  
   I get the issue with upgrade path. User doesn't want to update
  config unless one is forced to do so.
   But introducing code that weakens security and let it stay is an
  unconditionally bad idea.
   It looks like we have to weigh two evils: having troubles upgrading
  and lessening security. That's obvious.
  
   Here are my thoughts on what we can do with it:
   1. I think we should definitely force user to do appropriate
  configuration to let us use secure ways to do locking.
   2. We can wait one release to do so, e.g. issue a deprecation
  warning now and force user to do it the right way later.
   3. If we are going to do 2. we should do it in the service that is
  affected not in the library because library shouldn't track releases
  of an application that uses it. It should do its thing and do it
  right (secure).
  
   So I would suggest to deal with it in Cinder by importing
  'lock_path' option after parsing configs and issuing a deprecation
  warning and setting it to tempfile.gettempdir() if it is still None.
 
  This is what Sean's change is doing, but setting lock_path to
  tempfile.gettempdir() is the security concern.
 
  Yuriy's suggestion is that we should let Cinder override the config
  variable's default with something insecure. Basically only deprecate
  it in Cinder's world, not oslo's. That makes more sense from a library
  standpoint as it keeps the library's expected interface stable.
  
  Ah, I see the distinction now.  If we get this split off into
  oslo.lockutils (which I believe is the plan), that's probably what we'd
  have to do.
  
 
  Since there seems to be plenty of resistance to using /tmp by default,
  here is my proposal:
 
  1) We make Sean's change to open files in append mode. I think we can
  all agree this is a good thing regardless of any config changes.
 
  2) Leave lockutils broken in Icehouse if lock_path is not set, as I
  believe Mark suggested earlier. Log an error if we find that
  configuration. Users will be no worse off than they are today, and if
  they're paying attention they can get the fixed lockutils behavior
  immediately.
 
  Broken how? Broken in that it raises an exception, or broken in that it
  carries a security risk?
  
  Broken as in external locks don't actually lock.  If we fall back to
  using a local semaphore it might actually be a little better because
  then at least the locks work within a single process, whereas before
  there was no locking whatsoever.
 
 Right, so broken as in doesn't actually locks, potentially completely
 scrambles the user's data, breaking them forever.
 

Things I'd like to see OpenStack do in the short term, ranked in ascending
order of importance:

4) Upgrade smoothly.
3) Scale.
2) Help users manage external risks.
1) Not do what Sean described above.

I mean, how can we even suggest silently destroying integrity?

I suggest merging Sean's patch and putting a warning in the release
notes that running without setting this will be deprecated in the next
release. If that is what this is preventing, this debate should not have
happened, and I sincerely apologize for having delayed it. I believe my
mistake was assuming this was something far more trivial than "without
this patch we destroy users' data".

I thought we were just talking about making upgrades work. :-P

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread David Boucha
On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com wrote:



 On 12/08/2013 07:36 AM, Robert Collins wrote:
  On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com wrote:
 
 
  I suggested salt because we could very easily make trove and savanna into
  salt masters (if we wanted to) just by having them import salt library
  and run an api call. When they spin up nodes using heat, we could easily
  have that do the cert exchange - and the admins of the site need not
  know _anything_ about salt, puppet or chef - only about trove or savanna.
 
  Are salt masters multi-master / HA safe?
 
  E.g. if I've deployed 5 savanna API servers to handle load, and they
  all do this 'just import', does that work?
 
  If not, and we have to have one special one, what happens when it
  fails / is redeployed?

 Yes. You can have multiple salt masters.

  Can salt minions affect each other? Could one pretend to be a master,
  or snoop requests/responses to another minion?

 Yes and no. By default no - and this is protected by key encryption and
 whatnot. They can affect each other if you choose to explicitly grant
 them the ability to. That is - you can give a minion an acl to allow it
 inject specific command requests back up into the master. We use this in
 the infra systems to let a jenkins slave send a signal to our salt
 system to trigger a puppet run. That's all that slave can do though -
 send the signal that the puppet run needs to happen.

 However - I don't think we'd really want to use that in this case, so I
 think they answer you're looking for is no.

  Is salt limited: is it possible to assert that we *cannot* run
  arbitrary code over salt?

 In as much as it is possible to assert that about any piece of software
 (bugs, of course, blah blah). But the messages that salt sends to a
 minion are "run this thing that you have a local definition for" rather
 than "here, have some python and run it".

 Monty



Salt was originally designed to be a unified agent for a system like
openstack. In fact, many people use it for this purpose right now.

I discussed this with our team management and this is something SaltStack
wants to support.

Are there any specific things that the salt minion lacks right now to
support this use case?
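
For reference, the 'import salt library and run an api call' pattern described
above might look like the following sketch, using salt's documented LocalClient
interface (minion ids here are made up):

import salt.client

local = salt.client.LocalClient()

# Ping every accepted minion, then run a predefined module function on one.
print(local.cmd("*", "test.ping"))
print(local.cmd("guest-agent-01", "state.highstate"))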

-- 
Dave Boucha  |  Sr. Engineer

Join us at SaltConf, Jan. 28-30, 2014 in Salt Lake City. www.saltconf.com


5272 South College Drive, Suite 301 | Murray, UT 84123
*office* 801-305-3563
d...@saltstack.com | www.saltstack.com http://saltstack.com/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Second working group meeting on language packs today

2013-12-09 Thread Clayton Coleman
- Original Message -
 Hi,
 
 We will hold our second Git Integration working group meeting on IRC in
 #solum on Monday, December 9, 2013 1700 UTC / 0900 PST.
 
 Agenda for today's meeting:
   * Administrative:
   * Decide whether to continue this meeting at the same time in 
 January
   * Topics:
   * Determine minimal set of milestone-1 functionality
 * Get volunteers for milestone-1 example language packs

Adding:
* Discuss 
https://wiki.openstack.org/wiki/Solum/specify-lang-pack-design

   * Discuss names
   * General discussion
 
 - Original Message -
  Meeting 1 was conducted on Monday and consisted mostly of freeform
  discussions, clarification, and Q&A:
  http://irclogs.solum.io/2013/solum.2013-12-02-17.05.html.  The next meeting
  is Monday, December 9, 2013 1700 UTC.
  
  Wiki:
  https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/BuildingSourceIntoDeploymentArtifacts
  
  Action and open topics:
  
  * Updated proposal wiki with personas
  * Discussed whether a single command or multiple commands are needed to
  transform an image (one for build+test, or two)
  ** Believe 1 is sufficient to encapsulate the entire transformation, use
  cases are needed to justify extras.
  * The parent blueprint and wiki page represents the general design and
  philosophy, and specific child blueprints will target exact scenarios for
  milestones
  * [NEXT] Helpful discussion on the injection of artifacts such that the
  transformation script can operate on them.  No clear design, but lots of
  suggestions
  * Strong consensus that the transformation script should not depend on
  Solum
  or OpenStack concepts - a developer should be able to debug a
  transformation
  by using the image themselves, or run it outside of Solum.
  * Should anticipate that the transformation environment and execution
  environment might have different resource needs - i.e. Java maven may
  require more memory than Java execution
  * [NEXT] Should the transformation happen inside the environment the app is
  destined for (i.e. able to reach existing deployed resources) or in a
  special build env?
  ** No consensus, arguments for and against
  ** Transformation environments may need external network access, production
  environments may not allow it
  * Transformation should remove or disable debugging or compile tools prior
  to
  execution for security - this is a recommendation to image authors
  * The final execution environment may dynamically define exposed ports that
  Solum must be aware of
  ** For M1 the recommendation was to focus on upfront definition via
  metadata
  * [AGREED] No external metadata service would be part of Solum, any
  metadata
  that needs to be provided to the image during transformation would be done
  via injection and be an argument to prepare
  ** That might be via userdata or docker environment variables and bind
  mounted files
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Schduler] Volunteers wanted for a modest proposal for an external scheduler in our lifetime

2013-12-09 Thread Clint Byrum
Excerpts from Herman Narkaytis's message of 2013-12-09 08:18:17 -0800:
 Hi All,
    For the last couple of months the Mirantis team has been working on a new
  scalable scheduler architecture. The main concept was proposed by Boris
  Pavlovic in the following blueprint
  https://blueprints.launchpad.net/nova/+spec/no-db-scheduler and Alexey
  Ovchinnikov prepared a bunch of patches
 https://review.openstack.org/#/c/45867/9
   This patch set was intensively reviewed by community and there was a call
 for some kind of documentation that describes overall architecture and
 details of implementation. Here is an etherpad document
 https://etherpad.openstack.org/p/scheduler-design-proposal (a copy in
 google doc
 https://docs.google.com/a/mirantis.com/document/d/1irmDDYWWKWAGWECX8bozu8AAmzgQxMCAAdjhk53L9aM/edit
 ).
    Comments and criticism are highly welcome.
 

Looks great. I think I would post this message as new rather than a
reply. It isn't really related to the original thread. Many people
who are interested in scheduler improvements may have already killed this
thread in their mail reader and thus may miss your excellent message. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-09 Thread Keith Basil
On Dec 5, 2013, at 8:11 PM, Fox, Kevin M wrote:

 I think the security issue can be handled by not actually giving the 
 underlying resource to the user in the first place.
 
 So, for example, if I wanted a bare metal node's worth of resource for my own 
 containering, I'd ask for a bare metal node and use a blessed image that 
 contains docker+nova bits that would hook back to the cloud. I wouldn't be 
 able to login to it, but containers started on it would be able to access my 
 tenant's networks. All access to it would have to be through nova 
 suballocations. The bare resource would count against my quotas, but nothing 
 run under it.
 
So this would be an extremely light weight hypervisor alternative, then?

It's interesting because bare-metal-to-tenant security issues are 
tricky
to overcome.

-k

 Come to think of it, this sounds somewhat similar to what is planned for 
 Neutron service vm's. They count against the user's quota on one level but 
 not all access is directly given to the user. Maybe some of the same 
 implementation bits could be used.
 
 Thanks,
 Kevin
 
 From: Mark McLoughlin [mar...@redhat.com]
 Sent: Thursday, December 05, 2013 1:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources
 
 Hi Kevin,
 
 On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:
 Hi all,
 
 I just want to run a crazy idea up the flag pole. TripleO has the
 concept of an under and over cloud. In starting to experiment with
 Docker, I see a pattern start to emerge.
 
 * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I may want to run multiple VM's on it to reduce my own
 cost. Now I have to manage the BareMetal nodes myself or nest
 OpenStack into them.
 * As a User, I may want to allocate a VM. I then want to run multiple
 Docker containers on it to use it more efficiently. Now I have to
 manage the VM's myself or nest OpenStack into them.
 * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I then want to run multiple Docker containers on it to
 use it more efficiently. Now I have to manage the BareMetal nodes
 myself or nest OpenStack into them.
 
 I think this can then be generalized to:
 As a User, I would like to ask for resources of one type (One AZ?),
 and be able to delegate resources back to Nova so that I can use Nova
 to subdivide and give me access to my resources as a different type.
 (As a different AZ?)
 
 I think this could potentially cover some of the TripleO stuff without
 needing an over/under cloud. For that use case, all the BareMetal
 nodes could be added to Nova as such, allocated by the services
 tenant as running a nested VM image type resource stack, and then made
 available to all tenants. Sys admins then could dynamically shift
 resources from VM providing nodes to BareMetal Nodes and back as
 needed.
 
 This allows a user to allocate some raw resources as a group, then
 schedule higher level services to run only in that group, all with the
 existing api.
 
 Just how crazy an idea is this?
 
 FWIW, I don't think it's a crazy idea at all - indeed I mumbled
 something similar a few times in conversation with random people over
 the past few months :)
 
 With the increasing interest in containers, it makes a tonne of sense -
 you provision a number of VMs and now you want to carve them up by
 allocating containers on them. Right now, you'd need to run your own
 instance of Nova for that ... which is far too heavyweight.
 
 It is a little crazy in the sense that it's a tonne of work, though.
 There's not a whole lot of point in discussing it too much until someone
 shows signs of wanting to implement it :)
 
 One problem is how the API would model this nesting, another problem is
 making the scheduler aware that some nodes are only available to the
 tenant which owns them but maybe a bigger problem is the security model
 around allowing a node managed by an untrusted tenant to become a compute node.
 
 Mark.
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync

2013-12-09 Thread Sean Dague
On 12/09/2013 11:38 AM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2013-12-09 08:17:45 -0800:
 On 12/06/2013 05:40 PM, Ben Nemec wrote:
 On 2013-12-06 16:30, Clint Byrum wrote:
 Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:


 On 2013-12-06 15:14, Yuriy Taraday wrote:

 Hello, Sean.

 I get the issue with upgrade path. User doesn't want to update
 config unless one is forced to do so.
 But introducing code that weakens security and let it stay is an
 unconditionally bad idea.
 It looks like we have to weigh two evils: having troubles upgrading
 and lessening security. That's obvious.

 Here are my thoughts on what we can do with it:
 1. I think we should definitely force user to do appropriate
 configuration to let us use secure ways to do locking.
 2. We can wait one release to do so, e.g. issue a deprecation
 warning now and force user to do it the right way later.
 3. If we are going to do 2. we should do it in the service that is
 affected not in the library because library shouldn't track releases
 of an application that uses it. It should do its thing and do it
 right (secure).

 So I would suggest to deal with it in Cinder by importing
 'lock_path' option after parsing configs and issuing a deprecation
 warning and setting it to tempfile.gettempdir() if it is still None.

 This is what Sean's change is doing, but setting lock_path to
 tempfile.gettempdir() is the security concern.

 Yuriy's suggestion is that we should let Cinder override the config
 variable's default with something insecure. Basically only deprecate
 it in Cinder's world, not oslo's. That makes more sense from a library
 standpoint as it keeps the library's expected interface stable.

 Ah, I see the distinction now.  If we get this split off into
 oslo.lockutils (which I believe is the plan), that's probably what we'd
 have to do.


 Since there seems to be plenty of resistance to using /tmp by default,
 here is my proposal:

 1) We make Sean's change to open files in append mode. I think we can
 all agree this is a good thing regardless of any config changes.

 2) Leave lockutils broken in Icehouse if lock_path is not set, as I
 believe Mark suggested earlier. Log an error if we find that
 configuration. Users will be no worse off than they are today, and if
 they're paying attention they can get the fixed lockutils behavior
 immediately.

 Broken how? Broken in that it raises an exception, or broken in that it
 carries a security risk?

 Broken as in external locks don't actually lock.  If we fall back to
 using a local semaphore it might actually be a little better because
 then at least the locks work within a single process, whereas before
 there was no locking whatsoever.

 Right, so broken as in doesn't actually locks, potentially completely
 scrambles the user's data, breaking them forever.

 
 Things I'd like to see OpenStack do in the short term, ranked in ascending
 order of importance:
 
 4) Upgrade smoothly.
 3) Scale.
 2) Help users manage external risks.
 1) Not do what Sean described above.
 
 I mean, how can we even suggest silently destroying integrity?
 
 I suggest merging Sean's patch and putting a warning in the release
 notes that running without setting this will be deprecated in the next
 release. If that is what this is preventing this debate should not have
 happened, and I sincerely apologize for having delayed it. I believe my
 mistake was assuming this was something far more trivial than without
 this patch we destroy users' data.
 
 I thought we were just talking about making upgrades work. :-P

Honestly, I haven't looked at exactly how bad the corruption would be. But
we are locking to handle something around simultaneous db access in
cinder, so I'm going to assume the worst here.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Third-party testing

2013-12-09 Thread Matt Riedemann



On Sunday, December 08, 2013 11:32:50 PM, Yoshihiro Kaneko wrote:

Hi Neutron team,

I'm working on building Third-party testing for Neutron Ryu plugin.
I intend to use Jenkins and gerrit-trigger plugin.

It is required that Third-party testing provides verify vote for
all changes to a plugin/driver's code, and all code submissions
by the jenkins user.
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements

For these requirements, what kind of filter should I set for the
trigger?
It is easy to set a file path of the plugin/driver:
   project: plain:neutron
   branch:  plain:master
   file:path:neutron/plugins/ryu/**
However, this is not enough, because it misses changes to code that the
plugin/driver depends on.
It is difficult to judge which patchsets affect the plugin/driver.
In addition, the gerrit trigger has a file path filter, but there is no
patchset owner filter, so it is not possible to set a trigger for
patchsets submitted by the jenkins user.

Can third-party testing execute tests for all patchsets, including those
which may not affect the plugin/driver?

Thanks,
Kaneko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I can't speak for the Neutron team, but in Nova the requirement is to 
run all patches through the vendor plugin third party CI, not just 
vendor-specific patches.


https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes and logs - 12/09/2013

2013-12-09 Thread Renat Akhmerov
Hi,

Thanks for joining today’s Mistral community meeting. Here are the links to 
meeting minutes and log:

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-09-16.01.html
Log: 
http://eavesdrop.openstack.org/meetings/mistral/2013/mistral.2013-12-09-16.01.log.html

Join us next time to discuss Mistral PoC and Mistral Engine architecture.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][keystone] Keystoneclient tests to tempest

2013-12-09 Thread Brant Knudson
Responses inline.

On Mon, Dec 9, 2013 at 10:07 AM, Sean Dague s...@dague.net wrote:

 On 12/09/2013 10:12 AM, Brant Knudson wrote:
  Monty -
 
  Thanks for doing the work already to get the infrastructure set up.
  Looks like I've got the easy part here. I posted an initial patch that
  has one test from keystone in https://review.openstack.org/#/c/60724/ .
  I hope to be able to move all the tests over unchanged. The tricky part
  is getting all the fixtures set up the same way that keystone does.

 I think a direct port of the keystone fixtures is the wrong approach.
 These really need to act more like the scenario tests that exist over
 there. And if the intent is just a dump of the keystone tests we need to
 step back... because that's not going to get accepted.


The reason I'd like to keep keystone's client tests as they are is that
they provide us with coverage of keystone and keystoneclient functionality.
This doesn't mean they have to stay that way forever, since once they're
moved out of Keystone we can start refactoring them.

An alternative approach is to clean up Keystone's client tests as much as
possible first to make them essentially the scenario tests that tempest
would accept. This would leave the tests in keystone longer than we'd like
since we'd like them out of there ASAP.

I actually think that we should solve #4 first - how you test the thing
 you actually want to test in the gate. Which is about getting
 devstack-gate to setup the world that you want to test. I really think
 the location of the tests all flow from there. Because right now it
 seems like the cart is before the horse.


OK. Let's solve #4. If the tests as they are aren't going to be accepted in
Tempest then we can put them elsewhere (leave them in keystone (just run
differently), move to keystoneclient or a new project). To me this testing
has some similarities to Grenade since that involves testing with multiple
versions too, so I'll look into how Grenade works.

Is testing multiple versions of keystoneclient actually worth it? If the
other projects don't feel the need for this then why does Keystone? It's
actually caught problems so it's proved useful to Keystone, and we're
making changes to the client so this type of testing seems important, but
maybe it's not useful enough to continue to do the multiple version
testing. If we're going to support backwards compatibility we should test
it.

If we put something together to test multiple versions of keystoneclient
would other projects want to use it for their clients?

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][keystone] Keystoneclient tests to tempest

2013-12-09 Thread Jeremy Stanley
On 2013-12-09 11:07:50 -0600 (-0600), Brant Knudson wrote:
[...]
 Is testing multiple versions of keystoneclient actually worth it?
 If the other projects don't feel the need for this then why does
 Keystone? It's actually caught problems so it's proved useful to
 Keystone, and we're making changes to the client so this type of
 testing seems important, but maybe it's not useful enough to
 continue to do the multiple version testing. If we're going to
 support backwards compatibility we should test it.
[...]

Well, at a minimum we should be testing both that the tip of master
for the client works with servers, and also that the tip of
server branches (master, stable/x, stable/y)  work with the latest
released version of the client. There are already plans in motion to
solve this in integration testing for all clients, because not doing
so allows us to break ourselves in unfortunate ways.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync

2013-12-09 Thread Ben Nemec

On 2013-12-09 10:55, Sean Dague wrote:

On 12/09/2013 11:38 AM, Clint Byrum wrote:

Excerpts from Sean Dague's message of 2013-12-09 08:17:45 -0800:

On 12/06/2013 05:40 PM, Ben Nemec wrote:

On 2013-12-06 16:30, Clint Byrum wrote:

Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:



On 2013-12-06 15:14, Yuriy Taraday wrote:


Hello, Sean.

I get the issue with upgrade path. User doesn't want to update

config unless one is forced to do so.

But introducing code that weakens security and let it stay is an

unconditionally bad idea.
It looks like we have to weigh two evils: having troubles 
upgrading

and lessening security. That's obvious.


Here are my thoughts on what we can do with it:
1. I think we should definitely force user to do appropriate

configuration to let us use secure ways to do locking.

2. We can wait one release to do so, e.g. issue a deprecation

warning now and force user to do it the right way later.
3. If we are going to do 2. we should do it in the service that 
is
affected not in the library because library shouldn't track 
releases

of an application that uses it. It should do its thing and do it
right (secure).


So I would suggest to deal with it in Cinder by importing

'lock_path' option after parsing configs and issuing a deprecation
warning and setting it to tempfile.gettempdir() if it is still 
None.


This is what Sean's change is doing, but setting lock_path to
tempfile.gettempdir() is the security concern.


Yuriy's suggestion is that we should let Cinder override the config
variable's default with something insecure. Basically only 
deprecate
it in Cinder's world, not oslo's. That makes more sense from a 
library

standpoint as it keeps the library's expected interface stable.


Ah, I see the distinction now.  If we get this split off into
oslo.lockutils (which I believe is the plan), that's probably what 
we'd

have to do.



Since there seems to be plenty of resistance to using /tmp by 
default,

here is my proposal:

1) We make Sean's change to open files in append mode. I think we 
can

all agree this is a good thing regardless of any config changes.

2) Leave lockutils broken in Icehouse if lock_path is not set, as 
I

believe Mark suggested earlier. Log an error if we find that
configuration. Users will be no worse off than they are today, and 
if

they're paying attention they can get the fixed lockutils behavior
immediately.


Broken how? Broken in that it raises an exception, or broken in 
that it

carries a security risk?


Broken as in external locks don't actually lock.  If we fall back to
using a local semaphore it might actually be a little better because
then at least the locks work within a single process, whereas before
there was no locking whatsoever.


Right, so broken as in doesn't actually locks, potentially 
completely

scrambles the user's data, breaking them forever.



Things I'd like to see OpenStack do in the short term, ranked in 
ascending

order of importance:

4) Upgrade smoothly.
3) Scale.
2) Help users manage external risks.
1) Not do what Sean described above.

I mean, how can we even suggest silently destroying integrity?

I suggest merging Sean's patch and putting a warning in the release
notes that running without setting this will be deprecated in the next
release. If that is what this is preventing this debate should not 
have
happened, and I sincerely apologize for having delayed it. I believe 
my

mistake was assuming this was something far more trivial than without
this patch we destroy users' data.

I thought we were just talking about making upgrades work. :-P


Honestly, I haven't looked exactly how bad the corruption would be. But
we are locking to handle something around simultaneous db access in
cinder, so I'm going to assume the worst here.


Yeah, my understanding is that this doesn't come up much in actual use 
because lock_path is set in most production environments.  Still, 
obviously not cool when your locks don't lock, which is why we made the 
unpleasant change to require lock_path.  It wasn't something we did 
lightly (I even sent something to the list before it merged, although I 
got no responses at the time).


-Ben



Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-09 Thread Fox, Kevin M
I'm thinking more generic:

The cloud provider will provide one or more suballocating images. The one 
Triple O uses to take a bare metal node and make vm's available would be the 
obvious one to make available initially. I think that one should not have a 
security concern since it is already being used in that way safely.

I think a docker based one shouldn't have the safety concern either, since I 
think docker containerizes network resources too? I could be wrong though.

Thanks,
Kevin


From: Keith Basil [kba...@redhat.com]
Sent: Monday, December 09, 2013 8:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources

On Dec 5, 2013, at 8:11 PM, Fox, Kevin M wrote:

 I think the security issue can be handled by not actually giving the 
 underlying resource to the user in the first place.

 So, for example, if I wanted a bare metal node's worth of resource for my own 
 containering, I'd ask for a bare metal node and use a blessed image that 
 contains docker+nova bits that would hook back to the cloud. I wouldn't be 
 able to login to it, but containers started on it would be able to access my 
 tenant's networks. All access to it would have to be through nova 
 suballocations. The bare resource would count against my quotas, but nothing 
 run under it.

So this would be an extremely light weight hypervisor alternative, then?

It's interesting because bare-metal-to-tenant security issues are 
tricky
to overcome.

-k

 Come to think of it, this sounds somewhat similar to what is planned for 
 Neutron service vm's. They count against the user's quota on one level but 
 not all access is directly given to the user. Maybe some of the same 
 implementation bits could be used.

 Thanks,
 Kevin
 
 From: Mark McLoughlin [mar...@redhat.com]
 Sent: Thursday, December 05, 2013 1:53 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][TripleO] Nested resources

 Hi Kevin,

 On Mon, 2013-12-02 at 12:39 -0800, Fox, Kevin M wrote:
 Hi all,

 I just want to run a crazy idea up the flag pole. TripleO has the
 concept of an under and over cloud. In starting to experiment with
 Docker, I see a pattern start to emerge.

 * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I may want to run multiple VM's on it to reduce my own
 cost. Now I have to manage the BareMetal nodes myself or nest
 OpenStack into them.
 * As a User, I may want to allocate a VM. I then want to run multiple
 Docker containers on it to use it more efficiently. Now I have to
 manage the VM's myself or nest OpenStack into them.
 * As a User, I may want to allocate a BareMetal node so that it is
 entirely mine. I then want to run multiple Docker containers on it to
 use it more efficiently. Now I have to manage the BareMetal nodes
 myself or nest OpenStack into them.

 I think this can then be generalized to:
 As a User, I would like to ask for resources of one type (One AZ?),
 and be able to delegate resources back to Nova so that I can use Nova
 to subdivide and give me access to my resources as a different type.
 (As a different AZ?)

 I think this could potentially cover some of the TripleO stuff without
 needing an over/under cloud. For that use case, all the BareMetal
 nodes could be added to Nova as such, allocated by the services
 tenant as running a nested VM image type resource stack, and then made
 available to all tenants. Sys admins then could dynamically shift
 resources from VM providing nodes to BareMetal Nodes and back as
 needed.

 This allows a user to allocate some raw resources as a group, then
 schedule higher level services to run only in that group, all with the
 existing api.

 Just how crazy an idea is this?

 FWIW, I don't think it's a crazy idea at all - indeed I mumbled
 something similar a few times in conversation with random people over
 the past few months :)

 With the increasing interest in containers, it makes a tonne of sense -
 you provision a number of VMs and now you want to carve them up by
 allocating containers on them. Right now, you'd need to run your own
 instance of Nova for that ... which is far too heavyweight.

 It is a little crazy in the sense that it's a tonne of work, though.
 There's not a whole lot of point in discussing it too much until someone
 shows signs of wanting to implement it :)

 One problem is how the API would model this nesting, another problem is
 making the scheduler aware that some nodes are only available to the
 tenant which owns them but maybe a bigger problem is the security model
 around allowing a node managed by an untrusted become a compute node.

 Mark.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Steven Dake

On 12/09/2013 09:41 AM, David Boucha wrote:
On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com 
mailto:mord...@inaugust.com wrote:




On 12/08/2013 07:36 AM, Robert Collins wrote:
 On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
mailto:mord...@inaugust.com wrote:


 I suggested salt because we could very easily make trove and
savana into
 salt masters (if we wanted to) just by having them import salt
library
 and run an api call. When they spin up nodes using heat, we
could easily
 have that to the cert exchange - and the admins of the site
need not
 know _anything_ about salt, puppet or chef - only about trove
or savana.

 Are salt masters multi-master / HA safe?

 E.g. if I've deployed 5 savanna API servers to handle load, and they
 all do this 'just import', does that work?

 If not, and we have to have one special one, what happens when it
 fails / is redeployed?

Yes. You can have multiple salt masters.

 Can salt minions affect each other? Could one pretend to be a
master,
 or snoop requests/responses to another minion?

Yes and no. By default no - and this is protected by key
encryption and
whatnot. They can affect each other if you choose to explicitly grant
them the ability to. That is - you can give a minion an acl to
allow it
inject specific command requests back up into the master. We use
this in
the infra systems to let a jenkins slave send a signal to our salt
system to trigger a puppet run. That's all that slave can do though -
send the signal that the puppet run needs to happen.

However - I don't think we'd really want to use that in this case,
so I
think they answer you're looking for is no.

 Is salt limited: is it possible to assert that we *cannot* run
 arbitrary code over salt?

In as much as it is possible to assert that about any piece of
software
(bugs, of course, blah blah) But the messages that salt sends to a
minion are run this thing that you have a local definition for
rather
than here, have some python and run it

Monty



Salt was originally designed to be a unified agent for a system like 
openstack. In fact, many people use it for this purpose right now.


I discussed this with our team management and this is something 
SaltStack wants to support.


Are there any specifics things that the salt minion lacks right now to 
support this use case?




David,

If my parsing of the Salt nomenclature is correct, Salt provides a master (e.g. a server) and minions (e.g. agents that connect to the Salt server).  The Salt server tells the minions what to do.


This is not desirable for a unified agent (at least in the case of Heat).

The bar is very very very high for introducing new *mandatory* *server* 
dependencies into OpenStack.  Requiring a salt master (or a puppet 
master, etc) in my view is a non-starter for a unified guest agent 
proposal.  Now if a heat user wants to use puppet, and can provide a 
puppet master in their cloud environment, that is fine, as long as it is 
optional.


A guest agent should have the following properties:
* minimal library dependency chain
* no third-party server dependencies
* packaged in relevant cloudy distributions

In terms of features:
* run shell commands
* install files (with selinux properties as well)
* create users and groups (with selinux properties as well)
* install packages via yum, apt-get, rpm, pypi
* start and enable system services for systemd or sysvinit
* Install and unpack source tarballs
* run scripts
* Allow grouping, selection, and ordering of all of the above operations

Agents are a huge pain to maintain and package.  It took a huge amount 
of willpower to get cloud-init standardized across the various 
distributions.  We have managed to get heat-cfntools (the heat agent) 
into every distribution at this point and this was a significant amount 
of work.  We don't want to keep repeating this process for each 
OpenStack project!


Regards,
-steve



--
Dave Boucha  |  Sr. Engineer

Join us at SaltConf, Jan. 28-30, 2014 in Salt Lake City. 
www.saltconf.com http://www.saltconf.com/



5272 South College Drive, Suite 301 | Murray, UT 84123
*office*801-305-3563
d...@saltstack.com mailto:d...@saltstack.com | www.saltstack.com 
http://saltstack.com/





[openstack-dev] [marconi] Team meeting agenda for tomorrow @ 1500 UTC

2013-12-09 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-alt
on Tuesdays, 1500 
UTC (http://www.timeanddate.com/worldclock/fixedtime.html?hour=15min=0sec=0).

The next meeting is Tomorrow, Dec. 10. Everyone is welcome, but please
take a minute to review the wiki before attending for the first time:

http://wiki.openstack.org/marconi

This week we will be cleaning up our bps and bugs, making sure we have
everything scheduled appropriately.


Proposed Agenda:

  *   Review actions from last time
  *   Review Graduation BPs/Bugs
  *   Updates on bugs
  *   Updates on blueprints
  *   Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and
note your IRC name so we can call on you during the meeting:

http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths



Re: [openstack-dev] [qa][keystone] Keystoneclient tests to tempest

2013-12-09 Thread Brant Knudson
On Mon, Dec 9, 2013 at 11:18 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2013-12-09 11:07:50 -0600 (-0600), Brant Knudson wrote:
 [...]
  Is testing multiple versions of keystoneclient actually worth it?
  If the other projects don't feel the need for this then why does
  Keystone? It's actually caught problems so it's proved useful to
  Keystone, and we're making changes to the client so this type of
  testing seems important, but maybe it's not useful enough to
  continue to do the multiple version testing. If we're going to
  support backwards compatibility we should test it.
 [...]

 Well, at a minimum we should be testing both that the tip of master
 for the client works with servers, and also that the tip of
 server branches (master, stable/x, stable/y)  work with the latest
 released version of the client. There are already plans in motion to
 solve this in integration testing for all clients, because not doing
 so allows us to break ourselves in unfortunate ways.


This isn't the testing that Keystone's client tests do. Keystone's client
tests verify that the client doesn't change in an incompatible way. For
example, we've got a test that verifies that you can do
client.tenants.list() ; so if someone decides to change tenants to
projects then this test will fail.
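For example, something like this stripped-down sketch (hypothetical credentials and endpoint; not the literal test from the Keystone tree):

    # Hypothetical compatibility check: it pins the public client interface
    # (client.tenants.list), so renaming 'tenants' to 'projects' fails loudly.
    # The credentials and auth_url below are placeholders.
    from keystoneclient.v2_0 import client as keystone_client

    def test_tenants_list_interface_is_stable():
        client = keystone_client.Client(username='admin',
                                        password='secret',
                                        tenant_name='admin',
                                        auth_url='http://localhost:5000/v2.0')
        tenants = client.tenants.list()
        assert isinstance(tenants, list)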

I don't know of any specific testing that verifies that the client works
against older (stable/*) servers. Maybe it happens when you submit a change
to stable/ later.

- Brant


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-09 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2013-12-09 09:34:06 -0800:
 I'm thinking more generic:
 
 The cloud provider will provide one or more suballocating images. The one 
 Triple O uses to take a bare metal node and make vm's available would be the 
 obvious one to make available initially. I think that one should not have a 
 security concern since it is already being used in that way safely.

I like where you're going with this, in that the cloud should eventually
become self-aware enough to be able to provision the baremetal resources
it has and spin nova up on them. I do think that is quite far out. Right
now, we have two novas: an undercloud nova which owns all the baremetal,
and an overcloud nova which owns all the VMs. This is definitely nested,
but there is a hard line between the two.

For many people, that hard line is a feature. For others, it is a bug.  :)

 
 I think a docker based one shouldn't have the safety concern either, since I 
 think docker containerizes network resources too? I could be wrong though.
 

The baremetal-to-tenant issues have little to do with networking. They
are firmware problems. Root just has too much power on baremetal.
Somebody should make some hardware which defends against that. For now
the best thing is virtualization extensions.

Docker isn't really going to fix that. The containerization that is
available is good, but does not do nearly as much as true virtualization
does to isolate the user from the hardware. There's still a single
kernel there, and thus, if you can trick that kernel, you can own the
whole box. I've heard it described as "a little better than chroot".
AFAIK, the people using containers for multi-tenant are doing so by
leveraging kernel security modules heavily.



Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Zane Bitter

On 09/12/13 06:31, Steven Hardy wrote:

Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/nameshame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.


+1

Fun fact: due to the quirks of how Gerrit produces the JSON data dump, 
it's not actually possible for the reviewstats tools to count +0 
reviews. So, for example, one can juice one's review stats by actively 
obstructing someone else's work (voting -1) when a friendly comment 
would have sufficed. This is one of many ways in which metrics offer 
perverse incentives.


Statistics can be useful. They can be particularly useful *in the 
aggregate*. But as soon as you add a closed feedback loop you're no 
longer measuring what you originally thought - mostly you're just 
measuring the gain of the feedback loop.



So if you're aiming for heat-core nomination, here's my personal wish-list,
but hopefully others can provide their input and we can update the wiki with
the resulting requirements/guidelines:

- Make your reviews high-quality.  Focus on spotting logical errors,
   reducing duplication, consistency with existing interfaces, opportunities
   for reuse/simplification etc.  If every review you do is +1, or -1 for a
   trivial/cosmetic issue, you are not making a strong case for -core IMHO.

- Send patches.  Some folks argue that -core membership is only about
   reviews, I disagree - There are many aspects of reviews which require
   deep knowledge of the code, e.g spotting structural issues, logical
   errors caused by interaction with code not modified by the patch,
   effective use of test infrastructure, etc etc.  This deep knowledge comes
   from writing code, not only reviewing it.  This also gives us a way to
   verify your understanding and alignment with our stylistic conventions.


I agree, though I have also heard a lot of folks say it should be just 
about the reviews. Of course the job of core is reviewing - to make sure 
good changes get in and bad changes get turned into good changes. But 
there are few better ways to acquire and demonstrate the familiarity 
with the codebase and conventions of the project necessary to be an 
effective reviewer than to submit patches. It makes no sense to blind 
ourselves to code contributions when considering whom to add to core - a 
single patch contains far more information than a thousand "+1, no comment" or "-1, typo in commit message" reviews.



- Fix bugs.  Related to the above, help us fix real problems by testing,
   reporting bugs, and fixing them, or take an existing bug and post a patch
   fixing it.  Ask an existing team member to direct you if you're not sure
   which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
   sometimes worthwhile, but make sure that's not all you do, as we need
   -core folk who can find, report, fix and review real user-impacting
   problems (not just new features).  This is also a great way to build
   trust and knowledge if you're aiming to contribute features to Heat.

- Engage in discussions related to the project (here on the ML, helping
   users on the general list, in #heat on Freenode, attend our weekly
   meeting if it's not an anti-social time in your TZ)

Anyone have any more thoughts to add here?


+1

The way to be recognised in an Open Source project should be to 
consistently add value to the community. Concentrate on that and the 
stats will look after themselves.


cheers,
Zane.



Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-09 Thread Gordon Sim

On 12/09/2013 04:10 PM, Russell Bryant wrote:

From looking it appears that RabbitMQ's support is via an experimental
plugin.  I don't know any more about it.  Has anyone looked at it in
detail?


I believe initial support was added in 3.1.0: 
http://www.rabbitmq.com/release-notes/README-3.1.0.txt


I have certainly successfully tested basic interaction with RabbitMQ 
over 1.0.
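For reference, by "basic interaction" I mean something like this small Messenger-based smoke test (the broker URL and queue name are placeholders):

    # Minimal AMQP 1.0 smoke test using Proton's Messenger API; the address
    # below is a placeholder for a 1.0-enabled broker and queue.
    from proton import Message, Messenger

    messenger = Messenger()
    messenger.start()

    message = Message()
    message.address = "amqp://localhost/queue_name"
    message.body = u"hello over AMQP 1.0"

    messenger.put(message)
    messenger.send()
    messenger.stop()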



 https://www.rabbitmq.com/specification.html

As I understand it, Qpid supports it, in that it's a completely new
implementation as a library (Proton) under the Qpid project umbrella.
  There's also a message router called Dispatch.  This is not to be
confused with the existing Qpid libraries, or the existing Qpid broker
(qpidd).


Yes, proton is a library that encapsulates the AMQP 1.0 encoding and 
protocol rules. It is used in the existing native broker (i.e. qpidd) to 
offer 1.0 support (as well as the qpid::messaging c++ client library).


In addition there is as you mention the 'Dispatch Router'. This is an 
alternative architecture for an intermediary to address some of the 
issues that can arise with qpidd or other brokers (distributed in nature 
to scale better, end-to-end reliability rather than store and forward etc).


So the Qpid project offers both new components as well as 1.0 support 
and smooth transition for existing components. (Disclosure, I'm a 
developer in the Qpid community also).


(There are of course other implementations also e.g., ActiveMQ, 
ApolloMQ, HornetQ, Microsoft ServiceBus, SwiftMQ.)



 http://qpid.apache.org/proton/
 http://qpid.apache.org/components/dispatch-router/



[...]


All of this sounds fine to me.  Surely a single driver for multiple
systems is an improvement.  What's not really mentioned though is why
we should care about AMQP 1.0 beyond that.  Why is it architecturally
superior?  It has been discussed on this list some before, but I
figure it's worth re-visiting if some code is going to show up soon.


Personally I think there is benefit to having a standardised, open 
wire-protocol as the basis for communication in systems like OpenStack, 
rather than having the driver tied to a particular implementation 
throughout (and having more of the key details of the interaction as 
details of the implementation of the driver). The bytes over the wire 
are another level of interface and having that tightly specified can be 
valuable.


Having one driver that still offers choice with regard to the intermediaries used (I avoid the term 'broker' in case it implies particular approaches) is, I think, an advantage. Hypothetically, for example, it would have been an advantage had the same driver been usable against both RabbitMQ and Qpid previously. The (bumpy!) evolution of AMQP meant that wasn't quite possible, since they both spoke different versions of the early protocol. AMQP 1.0 might in the future avoid needing new drivers in such cases, however, making it easier to adopt alternative or emerging solutions.


AMQP is not the only messaging protocol of course, However its general 
purpose nature and the fact that both RabbitMQ and Qpid really came 
about through AMQP makes it a reasonable choice.



In the case of Nova (and others that followed Nova's messaging
patterns), I firmly believe that for scaling reasons, we need to move
toward it becoming the norm to use peer-to-peer messaging for most
things.  For example, the API and conductor services should be talking
directly to compute nodes instead of through a broker.


Is scale the only reason for preferring direct communication? I don't 
think an intermediary based solution _necessarily_ scales less 
effectively (providing it is distributed in nature, which for example is 
one of the central aims of the dispatch router in Qpid).


That's not to argue that peer-to-peer shouldn't be used, just trying to 
understand all the factors.


One other pattern that can benefit from intermediated message flow is in 
load balancing. If the processing entities are effectively 'pulling' 
messages, this can more naturally balance the load according to capacity 
than when the producer of the workload is trying to determine the best 
balance.



 The exception
to that is cases where we use a publish-subscribe model, and a broker
serves that really well.  Notifications and notification consumers
(such as Ceilometer) are the prime example.


The 'fanout' RPC cast would perhaps be another?


In terms of existing messaging drivers, you could accomplish this with
a combination of both RabbitMQ or Qpid for brokered messaging and
ZeroMQ for the direct messaging cases.  It would require only a small
amount of additional code to allow you to select a separate driver for
each case.

Based on my understanding, AMQP 1.0 could be used for both of these
patterns.  It seems ideal long term to be able to use the same
protocol for everything we need.


That is certainly true. AMQP 1.0 is fully symmetric so it can be used 
directly peer-to-peer as well as 

Re: [openstack-dev] [Solum] Second working group meeting on language packs today

2013-12-09 Thread Clayton Coleman
- Original Message -
 - Original Message -
  Hi,
  
  We will hold our second Git Integration working group meeting on IRC in
  #solum on Monday, December 9, 2013 1700 UTC / 0900 PST.
  
  Agenda for today's meeting:
  * Administrative:
  * Decide whether to continue this meeting at the same time in 
  January
  * Topics:
  * Determine minimal set of milestone-1 functionality
  * Get volunteers for milestone-1 example language packs
 
 Adding:
 * Discuss
 https://wiki.openstack.org/wiki/Solum/specify-lang-pack-design
 
* Discuss names
  * General discussion
  

Extremely productive discussion all - notes are here [1] and I have a few 
action items for creating child blueprint specs.  gokrokve and devkulkarni will 
be iterating on the JSON API for Dev's spec [2] via an Etherpad, reach out to 
them if you are interested in continuing to discuss [2].

[1] http://irclogs.solum.io/2013/solum.2013-12-09-17.01.html
[2] https://wiki.openstack.org/wiki/Solum/specify-lang-pack-design



Re: [openstack-dev] [Neutron][LBaaS] L7 model - an alternative

2013-12-09 Thread Avishay Balderman
Sorry for the broken link
Here is a better one:

https://docs.google.com/drawings/d/119JV_dG_odWVVKTcn51qyRh3rW-253uUqfcg9CFQDtg/pub?w=960h=720

Sent from my iPhone

On 8 Dec 2013, at 17:31, Avishay Balderman avish...@radware.com wrote:

Hi
I was thinking about a different way for L7 modeling.
The key points are:
- The Rule has no action attribute
- A Policy is a collection of rules
- Association keeps a reference to a Vip and to a Policy
- Association holds the action (what to do if the Policy returns True)
- Association holds (optional) the Pool ID. When the action is redirection to a 
pool
See:
https://docs.google.com/drawings/d/119JV_dG_odWVVKTcn51qyRh3rW-253uUqfcg9CFQDtg/edit?usp=sharing
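To make the model concrete, here is a rough sketch of the objects as plain Python classes (the attribute names are my assumptions from the description above, not from any existing Neutron code):

    # Rough sketch of the proposed L7 model; attribute names are assumptions.
    class L7Rule(object):
        def __init__(self, rule_id, rule_type, compare_type, value):
            # Note: no 'action' attribute on the rule itself.
            self.id = rule_id
            self.type = rule_type             # e.g. header, path, ...
            self.compare_type = compare_type  # e.g. equals, starts_with, ...
            self.value = value

    class L7Policy(object):
        def __init__(self, policy_id, rules=None):
            self.id = policy_id
            self.rules = rules or []          # a policy is a collection of rules

    class L7VipPolicyAssociation(object):
        def __init__(self, vip_id, policy_id, action, pool_id=None):
            self.vip_id = vip_id
            self.policy_id = policy_id
            self.action = action              # what to do if the policy matches
            self.pool_id = pool_id            # only set when redirecting to a pool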

Please let me know what do you think about this model.

Thanks

Avishay




Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Clint Byrum
Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
 On 12/09/2013 09:41 AM, David Boucha wrote:
  On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com 
  mailto:mord...@inaugust.com wrote:
 
 
 
  On 12/08/2013 07:36 AM, Robert Collins wrote:
   On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
  mailto:mord...@inaugust.com wrote:
  
  
   I suggested salt because we could very easily make trove and
  savana into
   salt masters (if we wanted to) just by having them import salt
  library
   and run an api call. When they spin up nodes using heat, we
  could easily
   have that to the cert exchange - and the admins of the site
  need not
   know _anything_ about salt, puppet or chef - only about trove
  or savana.
  
   Are salt masters multi-master / HA safe?
  
   E.g. if I've deployed 5 savanna API servers to handle load, and they
   all do this 'just import', does that work?
  
   If not, and we have to have one special one, what happens when it
   fails / is redeployed?
 
  Yes. You can have multiple salt masters.
 
   Can salt minions affect each other? Could one pretend to be a
  master,
   or snoop requests/responses to another minion?
 
  Yes and no. By default no - and this is protected by key
  encryption and
  whatnot. They can affect each other if you choose to explicitly grant
  them the ability to. That is - you can give a minion an acl to
  allow it
  inject specific command requests back up into the master. We use
  this in
  the infra systems to let a jenkins slave send a signal to our salt
  system to trigger a puppet run. That's all that slave can do though -
  send the signal that the puppet run needs to happen.
 
  However - I don't think we'd really want to use that in this case,
  so I
  think they answer you're looking for is no.
 
   Is salt limited: is it possible to assert that we *cannot* run
   arbitrary code over salt?
 
  In as much as it is possible to assert that about any piece of
  software
  (bugs, of course, blah blah) But the messages that salt sends to a
  minion are run this thing that you have a local definition for
  rather
  than here, have some python and run it
 
  Monty
 
 
 
  Salt was originally designed to be a unified agent for a system like 
  openstack. In fact, many people use it for this purpose right now.
 
  I discussed this with our team management and this is something 
  SaltStack wants to support.
 
  Are there any specifics things that the salt minion lacks right now to 
  support this use case?
 
 
 David,
 
 If I am correct of my parsing of the salt nomenclature, Salt provides a 
 Master (eg a server) and minions (eg agents that connect to the salt 
 server).  The salt server tells the minions what to do.
 
 This is not desirable for a unified agent (atleast in the case of Heat).
 
 The bar is very very very high for introducing new *mandatory* *server* 
 dependencies into OpenStack.  Requiring a salt master (or a puppet 
 master, etc) in my view is a non-starter for a unified guest agent 
 proposal.  Now if a heat user wants to use puppet, and can provide a 
 puppet master in their cloud environment, that is fine, as long as it is 
 optional.
 

What if we taught Heat to speak salt-master-ese? AFAIK it is basically an RPC system. I think right now it is 0mq, so it would be relatively straightforward to just have Heat start talking to the agents in 0mq.
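As a back-of-the-envelope sketch with plain pyzmq (the endpoint and payload shape are invented here; the real salt wire protocol adds auth/crypto and msgpack framing):

    # Sketch only: a Heat-side helper doing one REQ/REP round trip to an agent
    # listening on a 0mq REP socket. Endpoint and payload format are made up.
    import zmq

    def call_agent(agent_endpoint, fun, args=None):
        context = zmq.Context()
        socket = context.socket(zmq.REQ)
        socket.connect(agent_endpoint)              # e.g. "tcp://10.0.0.5:5555"
        socket.send_json({'fun': fun, 'arg': args or []})
        reply = socket.recv_json()
        socket.close()
        context.term()
        return reply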

 A guest agent should have the following properties:
 * minimal library dependency chain
 * no third-party server dependencies
 * packaged in relevant cloudy distributions
 

That last one only matters if the distributions won't add things like agents to their images post-release. I am pretty sure "work well in OpenStack" is important for server distributions, and thus this is at least something we don't have to freak out about too much.

 In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations
 

All of those things are general purpose low level system configuration
features. None of them will be needed for Trove or Savanna. They need
to do higher level things like run a Hadoop job or create a MySQL user.

 Agents are a huge pain to maintain and package.  It took a huge amount 
 of willpower to get cloud-init standardized across the various 
 distributions.  We have managed to get heat-cfntools (the heat agent) 
 into every distribution at this point and this was a significant amount 
 of work.  We don't want to keep repeating this 

Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Pitucha, Stanislaw Izaak
 If I am correct of my parsing of the salt nomenclature, Salt provides a
Master (eg a server) and minions (eg agents that connect to the salt
server).  The salt server tells the minions what to do.

Almost - salt can use master, but it can also use the local filesystem (or
other providers of data). For the basic scenarios, salt master behaves
almost like a file server of the state files that describe what to do. When
using only local filesystem, you can run without a master.

 In terms of features:

Not sure about the properties (it is pretty minimal in terms of dependencies
on ubuntu/debian at least), but it can do most of the things from the
feature list. Selinux labels are missing
(https://github.com/saltstack/salt/issues/1349) and unpacking/installing
source tarballs doesn't go that well with declarative descriptions style IMO
(but is definitely possible). 

Regards,
Stanisław Pitucha
Cloud Services 
Hewlett Packard





Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Kurt Griffiths
This list of features makes me very nervous from a security standpoint. Are we 
talking about giving an agent an arbitrary shell command or file to install, 
and it goes and does that, or are we simply triggering a preconfigured action 
(at the time the agent itself was installed)?

From: Steven Dake sd...@redhat.commailto:sd...@redhat.com
Reply-To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Monday, December 9, 2013 at 11:41 AM
To: OpenStack Dev 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Unified Guest Agent proposal

In terms of features:
* run shell commands
* install files (with selinux properties as well)
* create users and groups (with selinux properties as well)
* install packages via yum, apt-get, rpm, pypi
* start and enable system services for systemd or sysvinit
* Install and unpack source tarballs
* run scripts
* Allow grouping, selection, and ordering of all of the above operations


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread David Boucha
On Mon, Dec 9, 2013 at 10:41 AM, Steven Dake sd...@redhat.com wrote:

  On 12/09/2013 09:41 AM, David Boucha wrote:

  On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.comwrote:



 On 12/08/2013 07:36 AM, Robert Collins wrote:
  On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com wrote:
 
 
  I suggested salt because we could very easily make trove and savana
 into
  salt masters (if we wanted to) just by having them import salt library
  and run an api call. When they spin up nodes using heat, we could
 easily
  have that to the cert exchange - and the admins of the site need not
  know _anything_ about salt, puppet or chef - only about trove or
 savana.
 
  Are salt masters multi-master / HA safe?
 
  E.g. if I've deployed 5 savanna API servers to handle load, and they
  all do this 'just import', does that work?
 
  If not, and we have to have one special one, what happens when it
  fails / is redeployed?

  Yes. You can have multiple salt masters.

  Can salt minions affect each other? Could one pretend to be a master,
  or snoop requests/responses to another minion?

  Yes and no. By default no - and this is protected by key encryption and
 whatnot. They can affect each other if you choose to explicitly grant
 them the ability to. That is - you can give a minion an acl to allow it
 inject specific command requests back up into the master. We use this in
 the infra systems to let a jenkins slave send a signal to our salt
 system to trigger a puppet run. That's all that slave can do though -
 send the signal that the puppet run needs to happen.

 However - I don't think we'd really want to use that in this case, so I
 think they answer you're looking for is no.

  Is salt limited: is it possible to assert that we *cannot* run
  arbitrary code over salt?

  In as much as it is possible to assert that about any piece of software
 (bugs, of course, blah blah) But the messages that salt sends to a
 minion are run this thing that you have a local definition for rather
 than here, have some python and run it

 Monty



  Salt was originally designed to be a unified agent for a system like
 openstack. In fact, many people use it for this purpose right now.

  I discussed this with our team management and this is something
 SaltStack wants to support.

  Are there any specifics things that the salt minion lacks right now to
 support this use case?


 David,

 If I am correct of my parsing of the salt nomenclature, Salt provides a
 Master (eg a server) and minions (eg agents that connect to the salt
 server).  The salt server tells the minions what to do.


That is the default setup.  The salt-minion can also run in standalone mode
without a master.


 This is not desirable for a unified agent (atleast in the case of Heat).

 The bar is very very very high for introducing new *mandatory* *server*
 dependencies into OpenStack.  Requiring a salt master (or a puppet master,
 etc) in my view is a non-starter for a unified guest agent proposal.  Now
 if a heat user wants to use puppet, and can provide a puppet master in
 their cloud environment, that is fine, as long as it is optional.

 A guest agent should have the following properties:
 * minimal library dependency chain


Salt only has a few dependencies

 * no third-party server dependencies


As mentioned above, the salt-minion can run without a salt master in
standalone mode

 * packaged in relevant cloudy distributions


The Salt Minion is packaged for all major (and many smaller) distributions.
RHEL/EPEL/Debian/Ubuntu/Gentoo/FreeBSD/Arch/MacOS
There is also a Windows installer.


 In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations


Salt-Minion excels at all the above



 Agents are a huge pain to maintain and package.  It took a huge amount of
 willpower to get cloud-init standardized across the various distributions.
 We have managed to get heat-cfntools (the heat agent) into every
 distribution at this point and this was a significant amount of work.  We
 don't want to keep repeating this process for each OpenStack project!


I agree. It's a lot of work. The SaltStack organization has already done
the work to package for all these distributions and maintains the packages.



 Regards,
 -steve




Regards,

Dave


[openstack-dev] [infra] Meeting Tuesday December 10th at 19:00 UTC

2013-12-09 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday December 10th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread David Boucha
On Mon, Dec 9, 2013 at 11:19 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
  On 12/09/2013 09:41 AM, David Boucha wrote:
   On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
  
  
  
   On 12/08/2013 07:36 AM, Robert Collins wrote:
On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
   
   
I suggested salt because we could very easily make trove and
   savana into
salt masters (if we wanted to) just by having them import salt
   library
and run an api call. When they spin up nodes using heat, we
   could easily
have that to the cert exchange - and the admins of the site
   need not
know _anything_ about salt, puppet or chef - only about trove
   or savana.
   
Are salt masters multi-master / HA safe?
   
E.g. if I've deployed 5 savanna API servers to handle load, and
 they
all do this 'just import', does that work?
   
If not, and we have to have one special one, what happens when it
fails / is redeployed?
  
   Yes. You can have multiple salt masters.
  
Can salt minions affect each other? Could one pretend to be a
   master,
or snoop requests/responses to another minion?
  
   Yes and no. By default no - and this is protected by key
   encryption and
   whatnot. They can affect each other if you choose to explicitly
 grant
   them the ability to. That is - you can give a minion an acl to
   allow it
   inject specific command requests back up into the master. We use
   this in
   the infra systems to let a jenkins slave send a signal to our salt
   system to trigger a puppet run. That's all that slave can do
 though -
   send the signal that the puppet run needs to happen.
  
   However - I don't think we'd really want to use that in this case,
   so I
   think they answer you're looking for is no.
  
Is salt limited: is it possible to assert that we *cannot* run
arbitrary code over salt?
  
   In as much as it is possible to assert that about any piece of
   software
   (bugs, of course, blah blah) But the messages that salt sends to a
   minion are run this thing that you have a local definition for
   rather
   than here, have some python and run it
  
   Monty
  
  
  
   Salt was originally designed to be a unified agent for a system like
   openstack. In fact, many people use it for this purpose right now.
  
   I discussed this with our team management and this is something
   SaltStack wants to support.
  
   Are there any specifics things that the salt minion lacks right now to
   support this use case?
  
 
  David,
 
  If I am correct of my parsing of the salt nomenclature, Salt provides a
  Master (eg a server) and minions (eg agents that connect to the salt
  server).  The salt server tells the minions what to do.
 
  This is not desirable for a unified agent (atleast in the case of Heat).
 
  The bar is very very very high for introducing new *mandatory* *server*
  dependencies into OpenStack.  Requiring a salt master (or a puppet
  master, etc) in my view is a non-starter for a unified guest agent
  proposal.  Now if a heat user wants to use puppet, and can provide a
  puppet master in their cloud environment, that is fine, as long as it is
  optional.
 

 What if we taught Heat to speak salt-master-ese? AFAIK it is basically
 an RPC system. I think right now it is 0mq, so it would be relatively
 straight forward to just have Heat start talking to the agents in 0mq.

  A guest agent should have the following properties:
  * minimal library dependency chain
  * no third-party server dependencies
  * packaged in relevant cloudy distributions
 

 That last one only matters if the distributions won't add things like
 agents to their images post-release. I am pretty sure work well in
 OpenStack is important for server distributions and thus this is at
 least something we don't have to freak out about too much.

  In terms of features:
  * run shell commands
  * install files (with selinux properties as well)
  * create users and groups (with selinux properties as well)
  * install packages via yum, apt-get, rpm, pypi
  * start and enable system services for systemd or sysvinit
  * Install and unpack source tarballs
  * run scripts
  * Allow grouping, selection, and ordering of all of the above operations
 

 All of those things are general purpose low level system configuration
 features. None of them will be needed for Trove or Savanna. They need
 to do higher level things like run a Hadoop job or create a MySQL user.

  Agents are a huge pain to maintain and package.  It took a huge amount
  of willpower to get cloud-init standardized across the various
  distributions.  We have 

Re: [openstack-dev] [ironic][qa] How will ironic tests run in tempest?

2013-12-09 Thread Devananda van der Veen
On Fri, Dec 6, 2013 at 2:13 PM, Clark Boylan clark.boy...@gmail.com wrote:

 On Fri, Dec 6, 2013 at 1:53 PM, David Kranz dkr...@redhat.com wrote:
  It's great that tempest tests for ironic have been submitted! I was
  reviewing https://review.openstack.org/#/c/48109/ and noticed that the
 tests
  do not actually run. They are skipped because baremetal is not enabled.
 This
  is not terribly surprising but we have had a policy in tempest to only
 merge
  code that has demonstrated that it works. For services that cannot run in
  the single-vm environment of the upstream gate we said there could be a
  system running somewhere that would run them and report a result to
 gerrit.
  Is there a plan for this, or to make an exception for ironic?
 
   -David
 

 There is a change[0] to openstack-infra/config to add experimental
 tempest jobs to test ironic. I think that change is close to being
 ready, but I need to give it time for a proper review. Once in that
 will allow you to test 48109 (in theory, not sure if all the bits will
 just work). I don't think these tests fall under the cannot run in a
 single vm environment umbrella, we should be able to test the
 baremetal code via the pxe booting of VMs within the single VM
 environment.

 [0] https://review.openstack.org/#/c/53917/


 Clark


We can test the ironic services, database, and the driver interfaces by
using our fake driver within a single devstack VM today (I'm not sure the
exercises for all of this have been written yet, but it's practical to test
it). OTOH, I don't believe we can test a PXE deploy within a single VM
today, and need to resume discussions with infra about this.

There are some other aspects of Ironic (IPMI, SOL access, any
vendor-specific drivers) which we'll need real hardware to test because
they can't effectively be virtualized. TripleO should cover some (much?) of
those needs, once they are able to switch to using Ironic instead of
nova-baremetal.

-Devananda


[openstack-dev] Retiring reverify no bug

2013-12-09 Thread James E. Blair
Hi,

On Wednesday December 11, 2013 we will remove the ability to use
reverify no bug to re-trigger gate runs for changes that have failed
tests.

This was previously discussed[1] on this list.  There are a few key
things to keep in mind:

* This only applies to reverify, not recheck.  That is, it only
  affects the gate pipeline, not the check pipeline.  You can still use
  recheck no bug to make sure that your patch still works.

* Core reviewers can still resubmit a change to the queue by leaving
  another Approved vote.  Please don't abuse this to bypass the intent
  of this change: to help identify and close gate-blocking bugs.

* You may still use reverify bug # to re-enqueue if there is a bug
  report for a failure, and of course you are encouraged to file a bug
  report if there is not.  Elastic-recheck is doing a great job of
  indicating which bugs might have caused a failure.

As discussed in the previous thread, the goal is to prevent new
transient bugs from landing in code by ensuring that if a change fails a
gate test that it is because of a known bug, and not because it's
actually introducing a bug, so please do your part to help in this
effort.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/020280.html

-Jim



[openstack-dev] [Nova] Design proposals for blueprints to record scheduler information

2013-12-09 Thread Qiu Yu
Hi, ALL

Recently I've been working on two blueprints[1][2], both of which involve recording scheduling information, and I would like to hear some comments on several design choices.

Problem Statement
--
* A NoValidHost exception might mask the real failure reason when spinning up an instance.

Consider the following event sequence: run_instance on host1 fails to spin up an instance due to a port allocation failure in Neutron. The request is cast back to the scheduler to pick the next available host. It fails again on host2 for the same port allocation error. After the maximum of 3 retries, the instance is set to ERROR state with a NoValidHost exception, and there's no easy way to find out what really went wrong.

* Current scheduling information is recorded in several different log items, which are difficult to look up when debugging.

Design Proposal
--
1. Blueprint internal-scheduler[1] will try to address problem #1. After the conductor retrieves the selected destination hosts from the scheduler, it will create a scheduler_records_allocations item in the database for each instance/host allocation.

Design choices:
a) Correlate these scheduler_records_allocations with the 'create' instance action, and generate a combined view with instance-action events.
b) Add a separate new API to retrieve this information.

I prefer choice #a, because instance action events fit such a use case perfectly, and the allocation records will supplement the necessary information when viewing the 'create' action events of an instance.

Thoughts?

NOTE: Please find the following chart in link[3], in case of any
format/display issue.

  scheduler_records (one record per scheduler run):
      scheduler_record_id: 1210
      user_id: 'u_fakeid'
      project_id: 'p_fakeid'
      request_id: 'req-xxx'
      instance_uuids: ['inst1_uuid', 'inst2_uuid']
      request_spec: {...}
      filter_properties: {...}
      scheduler_records_allocations: [9001, 9002]
      start_time: ...
      finish_time: ...

  scheduler_records_allocations (one record per instance/host allocation attempt):
      - allocation_id: 9001, scheduler_record_id: 1210, instance_uuid: inst1_uuid (instance1),
        host: host1, weight: 197.0, result: Failed, reason: 'No more IP addresses'
      - allocation_id: 9002, scheduler_record_id: 1210, instance_uuid: inst2_uuid (instance2),
        host: host2, weight: 128.0, result: Success, reason: ''
      - allocation_id: 9003, scheduler_record_id: 1210, instance_uuid: inst1_uuid (instance1),
        host: host2, weight: 64.0, result: Failed, reason: 'No more IP addresses'
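For reference, the two tables could be declared roughly like this (an illustrative SQLAlchemy sketch based only on the fields in the chart above; column names and types are not final, and this is not an actual Nova model):

    # Illustrative SQLAlchemy sketch of the two proposed tables, based only on
    # the fields shown in the chart above.
    from sqlalchemy import (Column, DateTime, Float, ForeignKey, Integer,
                            String, Text)
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SchedulerRecord(Base):
        __tablename__ = 'scheduler_records'
        scheduler_record_id = Column(Integer, primary_key=True)
        user_id = Column(String(255))
        project_id = Column(String(255))
        request_id = Column(String(255))
        request_spec = Column(Text)        # serialized JSON
        filter_properties = Column(Text)   # serialized JSON
        start_time = Column(DateTime)
        finish_time = Column(DateTime)

    class SchedulerRecordAllocation(Base):
        __tablename__ = 'scheduler_records_allocations'
        allocation_id = Column(Integer, primary_key=True)
        scheduler_record_id = Column(
            Integer, ForeignKey('scheduler_records.scheduler_record_id'))
        instance_uuid = Column(String(36))
        host = Column(String(255))
        weight = Column(Float)
        result = Column(String(16))        # 'Success' / 'Failed'
        reason = Column(Text)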

2. Blueprint record-scheduler-information[2] will try to solve problem #2 by generating structured information for each scheduler run.

Design choices:
a) Record 'scheduler_records' info in the database, which is easy to query but introduces a great burden in terms of performance, extra database space usage, clean-up/archiving policy, security-related issues[4], etc.
b) Record 'scheduler_records' in a separate log file, in JSON format, with each line holding a single record for one scheduler run, and then add a new API extension to retrieve the last n (as a query parameter) scheduler records. The benefit of this approach is that it avoids the database issues, plays well with external tooling, and provides a central place to view the log. As a compromise, though, we won't be able to query logs for a specific request_id.
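A rough sketch of option b) (the file path and helper names are placeholders for illustration only):

    # Sketch of the JSON-lines log in option (b): one JSON document per line,
    # plus a helper an API extension could use to return the last n records.
    import json

    SCHEDULER_RECORDS_LOG = '/var/log/nova/scheduler-records.log'

    def append_record(record):
        with open(SCHEDULER_RECORDS_LOG, 'a') as f:
            f.write(json.dumps(record) + '\n')

    def last_records(n=10):
        with open(SCHEDULER_RECORDS_LOG) as f:
            lines = f.readlines()
        return [json.loads(line) for line in lines[-n:]]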

So the question here is: is a database storage solution still desirable? Or should we implement a backend driver that the deployer could choose? In the latter case, the API would be the minimum set needed to support both.

Any comments or thoughts are highly appreciated.

[1] https://blueprints.launchpad.net/nova/+spec/internal-scheduler
[2] https://blueprints.launchpad.net/nova/+spec/record-scheduler-information
[3]
https://docs.google.com/document/d/1EsSNeq_tD-3NiX4IphCrQj4ii0_dO-8-Jn7NWHRJPNg/edit?usp=sharing
[4] https://bugs.launchpad.net/nova/+bug/1175193

Thanks,
--
Qiu Yu

Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2013-12-09 03:31:36 -0800:
 Hi all,
 
 So I've been getting concerned about $subject recently, and based on some
 recent discussions so have some other heat-core folks, so I wanted to start
 a discussion where we can agree and communicate our expectations related to
 nomination for heat-core membership (becuase we do need more core
 reviewers):
 
 The issues I have are:
 - Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/nameshame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.
 

This is quite misleading, so please do put the TripleO reference in
context:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016186.html
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016232.html

Reading the text of those two I think you can see that while the stats
are a tool Robert is using to find the good reviewers, it is not the
principal metric.

I also find it quite frustrating that you are laying accusations of
stats-seeking without proof. That is just spreading FUD. I'm sure that
is not what you want to do, so I'd like to suggest that we not accuse our
community of any kind of cheating or gaming of the system without
actual proof. I would also suggest that these accusations be made in
private and dealt with directly rather than as broad passive-aggressive
notes on the mailing list.

 - Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.


Not sure I agree that it absolutely can't, but it certainly isn't
something these stats are even meant to do.

We other reviewers must keep tabs on our aspiring core reviewers and
try to rate them ourselves based on whether or not they're spotting the
problems we would spot, and whether or not they're also upholding the
culture we want to foster in our community. We express our rating of
these people when voting on a nomination in the mailing list.

So what you're saying is, there is more to our votes than the mechanical
number. I'd agree 100%. However, I think the numbers _do_ let people
know where they stand in one very limited aspect versus the rest of the
community.

I would actually find it interesting if we had a meta-gerrit that asked
us to review the reviews. This type of system works fantastically for
stackexchange. That would give us a decent mechanical number as well.

 So if you're aiming for heat-core nomination, here's my personal wish-list,
 but hopefully others can proide their input and we can update the wiki with
 the resulting requirements/guidelines:
 
 - Make your reviews high-quality.  Focus on spotting logical errors,
   reducing duplication, consistency with existing interfaces, opportunities
   for reuse/simplification etc.  If every review you do is +1, or -1 for a
   trivial/cosmetic issue, you are not making a strong case for -core IMHO.
 

Disagree. I am totally fine having somebody in core who is really good
at finding all of the trivial cosmetic issues. Those should mean that
the second +2'er of their code is looking at code free of trivial and
cosmetic issues that distract from the bigger issues.

 - Send patches.  Some folks argue that -core membership is only about
   reviews, I disagree - There are many aspects of reviews which require
   deep knowledge of the code, e.g spotting structural issues, logical
   errors caused by interaction with code not modified by the patch,
   effective use of test infrastructure, etc etc.  This deep knowledge comes
   from writing code, not only reviewing it.  This also gives us a way to
   verify your understanding and alignment with our sylistic conventions.
 

The higher the bar goes, the less reviewers we will have. There are
plenty of people that will find _tons_ of real issues but won't submit
very many patches if any. However, I think there isn't any value in
arguing over this point as most of our reviewers are also submitting
patches already.

 - Fix bugs.  Related to the above, help us fix real problems by testing,
   reporting bugs, and fixing them, or take an existing bug and post a patch
   fixing it.  Ask an existing team member to direct you if you're not sure
   which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
   sometimes worthwhile, but make sure that's not all you do, as we need
   -core folk who can find, report, fix and review real user-impacting
   problems (not just new features).  This is also a great way to build
   trust and knowledge if you're aiming to contribute features to Heat.


There's a theme running through 

Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2013-12-09 09:52:25 -0800:
 On 09/12/13 06:31, Steven Hardy wrote:
  Hi all,
 
  So I've been getting concerned about $subject recently, and based on some
  recent discussions so have some other heat-core folks, so I wanted to start
  a discussion where we can agree and communicate our expectations related to
  nomination for heat-core membership (because we do need more core
  reviewers):
 
  The issues I have are:
  - Russell's stats (while very useful) are being used by some projects as
 the principal metric related to -core membership (ref TripleO's monthly
 cull/nameshame, which I am opposed to btw).  This is in some cases
 encouraging some stats-seeking in our review process, IMO.
 
  - Review quality can't be measured mechanically - we have some folks who
 contribute fewer, but very high quality reviews, and are also very active
 contributors (so knowledge of the codebase is not stale).  I'd like to
 see these people do more reviews, but removing people from core just
 because they drop below some arbitrary threshold makes no sense to me.
 
 +1
 
 Fun fact: due to the quirks of how Gerrit produces the JSON data dump, 
 it's not actually possible for the reviewstats tools to count +0 
 reviews. So, for example, one can juice one's review stats by actively 
 obstructing someone else's work (voting -1) when a friendly comment 
 would have sufficed. This is one of many ways in which metrics offer 
 perverse incentives.
 
 Statistics can be useful. They can be particularly useful *in the 
 aggregate*. But as soon as you add a closed feedback loop you're no 
 longer measuring what you originally thought - mostly you're just 
 measuring the gain of the feedback loop.
 

I think I understand the psychology of stats and incentives, and I know
that this _may_ happen.

However, can we please be more careful about how this is referenced?
Your message above is suggesting the absolute _worst_ behavior from our
community. That is not what I expect, and I think anybody who was doing
that would be dealt with _swiftly_.



Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Dmitry Mescheryakov
2013/12/9 Clint Byrum cl...@fewbar.com

 Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
  On 12/09/2013 09:41 AM, David Boucha wrote:
   On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
  
  
  
   On 12/08/2013 07:36 AM, Robert Collins wrote:
On 8 December 2013 17:23, Monty Taylor mord...@inaugust.com
   mailto:mord...@inaugust.com wrote:
   
   
I suggested salt because we could very easily make trove and
   savana into
salt masters (if we wanted to) just by having them import salt
   library
and run an api call. When they spin up nodes using heat, we
   could easily
   have that do the cert exchange - and the admins of the site
   need not
know _anything_ about salt, puppet or chef - only about trove
   or savana.
   
Are salt masters multi-master / HA safe?
   
E.g. if I've deployed 5 savanna API servers to handle load, and
 they
all do this 'just import', does that work?
   
If not, and we have to have one special one, what happens when it
fails / is redeployed?
  
   Yes. You can have multiple salt masters.
  
Can salt minions affect each other? Could one pretend to be a
   master,
or snoop requests/responses to another minion?
  
   Yes and no. By default no - and this is protected by key
   encryption and
   whatnot. They can affect each other if you choose to explicitly
 grant
   them the ability to. That is - you can give a minion an acl to
   allow it
   inject specific command requests back up into the master. We use
   this in
   the infra systems to let a jenkins slave send a signal to our salt
   system to trigger a puppet run. That's all that slave can do
 though -
   send the signal that the puppet run needs to happen.
  
   However - I don't think we'd really want to use that in this case,
   so I
   think they answer you're looking for is no.
  
Is salt limited: is it possible to assert that we *cannot* run
arbitrary code over salt?
  
   In as much as it is possible to assert that about any piece of
   software
   (bugs, of course, blah blah) But the messages that salt sends to a
   minion are "run this thing that you have a local definition for"
   rather than "here, have some python and run it".
  
   Monty
  
  
  
   Salt was originally designed to be a unified agent for a system like
   openstack. In fact, many people use it for this purpose right now.
  
   I discussed this with our team management and this is something
   SaltStack wants to support.
  
   Are there any specifics things that the salt minion lacks right now to
   support this use case?
  
 
  David,
 
  If my parsing of the salt nomenclature is correct, Salt provides a
  Master (eg a server) and minions (eg agents that connect to the salt
  server).  The salt server tells the minions what to do.
 
   This is not desirable for a unified agent (at least in the case of Heat).
 
  The bar is very very very high for introducing new *mandatory* *server*
  dependencies into OpenStack.  Requiring a salt master (or a puppet
  master, etc) in my view is a non-starter for a unified guest agent
  proposal.  Now if a heat user wants to use puppet, and can provide a
  puppet master in their cloud environment, that is fine, as long as it is
  optional.
 

 What if we taught Heat to speak salt-master-ese? AFAIK it is basically
 an RPC system. I think right now it is 0mq, so it would be relatively
  straightforward to just have Heat start talking to the agents in 0mq.

  A guest agent should have the following properties:
  * minimal library dependency chain
  * no third-party server dependencies
  * packaged in relevant cloudy distributions
 

 That last one only matters if the distributions won't add things like
 agents to their images post-release. I am pretty sure "work well in
 OpenStack" is important for server distributions and thus this is at
 least something we don't have to freak out about too much.

  In terms of features:
  * run shell commands
  * install files (with selinux properties as well)
  * create users and groups (with selinux properties as well)
  * install packages via yum, apt-get, rpm, pypi
  * start and enable system services for systemd or sysvinit
  * Install and unpack source tarballs
  * run scripts
  * Allow grouping, selection, and ordering of all of the above operations
 

 All of those things are general purpose low level system configuration
 features. None of them will be needed for Trove or Savanna. They need
 to do higher level things like run a Hadoop job or create a MySQL user.


I agree with Clint on this one; Savanna does need high-level domain-specific
operations. We can do anything having just a root shell. But security-wise,
as it was already mentioned in the 
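
As a purely illustrative sketch of the "import the salt library and run an API
call" idea above, a service such as Savanna or Trove might drive a preconfigured
minion-side action like this; the module and argument names are invented for the
example, not an agreed interface:

# Hypothetical sketch: a service process acting as a salt master client,
# asking one minion (a guest VM) to run a single preconfigured action.
# Assumes a salt master runs alongside the service and is readable by it.
import salt.client

def run_guest_action(minion_id, action, **kwargs):
    # LocalClient talks to the local salt master's job bus.
    client = salt.client.LocalClient()
    # 'action' maps to a module function shipped in the blessed guest image,
    # e.g. 'savanna_agent.run_job' (an invented name, for illustration only).
    return client.cmd(minion_id, action, kwarg=kwargs, timeout=60)

# e.g. run_guest_action('guest-1234', 'savanna_agent.run_job', job_id='abc')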

Re: [openstack-dev] [qa][keystone] Keystoneclient tests to tempest

2013-12-09 Thread Jeremy Stanley
On 2013-12-09 11:50:31 -0600 (-0600), Brant Knudson wrote:
[...]
 I don't know of any specific testing that verifies that the
 client works against older (stable/*) servers. Maybe it happens
 when you submit a change to stable/ later.

When a change is submitted or approved for a stable/foo server
branch, Tempest tests run with the other stable servers from that
release along with the master branch tip of all client libraries.
This helps prevent a stable branch update in a server from doing
something which won't work with the latest client versions (as long
as it's being exercised in Tempest). Work is underway to do the same
for changes to the clients so they don't break stable branches of
servers (as happened recently with the iso8601/grizzly debacle). The
other missing piece of the puzzle is testing the servers against the
latest release of the client, so we don't land changes to servers
which require unreleased client features.
-- 
Jeremy Stanley



Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-09 Thread Russell Bryant
On 12/09/2013 12:56 PM, Gordon Sim wrote:
 In the case of Nova (and others that followed Nova's messaging
 patterns), I firmly believe that for scaling reasons, we need to move
 toward it becoming the norm to use peer-to-peer messaging for most
 things.  For example, the API and conductor services should be talking
 directly to compute nodes instead of through a broker.
 
 Is scale the only reason for preferring direct communication? I don't
 think an intermediary based solution _necessarily_ scales less
 effectively (providing it is distributed in nature, which for example is
 one of the central aims of the dispatch router in Qpid).
 
 That's not to argue that peer-to-peer shouldn't be used, just trying to
 understand all the factors.

Scale is the primary one.  If the intermediary based solution is easily
distributed to handle our scaling needs, that would probably be fine,
too.  That just hasn't been our experience so far with both RabbitMQ and
Qpid.

 One other pattern that can benefit from intermediated message flow is in
 load balancing. If the processing entities are effectively 'pulling'
 messages, this can more naturally balance the load according to capacity
 than when the producer of the workload is trying to determine the best
 balance.

Yes, that's another factor.  Today, we rely on the message broker's
behavior to equally distribute messages to a set of consumers.

One example is how Nova components talk to the nova-scheduler service.
All instances of the nova-scheduler service are reading off a single
'scheduler' queue, so messages hit them round-robin.

In the case of the zeromq driver, this logic is embedded in the client.
 It has to know about all consumers and handles choosing where each
message goes itself.  See references to the 'matchmaker' code for this.

Honestly, using a more lightweight, distributed router like Dispatch
sounds *much* nicer.

  The exception
 to that is cases where we use a publish-subscribe model, and a broker
 serves that really well.  Notifications and notification consumers
 (such as Ceilometer) are the prime example.
 
 The 'fanout' RPC cast would perhaps be another?

Good point.

In Nova we have been working to get rid of the usage of this pattern.
In the latest code the only place it's used AFAIK is in some code we
expect to mark deprecated (nova-network).

 In terms of existing messaging drivers, you could accomplish this with
 a combination of both RabbitMQ or Qpid for brokered messaging and
 ZeroMQ for the direct messaging cases.  It would require only a small
 amount of additional code to allow you to select a separate driver for
 each case.

 Based on my understanding, AMQP 1.0 could be used for both of these
 patterns.  It seems ideal long term to be able to use the same
 protocol for everything we need.
 
 That is certainly true. AMQP 1.0 is fully symmetric so it can be used
 directly peer-to-peer as well as between intermediaries. In fact, apart
 from the establishment of the connection in the first place, a process
 need not see any difference in the interaction either way.
 
 We could use only ZeroMQ, as well.  It doesn't have the
 publish-subscribe stuff we need built in necessarily.  Surely that has
 been done multiple times by others already, though.  We could build it
 too, if we had to.
 
 Indeed. However the benefit of choosing a protocol is that you can use
 solutions developed outside OpenStack or any other single project.
 
 Can you (or someone) elaborate further on what will make this solution
 superior to our existing options?
 
 Superior is a very bold claim to make :-) I do personally think that an
 AMQP 1.0 based solution would be worthwhile for the reasons above. Given
 a hypothetical choice between say the current qpid driver and one that
 could talk to different back-ends, over a standard protocol for which
 e.g. semantic monitoring tools could be developed and which would make
 reasoning about partial upgrades or migrations easier, I know which I
 would lean to. Obviously that is not the choice here, since one already
 exists and the other is as yet hypothetical. However, as I say I think
 this could be a worthwhile journey and that would justify at least
 taking some initial steps.

Thanks for sharing some additional insight.

I was already quite optimistic, but you've helped solidify that.  I'm
very interested in diving deeper into how Dispatch would fit into the
various ways OpenStack is using messaging today.  I'd like to get a
better handle on how the use of Dispatch as an intermediary would scale
out for a deployment that consists of 10s of thousands of compute nodes,
for example.

Is it roughly just that you can have a network of N Dispatch routers
that route messages from point A to point B, and for notifications we
would use a traditional message broker (qpidd or rabbitmq) ?

-- 
Russell Bryant
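
To make the two delivery patterns discussed above concrete (competing consumers
on one topic versus a fanout cast), here is a small illustrative sketch against
the oslo.messaging API of the time; the topic, method names and transport URL
are placeholders, not actual Nova code:

# Sketch only: shows broker-mediated round-robin delivery and fanout.
from oslo.config import cfg
from oslo import messaging

# Placeholder URL; a real deployment reads this from its config.
transport = messaging.get_transport(cfg.CONF, 'rabbit://guest:guest@localhost:5672/')

# 1) Competing consumers: every scheduler worker listens on the same
#    'scheduler' topic, so the broker hands each message to one of them.
client = messaging.RPCClient(transport, messaging.Target(topic='scheduler'))
client.cast({}, 'run_instance', request_spec={'flavor': 'm1.small'})

# 2) Fanout: every listener on the topic gets its own copy of the message.
fanout = client.prepare(fanout=True)
fanout.cast({}, 'update_service_capabilities', capabilities={})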


Re: [openstack-dev] Unified Guest Agent proposal

2013-12-09 Thread Dmitry Mescheryakov
2013/12/9 Kurt Griffiths kurt.griffi...@rackspace.com

  This list of features makes me *very* nervous from a security
 standpoint. Are we talking about giving an agent an arbitrary shell command
 or file to install, and it goes and does that, or are we simply triggering
 a preconfigured action (at the time the agent itself was installed)?


I believe the agent must execute only a set of preconfigured actions,
precisely for security reasons. It should be up to the using project
(Savanna/Trove) to decide which actions must be exposed by the agent.
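
A minimal sketch of that "preconfigured actions only" idea, with invented
handler names rather than any agreed Savanna/Trove interface:

# Agent dispatches only whitelisted actions; everything else is rejected.
def create_db_user(name, password):
    # Would call the database's own admin API here, never a shell.
    raise NotImplementedError

def run_hadoop_job(job_id):
    raise NotImplementedError

ALLOWED_ACTIONS = {
    'create_db_user': create_db_user,
    'run_hadoop_job': run_hadoop_job,
}

def handle_request(action, params):
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        # Anything not preconfigured at install time is refused outright.
        raise ValueError('action %r is not exposed by this agent' % action)
    return handler(**params)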



   From: Steven Dake sd...@redhat.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Monday, December 9, 2013 at 11:41 AM
 To: OpenStack Dev openstack-dev@lists.openstack.org

 Subject: Re: [openstack-dev] Unified Guest Agent proposal

  In terms of features:
 * run shell commands
 * install files (with selinux properties as well)
 * create users and groups (with selinux properties as well)
 * install packages via yum, apt-get, rpm, pypi
 * start and enable system services for systemd or sysvinit
 * Install and unpack source tarballs
 * run scripts
 * Allow grouping, selection, and ordering of all of the above operations





Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Zane Bitter

On 09/12/13 14:03, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2013-12-09 09:52:25 -0800:

On 09/12/13 06:31, Steven Hardy wrote:

Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
the principal metric related to -core membership (ref TripleO's monthly
cull/nameshame, which I am opposed to btw).  This is in some cases
encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
contribute fewer, but very high quality reviews, and are also very active
contributors (so knowledge of the codebase is not stale).  I'd like to
see these people do more reviews, but removing people from core just
because they drop below some arbitrary threshold makes no sense to me.


+1

Fun fact: due to the quirks of how Gerrit produces the JSON data dump,
it's not actually possible for the reviewstats tools to count +0
reviews. So, for example, one can juice one's review stats by actively
obstructing someone else's work (voting -1) when a friendly comment
would have sufficed. This is one of many ways in which metrics offer
perverse incentives.

Statistics can be useful. They can be particularly useful *in the
aggregate*. But as soon as you add a closed feedback loop you're no
longer measuring what you originally thought - mostly you're just
measuring the gain of the feedback loop.



I think I understand the psychology of stats and incentives, and I know
that this _may_ happen.

However, can we please be more careful about how this is referenced?
Your message above is suggesting the absolute _worst_ behavior from our
community. That is not what I expect, and I think anybody who was doing
that would be dealt with _swiftly_.


Sorry for the confusion, I wasn't trying to suggest that at all. FWIW I 
haven't noticed anyone gaming the stats (maybe I haven't been looking at 
enough reviews ;). What I have noticed is that every time I leave a +0 
comment on a patch, I catch myself thinking "this won't look good on the 
stats" - and then I continue on regardless. If somebody who wasn't core 
but wanted to be were to -1 the patch instead in similar circumstances, 
then I wouldn't blame them in the least for responding to that incentive.


My point, and I think Steve's, is that we should be careful how we *use* 
the stats, so that folks won't feel this pressure. It's not at all about 
calling anybody out, but I apologise for not making that clearer.


cheers,
Zane.



Re: [openstack-dev] [qa][keystone] Keystoneclient tests to tempest

2013-12-09 Thread Adam Young

On 12/09/2013 11:07 AM, Sean Dague wrote:

On 12/09/2013 10:12 AM, Brant Knudson wrote:

Monty -

Thanks for doing the work already to get the infrastructure set up.
Looks like I've got the easy part here. I posted an initial patch that
has one test from keystone in https://review.openstack.org/#/c/60724/ .
I hope to be able to move all the tests over unchanged. The tricky part
is getting all the fixtures set up the same way that keystone does.

I think a direct port of the keystone fixtures is the wrong approach.
These really need to act more like the scenario tests that exist over
there. And if the intent is just a dump of the keystone tests we need to
step back... because that's not going to get accepted.

I actually think that we should solve #4 first - how you test the thing
you actually want to test in the gate. Which is about getting
devstack-gate to setup the world that you want to test. I really think
the location of the tests all flow from there. Because right now it
seems like the cart is before the horse.



I think we can rework the Keystone tests to meet the Tempest standard 
without making it into a major rewrite.



The biggest sin of the current tests is that they create self.user_foo 
with an id of 'foo'.  But the tests should never be looking for the 
string 'foo'. Instead, they should be doing the equivalent of:


assertEquals(self.user_foo.id, some_user.id)

If we make the fixture Setup create user_foo, but with an ID generated 
for each test using uuid4, we should be able to have the proper test 
semantics.
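
A minimal, self-contained sketch of that fixture change, using a stand-in
identity backend rather than keystone's real one:

import unittest
import uuid


class FakeIdentityAPI(object):
    """Stand-in for keystone's identity backend, for illustration only."""
    def __init__(self):
        self._users = {}

    def create_user(self, user):
        self._users[user['id']] = user
        return user

    def get_user(self, user_id):
        return self._users[user_id]


class UserFixtureTest(unittest.TestCase):
    def setUp(self):
        self.identity_api = FakeIdentityAPI()
        # user_foo gets a fresh uuid4 id per test; never the literal 'foo'.
        self.user_foo = {'id': uuid.uuid4().hex,
                         'name': 'user_%s' % uuid.uuid4().hex}
        self.identity_api.create_user(self.user_foo)

    def test_get_user(self):
        some_user = self.identity_api.get_user(self.user_foo['id'])
        # Compare ids rather than looking for a hard-coded string.
        self.assertEqual(self.user_foo['id'], some_user['id'])


if __name__ == '__main__':
    unittest.main()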






-Sean






Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Robert Collins
On 10 December 2013 00:31, Steven Hardy sha...@redhat.com wrote:
 Hi all,

 So I've been getting concerned about $subject recently, and based on some
 recent discussions so have some other heat-core folks, so I wanted to start
 a discussion where we can agree and communicate our expectations related to
 nomination for heat-core membership (because we do need more core
 reviewers):

Great! (Because I think you do too :).

 The issues I have are:
 - Russell's stats (while very useful) are being used by some projects as
   the principal metric related to -core membership (ref TripleO's monthly
   cull/nameshame, which I am opposed to btw).  This is in some cases
   encouraging some stats-seeking in our review process, IMO.

With all due respect - you are entirely wrong, and I am now worried
about the clarity of my emails vis-a-vis review team makeup. I presume
you've read them in detail before referencing them of course - have
you? What can I do to make the actual process clearer?

IMO the primary metric for inclusion is being perceived by a bunch of
existing -cores as being substantial contributors of effective, useful
reviews. The reviewstats stats are a useful aide de memoir, nothing
more. Yes, if you don't do enough reviews for an extended period -
several months - then you're likely to no longer be perceived as being
a substantial contributor of effective, useful reviews - and no longer
aware enough of the current codebase and design to just step back into
-core shoes.

So it's a gross mischaracterisation to imply that a democratic process
aided by some [crude] stats has been reduced to name & shame, and a
rather offensive one.

Anyone can propose members for inclusion in TripleO-core, and we all
vote - likewise removal. The fact I do a regular summary and propose
some folk and some cleanups is my way of ensuring that we don't get
complacent - that we recognise folk who are stepping up, and give them
guidance if they aren't stepping up in an effective manner - or if
they are no longer being effective enough to be recognised - in my
opinion. If the rest of the TripleO core team agrees with my opinion,
we get changes to -core, if not, we don't. If someone else wants to
propose a -core member, they are welcome to! Hopefully with my taking
point on this, that effort isn't needed - but it's still possible.

 - Review quality can't be measured mechanically - we have some folks who
   contribute fewer, but very high quality reviews, and are also very active
   contributors (so knowledge of the codebase is not stale).  I'd like to
   see these people do more reviews, but removing people from core just
   because they drop below some arbitrary threshold makes no sense to me.

In principle, review quality *can* be measured mechanically, but the
stats we have do not do that - and cannot. We'd need mechanical follow
through to root cause analysis for reported defects (including both
crashes and softer defects like 'not as fast as I want' or 'feature X
is taking too long to develop') and impact on review and contribution
rates to be able to assess the impact of a reviewer over time - what
bugs they prevented entering the code base, how their reviews kept the
code base maintainable and flexible, and how their interactions with
patch submitters helped grow the community. NONE of the stats we have
today even vaguely approximate that.

 So if you're aiming for heat-core nomination, here's my personal wish-list,
 but hopefully others can provide their input and we can update the wiki with
 the resulting requirements/guidelines:

I'm not aiming for heat-core, so this is just kibbitzing- take it for
what it's worth:

 - Make your reviews high-quality.  Focus on spotting logical errors,
   reducing duplication, consistency with existing interfaces, opportunities
   for reuse/simplification etc.  If every review you do is +1, or -1 for a
   trivial/cosmetic issue, you are not making a strong case for -core IMHO.

There is a tension here. I agree that many trivial/cosmetic issues are
not a big deal in themselves. But in aggregate I would argue that a
codebase with lots of typos, poor idioms, overly large classes and
other shallow-at-the-start issues is one that will become
progressively harder to contribute to.

 - Send patches.  Some folks argue that -core membership is only about
   reviews, I disagree - There are many aspects of reviews which require
   deep knowledge of the code, e.g. spotting structural issues, logical
   errors caused by interaction with code not modified by the patch,
   effective use of test infrastructure, etc etc.  This deep knowledge comes
   from writing code, not only reviewing it.  This also gives us a way to
   verify your understanding and alignment with our stylistic conventions.

I think seeing patches from people is a great way to see whether they
are able to write code that fits with the codebase. I'm not sure it's
a good indicator that folk will be effective, positive reviewers.
Personally, I 

Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread Adam Young

On 12/06/2013 04:44 AM, David Chadwick wrote:

Another alternative is to change role name into role display name,
indicating that the string is only to be used in GUIs, is not guaranteed
to be unique, is set by the role creator, can be any string in any
character set, and is not used by the system anywhere. Only role ID is
used by the system, in policy evaluation, in user-role assignments, in
permission-role assignments etc.


That will make policy much harder to read.  I'd recommend that the role 
name continue to be the good name, for both UI and for policy enforcement.







regards

David

On 05/12/2013 16:21, Tiwari, Arvind wrote:

Hi David,

Let me capture these details in ether pad. I will drop an email after adding 
these details in etherpad.

Thanks,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: Thursday, December 05, 2013 4:15 AM
To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List (not for 
usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

we are making good progress, but what I dont like about your proposal
below is that the role name is not unique. There can be multiple roles
with the same name, but different IDs, and different scopes. I dont like
this, and I think it would be confusing to users/administrators. I think
the role names should be different as well. This is not difficult to
engineer if the names are hierarchically structured based on the name of
the role creator. The creator might be owner of the resource that is
being scoped, but it need not necessarily be. Assuming it was, then in
your examples below we might have role names of NovaEast.admin and
NovaWest.admin. Since these are strings, policies can be easily adapted
to match on NovaWest.admin instead of admin.

regards

david

On 04/12/2013 17:21, Tiwari, Arvind wrote:

Hi Adam,

I have added my comments in line.

As per my request yesterday and David's proposal, the following role-def data model 
looks generic enough and innovative enough to accommodate future extensions.

{
   "role": {
     "id": "76e72a",
     "name": "admin",   (you can give whatever name you like)
     "scope": {
       "id": "---id---",   (the ID should be 1-to-1 mapped with the resource in "type" and must be an immutable value)
       "type": "service | file | domain etc.",   (the type can be any type of resource which explains the scoping context)
       "interface": "--interface--"   (we still need to work on this field; the idea of this optional field is to indicate the interface of the resource (endpoint for a service, path for a file, etc.) for which the role-def is created; it can be empty)
     }
   }
}

Based on the above data model, two admin roles for Nova in two separate regions would 
be as below:

{
   "role": {
     "id": "76e71a",
     "name": "admin",
     "scope": {
       "id": "110",   (suppose 110 is the Nova serviceId)
       "interface": "1101",   (suppose 1101 is the Nova region East endpointId)
       "type": "service"
     }
   }
}

{
   "role": {
     "id": "76e72a",
     "name": "admin",
     "scope": {
       "id": "110",
       "interface": "1102",   (suppose 1102 is the Nova region West endpointId)
       "type": "service"
     }
   }
}

This way we can keep role-assignments abstracted from the resource on which the 
assignment is created. This also opens the door to having service- and/or 
endpoint-scoped tokens, as I mentioned in https://etherpad.openstack.org/p/1Uiwcbfpxq.

David, I have updated 
https://etherpad.openstack.org/p/service-scoped-role-definition line #118 
explaining the rationale behind the field.
I would also appreciate your views on https://etherpad.openstack.org/p/1Uiwcbfpxq 
too, which supports the 
https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.


Thanks,
Arvind

-Original Message-
From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, December 03, 2013 6:52 PM
To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

I've been thinking about your comment that "nested roles are confusing".
AT: Thanks for considering my comment about nested role-def.

What if we backed off and said the following:


"Some role-definitions are owned by services.  If a role definition is
owned by a service, then in role assignment lists in tokens those roles will
be prefixed by the service name.  '/' is a reserved character and will be
used as the divider between segments of the role definition."

That drops arbitrary nesting, and provides a reasonable namespace.  Then
a role def would look like:

glance/admin  for the admin role on the glance project.

AT: It seems this approach is not going to help; a service rename would impact 
all the role-defs for a particular service, and we are back to the same problem.

In theory, we could add the domain to the namespace, but that seems

[openstack-dev] [climate] IRC meeting exceptionnally today 2000 UTC

2013-12-09 Thread Sylvain Bauza
As per https://wiki.openstack.org/wiki/Meetings there is no meeting at 2000
UTC, so let's meet up together on #openstack-meeting in 10 mins.

Thanks,
-Sylvain


2013/12/9 Sergey Lukjanov slukja...@mirantis.com

 Time is ok for me too.

 @Sylvain, IMO policies should not be high priority when nothing else is
 really completed and their support doesn't block main functionality. It'll
 be great to have them in 0.1, but I think it's a nice-to-have for 0.1.

 Thanks.


 On Mon, Dec 9, 2013 at 1:46 PM, Dina Belova dbel...@mirantis.com wrote:

 Ok, we'll discuss :)


 On Monday, December 9, 2013, Sylvain Bauza wrote:

  Well, I wouldn't say more, I also have family concerns ;-)

 I will prepare the meeting by checking all individual actions, so I will
 only raise the ones that are left.
 Hooks clarification can be postponed to the next meeting or discussed
 directly on chan.

 I saw you made a first pass at triaging blueprints for 0.1. Some are
 good, some need to be discussed, IMHO. For example, not having policies in
 Climate sounds like a showstopper to me.

 -Sylvain

 Le 09/12/2013 10:38, Dina Belova a écrit :

 I think it's ok, but not more than a half of hour

 On Monday, December 9, 2013, Sylvain Bauza wrote:

  Hi Dina,
 Forwarding your request to openstack-dev@ as everyone needs to be
 aware of that.

 Which time do you propose for meeting us today ? On my side, I'll be
 travelling tomorrow and the day after, I'll be out of office with limited
 access, so we need to sync up today.
 I can propose 2000 UTC today, but that's very late for you :/

 -Sylvain

 Le 09/12/2013 07:10, Dina Belova a écrit :

 Sylvin,

  Sergey and I have no opportunity to be at the meeting today.

  What do you think about moving it to the evening (in #climate
 channel) to have a full quorum?

  Thank you.

  --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.





Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread David Chadwick


On 09/12/2013 19:37, Adam Young wrote:
 On 12/06/2013 04:44 AM, David Chadwick wrote:
 Another alternative is to change role name into role display name,
 indicating that the string is only to be used in GUIs, is not guaranteed
 to be unique, is set by the role creator, can be any string in any
 character set, and is not used by the system anywhere. Only role ID is
 used by the system, in policy evaluation, in user-role assignments, in
 permission-role assignments etc.
 
 That will make policy much harder to read.  I'd recommend that the role
 name continue to be the good name, for both UI and for policy
 enforcement.

in which case all role names must be unique

David

 
 
 
 

 regards

 David

 On 05/12/2013 16:21, Tiwari, Arvind wrote:
 Hi David,

 Let me capture these details in ether pad. I will drop an email after
 adding these details in etherpad.

 Thanks,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Thursday, December 05, 2013 4:15 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List
 (not for usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 we are making good progress, but what I dont like about your proposal
 below is that the role name is not unique. There can be multiple roles
 with the same name, but different IDs, and different scopes. I dont like
 this, and I think it would be confusing to users/administrators. I think
 the role names should be different as well. This is not difficult to
 engineer if the names are hierarchically structured based on the name of
 the role creator. The creator might be owner of the resource that is
 being scoped, but it need not necessarily be. Assuming it was, then in
 your examples below we might have role names of NovaEast.admin and
 NovaWest.admin. Since these are strings, policies can be easily adapted
 to match on NovaWest.admin instead of admin.

 regards

 david

 On 04/12/2013 17:21, Tiwari, Arvind wrote:
 Hi Adam,

 I have added my comments in line.

 As per my request yesterday and David's proposal, following role-def
 data model is looks generic enough and seems innovative to
 accommodate future extensions.

 {
role: {
  id: 76e72a,
  name: admin, (you can give whatever name you like)
  scope: {
id: ---id--, (ID should be  1 to 1 mapped with resource
 in type and must be immutable value)
type: service | file | domain etc., (Type can be any type
 of resource which explains the scoping context)
interface:--interface--  (We are still need working on
 this field. My idea of this optional field is to indicate the
 interface of the resource (endpoint for service, path for File,)
 for which the role-def is created and can be
 empty.)
  }
}
 }

 Based on above data model two admin roles for nova for two separate
 region wd be as below

 {
role: {
  id: 76e71a,
  name: admin,
  scope: {
id: 110, (suppose 110 is Nova serviceId)
interface: 1101, (suppose 1101 is Nova region East
 endpointId)
type: service
  }
}
 }

 {
role: {
  id: 76e72a,
  name: admin,
  scope: {
id: 110,
interface: 1102,(suppose 1102 is Nova region West
 endpointId)
type: service
  }
}
 }

 This way we can keep role-assignments abstracted from resource on
 which the assignment is created. This also open doors to have
 service and/or endpoint scoped token as I mentioned in
 https://etherpad.openstack.org/p/1Uiwcbfpxq.

 David, I have updated
 https://etherpad.openstack.org/p/service-scoped-role-definition line
 #118 explaining the rationale behind the field.
 I wd also appreciate your vision on
 https://etherpad.openstack.org/p/1Uiwcbfpxq too which is support
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.



 Thanks,
 Arvind

 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com]
 Sent: Tuesday, December 03, 2013 6:52 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for
 usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 I've been thinking about your comment that nested roles are confusing
 AT: Thanks for considering my comment about nested role-def.

 What if we backed off and said the following:


 Some role-definitions are owned by services.  If a Role definition is
 owned by a service, in role assignment lists in tokens, those roles we
 be prefixd by the service name.  / is a reserved cahracter and weill be
 used as the divider between segments of the role definition 

 That drops arbitrary nesting, and provides a reasonable namespace. 
 Then
 a role def would look like:

 glance/admin  for the admin role on the glance project.

 AT: It seems this approach is not going to help, 

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Robert Collins
On 9 December 2013 23:56, Jaromir Coufal jcou...@redhat.com wrote:

 On 2013/07/12 01:59, Robert Collins wrote:

* Creation
   * Manual registration
  * hardware specs from Ironic based on mac address (M)

 Ironic today will want IPMI address + MAC for each NIC + disk/cpu/memory
 stats

 For registration it is just the management MAC address which is needed, right? Or
 does Ironic also need an IP? I think that the MAC address might be enough; we can
 display the IP in the node details later on.

Ironic needs all the details I listed today. Management MAC is not
currently used at all, but would be needed in future when we tackle
IPMI IP managed by Neutron.
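
As an illustration of the registration data Ironic wants today (IPMI address,
one MAC per NIC, and disk/cpu/memory stats), a rough sketch of manual
registration through python-ironicclient; the endpoint, token and hardware
values below are placeholders:

# Sketch only: placeholders throughout; a real deployment pulls these
# values from keystone, config and the hardware inventory.
from ironicclient import client

ironic = client.get_client(1, os_auth_token='TOKEN',
                           ironic_url='http://undercloud:6385/v1')

node = ironic.node.create(
    driver='pxe_ipmitool',
    driver_info={'ipmi_address': '10.0.0.5',   # IPMI IP, not yet Neutron-managed
                 'ipmi_username': 'admin',
                 'ipmi_password': 'secret'},
    properties={'cpus': 8, 'memory_mb': 16384, 'local_gb': 500})

# One port per NIC, keyed by MAC address.
for mac in ('52:54:00:12:34:56', '52:54:00:ab:cd:ef'):
    ironic.port.create(node_uuid=node.uuid, address=mac)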

* Auto-discovery during undercloud install process (M)
* Monitoring
* assignment, availability, status
* capacity, historical statistics (M)

 Why is this under 'nodes'? I challenge the idea that it should be
 there. We will need to surface some stuff about nodes, but the
 underlying idea is to take a cloud approach here - so we're monitoring
 services, that happen to be on nodes. There is room to monitor nodes,
 as an undercloud feature set, but lets be very very specific about
 what is sitting at what layer.

 We need both - we need to track services but also state of nodes (CPU, RAM,
 Network bandwidth, etc). So in node detail you should be able to track both.

Those are instance characteristics, not node characteristics. An
instance is software running on a Node, and the amount of CPU/RAM/NIC
utilisation is specific to that software while it's on that Node, not
to future or past instances running on that Node.

* created as part of undercloud install process
* can create additional management nodes (F)
 * Resource nodes

 ^ nodes is again confusing layers - nodes are
 what things are deployed to, but they aren't the entry point

 Can you, please be a bit more specific here? I don't understand this note.

By the way, can you get your email client to insert '>' before the text
you are replying to rather than HTML | marks? Hard to tell what I
wrote and what you did :).

By that note I meant, that Nodes are not resources, Resource instances
run on Nodes. Nodes are the generic pool of hardware we can deploy
things onto.

 * searchable by status, name, cpu, memory, and all attributes from
 ironic
 * can be allocated as one of four node types

 Not by users though. We need to stop thinking of this as 'what we do
 to nodes' - Nova/Ironic operate on nodes, we operate on Heat
 templates.

 Discussed in other threads, but I still believe (and I am not alone) that we
 need to allow 'force nodes'.

I'll respond in the other thread :).

 * Unallocated nodes

 This implies an 'allocation' step, that we don't have - how about
 'Idle nodes' or something.

 It can be auto-allocation. I don't see a problem with the 'unallocated' term.

Ok, it's not a biggy. I do think it will frame things poorly and lead
to an expectation about how TripleO works that doesn't match how it
does, but we can change it later if I'm right, and if I'm wrong, well
it won't be the first time :).

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread Adam Young

On 12/09/2013 03:04 PM, David Chadwick wrote:


On 09/12/2013 19:37, Adam Young wrote:

On 12/06/2013 04:44 AM, David Chadwick wrote:

Another alternative is to change role name into role display name,
indicating that the string is only to be used in GUIs, is not guaranteed
to be unique, is set by the role creator, can be any string in any
character set, and is not used by the system anywhere. Only role ID is
used by the system, in policy evaluation, in user-role assignments, in
permission-role assignments etc.

That will make policy much harder to read.  I'd recommend that the role
name continue to be the good name, for both UI and for policy
enforcement.

in which case all role names must be unique

David


That is my understanding, yes, and I think that your proposal covers 
that.  A role name for policy will be the full name, for example 
domain/project/role in the 3 portion version you posted.
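
For illustration, if role names carry the owning service or domain/project as a
prefix, policy rules can match on the full name directly. The rule names below
are modeled on the Nova and Keystone policy files, but the role names are
invented examples, not a proposed scheme:

{
    "compute_extension:admin_actions": "role:NovaEast.admin or role:NovaWest.admin",
    "identity:create_user": "role:keystone/admin"
}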









regards

David

On 05/12/2013 16:21, Tiwari, Arvind wrote:

Hi David,

Let me capture these details in ether pad. I will drop an email after
adding these details in etherpad.

Thanks,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: Thursday, December 05, 2013 4:15 AM
To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List
(not for usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

we are making good progress, but what I dont like about your proposal
below is that the role name is not unique. There can be multiple roles
with the same name, but different IDs, and different scopes. I dont like
this, and I think it would be confusing to users/administrators. I think
the role names should be different as well. This is not difficult to
engineer if the names are hierarchically structured based on the name of
the role creator. The creator might be owner of the resource that is
being scoped, but it need not necessarily be. Assuming it was, then in
your examples below we might have role names of NovaEast.admin and
NovaWest.admin. Since these are strings, policies can be easily adapted
to match on NovaWest.admin instead of admin.

regards

david

On 04/12/2013 17:21, Tiwari, Arvind wrote:

Hi Adam,

I have added my comments in line.

As per my request yesterday and David's proposal, following role-def
data model is looks generic enough and seems innovative to
accommodate future extensions.

{
role: {
  id: 76e72a,
  name: admin, (you can give whatever name you like)
  scope: {
id: ---id--, (ID should be  1 to 1 mapped with resource
in type and must be immutable value)
type: service | file | domain etc., (Type can be any type
of resource which explains the scoping context)
interface:--interface--  (We are still need working on
this field. My idea of this optional field is to indicate the
interface of the resource (endpoint for service, path for File,)
for which the role-def is created and can be
empty.)
  }
}
}

Based on above data model two admin roles for nova for two separate
region wd be as below

{
role: {
  id: 76e71a,
  name: admin,
  scope: {
id: 110, (suppose 110 is Nova serviceId)
interface: 1101, (suppose 1101 is Nova region East
endpointId)
type: service
  }
}
}

{
role: {
  id: 76e72a,
  name: admin,
  scope: {
id: 110,
interface: 1102,(suppose 1102 is Nova region West
endpointId)
type: service
  }
}
}

This way we can keep role-assignments abstracted from resource on
which the assignment is created. This also open doors to have
service and/or endpoint scoped token as I mentioned in
https://etherpad.openstack.org/p/1Uiwcbfpxq.

David, I have updated
https://etherpad.openstack.org/p/service-scoped-role-definition line
#118 explaining the rationale behind the field.
I wd also appreciate your vision on
https://etherpad.openstack.org/p/1Uiwcbfpxq too which is support
https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.



Thanks,
Arvind

-Original Message-
From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, December 03, 2013 6:52 PM
To: Tiwari, Arvind; OpenStack Development Mailing List (not for
usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

I've been thinking about your comment that nested roles are confusing
AT: Thanks for considering my comment about nested role-def.

What if we backed off and said the following:


Some role-definitions are owned by services.  If a Role definition is
owned by a service, in role assignment lists in tokens, those roles we
be prefixd by the service name.  / is a reserved cahracter and weill be
used as the divider between segments of the role definition 

That drops arbitrary nesting, and provides a reasonable 

[openstack-dev] [Climate] Minutes from today meeting

2013-12-09 Thread Sylvain Bauza
Huge thanks to our Russian peers, who made an exceptional effort to join
us today at 2000 UTC (midnight their time). I owe you a beer
:-)

You can find our weekly meeting minutes on
http://eavesdrop.openstack.org/meetings/climate/2013/climate.2013-12-09-20.01.html

Thanks,
-Sylvain


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread Tiwari, Arvind
Hi David,

I have updated the ether pad with below comments.

Regards, 
Arvind



Another alternative is to change role name into role display name, indicating 
that the string is only to be used in GUIs, is not guaranteed to be unique, is 
set by the role creator, can be any string in any character set, and is not 
used by the system anywhere (AT1). Only role ID is used by the system, in 
policy evaluation, in user-role assignments, in permission-role assignments 
etc. (AT2)

AT1 - 
1.  The display name proposal does not seem to work because we cannot force 
services (e.g. Nova, Swift) to use role_id to define their policy.
AT2 - 
1.  Using role_id for policy evaluation is doable, but it would have an 
enormous impact on the token data structure, policy, etc., which won't be 
acceptable to the community.
2.  Permission-role assignments go in the policy file, which is again not 
acceptable for the same reason as #1.
3.  User-role (or group-role) assignments use the role_id, so there won't 
be any change.

I think we should consider a composite key to make the role entity unique while 
still allowing duplicate role_names in the system. Something like the below:

{
  "role": {
    "id": "76e72a",
    "name": "---role_name---",   (resource-namespaced name, e.g. nova.east.admin)
    "scope": {
      "id": "---id---",   (resource_id)
      "type": "service | file | domain etc.",
      "endpoint": "---endpoint---"
    },
    "domain_id": "--id--",   (optional)
    "project_id": "--id--"   (optional)
  }
}
The fields name, scope.id, domain_id and project_id make up the composite key.



-Original Message-
From: Adam Young [mailto:ayo...@redhat.com] 
Sent: Monday, December 09, 2013 1:28 PM
To: David Chadwick; Tiwari, Arvind; OpenStack Development Mailing List (not for 
usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

On 12/09/2013 03:04 PM, David Chadwick wrote:

 On 09/12/2013 19:37, Adam Young wrote:
 On 12/06/2013 04:44 AM, David Chadwick wrote:
 Another alternative is to change role name into role display name,
 indicating that the string is only to be used in GUIs, is not guaranteed
 to be unique, is set by the role creator, can be any string in any
 character set, and is not used by the system anywhere. Only role ID is
 used by the system, in policy evaluation, in user-role assignments, in
 permission-role assignments etc.
 That will make policy much harder to read.  I'd recommend that the role
 name continue to be the good name, for both UI and for policy
 enforcement.
 in which case all role names must be unique

 David

That is my understanding, yes, and I think that your proposal covers 
that.  A role name for policy will be the full name, for example 
domain/project/role in the 3 portion version you posted.





 regards

 David

 On 05/12/2013 16:21, Tiwari, Arvind wrote:
 Hi David,

 Let me capture these details in ether pad. I will drop an email after
 adding these details in etherpad.

 Thanks,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Thursday, December 05, 2013 4:15 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List
 (not for usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 we are making good progress, but what I dont like about your proposal
 below is that the role name is not unique. There can be multiple roles
 with the same name, but different IDs, and different scopes. I dont like
 this, and I think it would be confusing to users/administrators. I think
 the role names should be different as well. This is not difficult to
 engineer if the names are hierarchically structured based on the name of
 the role creator. The creator might be owner of the resource that is
 being scoped, but it need not necessarily be. Assuming it was, then in
 your examples below we might have role names of NovaEast.admin and
 NovaWest.admin. Since these are strings, policies can be easily adapted
 to match on NovaWest.admin instead of admin.

 regards

 david

 On 04/12/2013 17:21, Tiwari, Arvind wrote:
 Hi Adam,

 I have added my comments in line.

 As per my request yesterday and David's proposal, following role-def
 data model is looks generic enough and seems innovative to
 accommodate future extensions.

 {
 role: {
   id: 76e72a,
   name: admin, (you can give whatever name you like)
   scope: {
 id: ---id--, (ID should be  1 to 1 mapped with resource
 in type and must be immutable value)
 type: service | file | domain etc., (Type can be any type
 of resource which explains the scoping context)
 interface:--interface--  (We are still need working on
 this field. My idea of this optional field is to indicate the
 interface of the resource (endpoint for service, path for File,)
 for which the role-def is created and 

Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-09 Thread Robert Collins
On 6 December 2013 14:11, Fox, Kevin M kevin@pnnl.gov wrote:
 I think the security issue can be handled by not actually giving the 
 underlying resource to the user in the first place.

 So, for example, if I wanted a bare metal node's worth of resource for my own 
 containering, I'd ask for a bare metal node and use a blessed image that 
 contains docker+nova bits that would hook back to the cloud. I wouldn't be 
 able to login to it, but containers started on it would be able to access my 
 tenant's networks. All access to it would have to be through nova 
 suballocations. The bare resource would count against my quotas, but nothing 
 run under it.

 Come to think of it, this sounds somewhat similar to what is planned for 
 Neutron service vm's. They count against the user's quota on one level but 
 not all access is directly given to the user. Maybe some of the same 
 implementation bits could be used.

This is a super interesting discussion - thanks for kicking it off.

I think it would be fantastic to be able to use containers for
deploying the cloud rather than full images while still running
entirely OpenStack control up and down the stack.

Briefly, what we need to be able to do that is:

 - the ability to bring up an all in one node with everything on it to
'seed' the environment.
- we currently do that by building a disk image, and manually
running virsh to start it
 - the ability to reboot a machine *with no other machines running* -
we need to be able to power off and on a datacentre - and have the
containers on it come up correctly configured, networking working,
running etc.
 - we explicitly want to be just using OpenStack APIs for all the
deployment operations after the seed is up; so no direct use of lxc or
docker or whathaveyou.

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] Creating recipes for mimicking behavior of nova-networking network managers.

2013-12-09 Thread Brent Eagles

On 12/09/2013 04:05 PM, Brent Eagles wrote:

On 12/04/2013 07:56 PM, Tom Fifield wrote:

On 05/12/13 01:14, Brent Eagles wrote:

Hi,


 snip 


I think that's a great idea.

What kind of format would you like to see the recepies in?

Regards,

Tom


I think a wiki is the right way to start. It will allow us to include
diagrams and access other online content fairly easily. Other
suggestions are welcome of course!

Cheers,

Brent


FWIW: I stocked a wiki with some content from the network manager 
descriptions, if it makes a reasonable starting point.


https://wiki.openstack.org/wiki/NovaNetNeutronRecipes

Cheers,

Brent




Re: [openstack-dev] [ironic][qa] How will ironic tests run in tempest?

2013-12-09 Thread Robert Collins
On 10 December 2013 07:37, Devananda van der Veen
devananda@gmail.com wrote:


 We can test the ironic services, database, and the driver interfaces by
 using our fake driver within a single devstack VM today (I'm not sure the
 exercises for all of this have been written yet, but it's practical to test
 it). OTOH, I don't believe we can test a PXE deploy within a single VM
 today, and need to resume discussions with infra about this.

I think you can with qemu, but only for VMs that are super lightweight
(like cirros); heavier things like OpenStack itself will be
prohibitively slow.

 There are some other aspects of Ironic (IPMI, SOL access, any
 vendor-specific drivers) which we'll need real hardware to test because they
 can't effectively be virtualized. TripleO should cover some (much?) of those
 needs, once they are able to switch to using Ironic instead of
 nova-baremetal.

We can cover that two ways - post deploy feedback, and using machines
to run tests against during gating. The latter is clearly better, but
we'll need a broader set of small machines to do that (we currently
have a set of big machines, which are fantastic for divide and conquer
workloads, but not for anything that needs a full machine).

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Solum] Language pack attributes schema

2013-12-09 Thread Georgy Okrokvertskhov
Hi,

As part of the Language pack workgroup session, we created an etherpad for
the language pack attribute definitions. Please find a first draft of the language
pack attributes here:
https://etherpad.openstack.org/p/Solum-Language-pack-json-format

We have identified a minimal list of attributes which should be supported
by language pack API.

Please provide your feedback and/or ideas in this etherpad. Once it is
reviewed we can use this as a basis for language packs in PoC.

Thanks
Georgy


Re: [openstack-dev] [Solum] [Security]

2013-12-09 Thread Paul Montgomery
Thanks Clayton!  I added your new content.

To all:
The OpenStack Security Guide is a very good resource to read -
http://docs.openstack.org/security-guide/content/openstack_user_guide.html

Apologies for not being up to speed on how things work in OpenStack yet,
but here is a list of topics that I would like to discuss with the
community and reach a conclusion on (guidance on how to get there is greatly
appreciated):

1) Solum community agrees that the OpenStack Security Guide (OSSG) will be
the basis for the Solum security architecture
2) Create a list of Solum security requirements that the community agrees
to take on from the OSSG, and create a separate list of security
measures that operators will be expected to implement on their own
3) Create a list of Solum-specific security requirements that the
community approves.  I recommend that
https://wiki.openstack.org/wiki/Solum/Security becomes this Solum-specific
security requirements list which the OSSG doesn't cover
4) Assign each security requirement a future Solum milestone (beyond
milestone-1); details TBD

Finally, determine the method for putting security requirements into the
work queue.  Sometimes blueprints will work, but some security topics are
not easily confined to a single task or section of code.  Wikis or even
HACKING.rst might be locations for these requirements.



On 12/9/13 9:09 AM, Clayton Coleman ccole...@redhat.com wrote:



- Original Message -
 I created some relatively high level security best practices that I
 thought would apply to Solum.  I don't think it is ever too early to get
 mindshare around security so that developers keep that in mind
throughout
 the project.  When a design decision point could easily go two ways,
 perhaps these guidelines can sway direction towards a more secure path.
 
 This is a living document, please contribute and let's discuss topics.
 I've worn a security hat in various jobs so I'm always interested. :)
 Also, I realize that many of these features may not directly be
 encapsulated by Solum but rather components such as KeyStone or Horizon.
 
 https://wiki.openstack.org/wiki/Solum/Security
 
 I would like to build on this list and create blueprints or tasks based
on
 topics that the community agrees upon.  We will also need to start
 thinking about timing of these features.
 

A few suggested additions:

  # Request load management

  A number of proposed Solum operations are computationally/resource
expensive.  The fulfilment of those operations should be predictable and
linear, and resist denial-of-service or amplification attacks on a per
user / project / service basis as needed.  This may involve queueing
requests, having high water marks for these queues (where additional
requests are rejected until existing requests clear), throttling delays
on queue processing, separate work pools, or other load management
techniques.  The system must remain available for other tenants even if a
subset are targeted or malicious.
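
To make the queueing / high-water-mark idea concrete, here is a minimal
sketch (illustrative only, not proposed Solum code; RequestGate and
OverloadedError are made-up names, and a real implementation would track the
backlog per tenant/project rather than globally):

    import threading

    class OverloadedError(Exception):
        """Raised when the pending backlog is past the high water mark."""

    class RequestGate(object):
        """Reject new work once the pending backlog passes a high water mark."""

        def __init__(self, high_water_mark=100):
            self._lock = threading.Lock()
            self._pending = 0
            self._high_water_mark = high_water_mark

        def submit(self, func, *args, **kwargs):
            with self._lock:
                if self._pending >= self._high_water_mark:
                    # Fail fast so one noisy tenant cannot exhaust the service.
                    raise OverloadedError()
                self._pending += 1
            try:
                return func(*args, **kwargs)
            finally:
                with self._lock:
                    self._pending -= 1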

  # Secure Storage (addendum)

  Confidential data such as credential information should not be stored
unencrypted in non-volatile storage. This is a defense in depth topic to
place a barrier in front of attackers in the event that they gain access
to some of the Solum control plane.

  ADD: Where possible, distribute security responsibilities to user
application storage / execution environments.  Even encrypted data in
non-volatile storage is potentially valuable (especially given the
possibility of bugs in the implementation), creating a high-value target.
Pushing secure data out as far as possible reduces the value of any
individual data store to an attacker.
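
As a small illustration of the "no plaintext credentials at rest" point (a
sketch only, using the third-party cryptography library's Fernet recipe; a
real deployment would source the key from a key manager such as Barbican
rather than co-locating it with the data):

    from cryptography.fernet import Fernet

    def encrypt_credential(key, plaintext_secret):
        # The key must live outside the data store holding the ciphertext.
        return Fernet(key).encrypt(plaintext_secret)

    def decrypt_credential(key, ciphertext):
        return Fernet(key).decrypt(ciphertext)

    key = Fernet.generate_key()
    blob = encrypt_credential(key, b'my-ec2-secret')
    assert decrypt_credential(key, blob) == b'my-ec2-secret'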




Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Robert Collins
On 10 December 2013 09:55, Tzu-Mainn Chen tzuma...@redhat.com wrote:
 * created as part of undercloud install process

 By that note I meant, that Nodes are not resources, Resource instances
 run on Nodes. Nodes are the generic pool of hardware we can deploy
 things onto.

 I don't think resource nodes is intended to imply that nodes are resources; 
 rather, it's supposed to
 indicate that it's a node where a resource instance runs.  It's supposed to 
 separate it from management node
 and unallocated node.

So the question is: are we looking at /nodes/ that have a /current
role/, or are we looking at /roles/ that have some /current nodes/?

My contention is that the role is the interesting thing, and the nodes
is the incidental thing. That is, as a sysadmin, my hierarchy of
concerns is something like:
 A: are all services running
 B: are any of them in a degraded state where I need to take prompt
action to prevent a service outage [might mean many things: - software
update/disk space criticals/a machine failed and we need to scale the
cluster back up/too much load]
 C: are there any planned changes I need to make [new software deploy,
feature request from user, replacing a faulty machine]
 D: are there long term issues sneaking up on me [capacity planning,
machine obsolescence]

If we take /nodes/ as the interesting thing, and what they are doing
right now as the incidental thing, it's much harder to map that onto
the sysadmin concerns. If we start with /roles/ then we can answer:
 A: by showing the list of roles and the summary stats (how many
machines, service status aggregate), role level alerts (e.g. nova-api
is not responding)
 B: by showing the list of roles and more detailed stats (overall
load, response times of services, tickets against services) and a list
of in-trouble instances in each role - instances with alerts against
them (low disk, overload, failed service, early-detection alerts from
hardware)
 C: probably out of our remit for now in the general case, but we need
to enable some things here like replacing faulty machines
 D: by looking at trend graphs for roles (not machines), but also by
looking at the hardware in aggregate - breakdown by age of machines,
summary data for tickets filed against instances that were deployed to
a particular machine

C: and D: are (F) category work, but for all but the very last thing,
it seems clear how to approach this from a roles perspective.

I've tried to approach this using /nodes/ as the starting point, and
after two terrible drafts I've deleted the section. I'd love it if
someone could show me how it would work:)

  * Unallocated nodes
 
  This implies an 'allocation' step, that we don't have - how about
  'Idle nodes' or something.
 
  It can be auto-allocation. I don't see problem with 'unallocated' term.

 Ok, it's not a biggy. I do think it will frame things poorly and lead
 to an expectation about how TripleO works that doesn't match how it
 does, but we can change it later if I'm right, and if I'm wrong, well
 it won't be the first time :).


 I'm interested in what the distinction you're making here is.  I'd rather get 
 things
 defined correctly the first time, and it's very possible that I'm missing a 
 fundamental
 definition here.

So we have:
 - node - a physical general purpose machine capable of running in
many roles. Some nodes may have hardware layout that is particularly
useful for a given role.
 - role - a specific workload we want to map onto one or more nodes.
Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.
 - instance - A role deployed on a node - this is where work actually happens.
 - scheduling - the process of deciding which role is deployed on which node.

The way TripleO works is that we defined a Heat template that lays out
policy: 5 instances of 'overcloud control plane please', '20
hypervisors' etc. Heat passes that to Nova, which pulls the image for
the role out of Glance, picks a node, and deploys the image to the
node.

Note in particular the order: Heat -> Nova -> Scheduler -> Node chosen.

The user action is not 'allocate a Node to 'overcloud control plane',
it is 'size the control plane through heat'.

So when we talk about 'unallocated Nodes', the implication is that
users 'allocate Nodes', but they don't: they size roles, and after
doing all that there may be some Nodes that are - yes - unallocated,
or have nothing scheduled to them. So... I'm not debating that we
should have a list of free hardware - we totally should - I'm debating
how we frame it. 'Available Nodes' or 'Undeployed machines' or
whatever. I just want to get away from talking about something
([manual] allocation) that we don't offer.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud


[openstack-dev] [Ceilometer] Nomination of Sandy Walsh to core team

2013-12-09 Thread Herndon, John Luke
Hi There!

I'm not 100% sure what the process is around electing an individual to the
core team (i.e., can a non-core person nominate someone?). However, I
believe the ceilometer core team could use a member who is more active in
the development of the event pipeline. A core developer in this area will
not only speed up review times for event patches, but will also help keep
new contributions focused on the overall eventing vision.

To that end, I would like to nominate Sandy Walsh from Rackspace to
ceilometer-core. Sandy is one of the original authors of StackTach, and
spearheaded the original stacktach-ceilometer integration. He has been
instrumental in many of my code reviews, and has contributed much of the
existing event storage and querying code.

Thanks,
John Herndon
Software Engineer
HP Cloud




Re: [openstack-dev] TransportURL and virtualhost/exchnage (was Re: [Oslo] Layering olso.messaging usage of config)

2013-12-09 Thread Mark McLoughlin
Hi Gordon,

On Fri, 2013-12-06 at 18:36 +, Gordon Sim wrote:
 On 11/18/2013 04:44 PM, Mark McLoughlin wrote:
  On Mon, 2013-11-18 at 11:29 -0500, Doug Hellmann wrote:
  IIRC, one of the concerns when oslo.messaging was split out was
  maintaining support for existing deployments with configuration files that
  worked with oslo.rpc. We had said that we would use URL query parameters
  for optional configuration values (with the required values going into
  other places in the URL)
 [...]
  I hadn't ever considered exposing all configuration options via the URL.
  We have a lot of fairly random options, that I don't think you need to
  configure per-connection if you have multiple connections in the one
  application.
 
 I certainly agree that not all configuration options may make sense in a 
 URL. However if you will forgive me for hijacking this thread 
 momentarily on a related though tangential question/suggestion...
 
 Would it make sense to (and/or even be possible to) take the 'exchange' 
 option out of the API, and let transports deduce their implied 
 scope/namespace purely from the transport URL in perhaps transport 
 specific ways?
 
 E.g. you could have rabbit://my-host/my-virt-host/my-exchange or 
 rabbit://my-host/my-virt-host or rabbit://my-host//my-exchange, and the 
 rabbit driver would ensure that the given virtualhost and or exchange 
 was used.
 
 Alternatively you could have zmq://my-host:9876 or zmq://my-host:6789 
 to 'scope' 0MQ communication channels, and hypothetically 
 something-new://my-host/xyz, where xyz would be interpreted by the 
 driver in question in a relevant way to scope the interactions over that 
 transport.
 
 Applications using RPC would then assume they were using a namespace 
 free from the danger of collisions with other applications, but this 
 would all be driven through transport specific configuration.
 
 Just a suggestion based on my initial confusion through ignorance on the 
 different scoping mechanisms described in the API docs. It may not be 
 feasible or may have negative consequences I have not in my naivety 
 foreseen.

It's not a completely unreasonable approach to take, but my thinking was
that a transport URL connects you to a conduit which multiple
applications could be sharing, so you need the application to specify its
own application namespace.

e.g. you can have 'scheduler' topics for both Nova and Cinder, and you
need each application to specify its exchange whereas the administrator
is in full control of the transport URL and doesn't need to worry about
application namespacing on the transport.

There are three ways the exchange appears in the API:

  1) A way for an application to set up the default exchange it
 operates in:

 messaging.set_transport_defaults(control_exchange='nova')

 
http://docs.openstack.org/developer/oslo.messaging/transport.html#oslo.messaging.set_transport_defaults

  2) The server can explicitly say what exchange it's listening on:

 target = messaging.Target(exchange='nova',
   topic='scheduler',
   server='myhost')
 server = messaging.get_rpc_server(transport, target, endpoints)

 
http://docs.openstack.org/developer/oslo.messaging/server.html#oslo.messaging.get_rpc_server

  3) The client can explicitly say what exchange to connect to:

 target = messaging.Target(exchange='nova',
   topic='scheduler')
 client = messaging.RPCClient(transport, target)

 
http://docs.openstack.org/developer/oslo.messaging/rpcclient.html#oslo.messaging.RPCClient

But also the admin can override the default exchange so that e.g. you
could put two instances of the application on the same transport, but
with different exchanges.

Now, in saying all that, we know that fanout topics of the same name
will conflict even if the exchange name is different:

  https://bugs.launchpad.net/oslo.messaging/+bug/1173552

So that means the API doesn't work quite as intended yet, but I think
the idea makes sense.

I'm guessing you have a concern about how transports might implement
this application namespacing? Could you elaborate if so?

Thanks,
Mark.




[openstack-dev] anyone aware of networking issues with grizzly live migration of kvm instances?

2013-12-09 Thread Chris Friesen

Hi,

We've got a grizzly setup using quantum networking and libvirt/kvm with 
VIR_MIGRATE_LIVE set.


I was live-migrating an instance back and forth between a couple of 
compute nodes.  It worked fine for maybe half a dozen migrations and 
then after a migration I could no longer ping it.


It appears that packets were making it up to the guest but we never saw 
packets come out of the guest.


Rebooting the instance seems to have restored connectivity.

Anyone aware of something like this?

We're planning on switching to havana when we can, so it'd be nice if 
this was fixed there.


Chris



Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Jay Dobies




So the question is: are we looking at /nodes/ that have a /current
role/, or are we looking at /roles/ that have some /current nodes/?

My contention is that the role is the interesting thing, and the nodes
is the incidental thing. That is, as a sysadmin, my hierarchy of
concerns is something like:
  A: are all services running
  B: are any of them in a degraded state where I need to take prompt
action to prevent a service outage [might mean many things: - software
update/disk space criticals/a machine failed and we need to scale the
cluster back up/too much load]
  C: are there any planned changes I need to make [new software deploy,
feature request from user, replacing a faulty machine]
  D: are there long term issues sneaking up on me [capacity planning,
machine obsolescence]

If we take /nodes/ as the interesting thing, and what they are doing
right now as the incidental thing, it's much harder to map that onto
the sysadmin concerns. If we start with /roles/ then we can answer:
  A: by showing the list of roles and the summary stats (how many
machines, service status aggregate), role level alerts (e.g. nova-api
is not responding)
  B: by showing the list of roles and more detailed stats (overall
load, response times of services, tickets against services) and a list
of in-trouble instances in each role - instances with alerts against
them (low disk, overload, failed service, early-detection alerts from
hardware)
  C: probably out of our remit for now in the general case, but we need
to enable some things here like replacing faulty machines
  D: by looking at trend graphs for roles (not machines), but also by
looking at the hardware in aggregate - breakdown by age of machines,
summary data for tickets filed against instances that were deployed to
a particular machine

C: and D: are (F) category work, but for all but the very last thing,
it seems clear how to approach this from a roles perspective.

I've tried to approach this using /nodes/ as the starting point, and
after two terrible drafts I've deleted the section. I'd love it if
someone could show me how it would work:)


 * Unallocated nodes

This implies an 'allocation' step, that we don't have - how about
'Idle nodes' or something.

It can be auto-allocation. I don't see problem with 'unallocated' term.


Ok, it's not a biggy. I do think it will frame things poorly and lead
to an expectation about how TripleO works that doesn't match how it
does, but we can change it later if I'm right, and if I'm wrong, well
it won't be the first time :).



I'm interested in what the distinction you're making here is.  I'd rather get 
things
defined correctly the first time, and it's very possible that I'm missing a 
fundamental
definition here.


So we have:
  - node - a physical general purpose machine capable of running in
many roles. Some nodes may have hardware layout that is particularly
useful for a given role.
  - role - a specific workload we want to map onto one or more nodes.
Examples include 'undercloud control plane', 'overcloud control
plane', 'overcloud storage', 'overcloud compute' etc.
  - instance - A role deployed on a node - this is where work actually happens.
  - scheduling - the process of deciding which role is deployed on which node.


This glossary is really handy to make sure we're all speaking the same 
language.



The way TripleO works is that we defined a Heat template that lays out
policy: 5 instances of 'overcloud control plane please', '20
hypervisors' etc. Heat passes that to Nova, which pulls the image for
the role out of Glance, picks a node, and deploys the image to the
node.

Note in particular the order: Heat -> Nova -> Scheduler -> Node chosen.

The user action is not 'allocate a Node to 'overcloud control plane',
it is 'size the control plane through heat'.

So when we talk about 'unallocated Nodes', the implication is that
users 'allocate Nodes', but they don't: they size roles, and after
doing all that there may be some Nodes that are - yes - unallocated,


I'm not sure if I should ask this here or to your point above, but what 
about multi-role nodes? Is there any piece in here that says "The policy 
wants 5 instances but I can fit two of them on this existing 
underutilized node and three of them on unallocated nodes", or since it's 
all at the image level do you get just what's in the image and that's the 
finest level of granularity?



or have nothing scheduled to them. So... I'm not debating that we
should have a list of free hardware - we totally should - I'm debating
how we frame it. 'Available Nodes' or 'Undeployed machines' or
whatever. I just want to get away from talking about something
([manual] allocation) that we don't offer.


My only concern here is that we're not talking about cloud users, we're 
talking about admins adminning (we'll pretend it's a word, come with me) 
a cloud. To a cloud user, "give me some power so I can do some stuff" is 
a safe use case if I trust the cloud I'm running on. I trust 

[openstack-dev] Neutron Distributed Virtual Router

2013-12-09 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,
We are in the process of defining the API for the Neutron Distributed Virtual 
Router, and we have a question.

Just wanted to get the feedback from the community before we implement and post 
for review.

We are planning to use the distributed flag for the routers that are supposed 
to be routing traffic locally (both East-West and North-South).
This distributed flag is already there in the neutronclient API, but 
currently only utilized by the Nicira Plugin.
We would like to go ahead and use the same distributed flag and add an 
extension to the router table to accommodate the distributed flag.

Please let us know your feedback.

Thanks.

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.commailto:swaminathan.vasude...@hp.com




Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Steven Hardy
On Tue, Dec 10, 2013 at 08:34:04AM +1300, Robert Collins wrote:
 On 10 December 2013 00:31, Steven Hardy sha...@redhat.com wrote:
  Hi all,
 
  So I've been getting concerned about $subject recently, and based on some
  recent discussions so have some other heat-core folks, so I wanted to start
  a discussion where we can agree and communicate our expectations related to
  nomination for heat-core membership (because we do need more core
  reviewers):
 
 Great! (Because I think you do too :).
 
  The issues I have are:
  - Russell's stats (while very useful) are being used by some projects as
the principal metric related to -core membership (ref TripleO's monthly
cull/name-and-shame, which I am opposed to btw).  This is in some cases
encouraging some stats-seeking in our review process, IMO.
 
 With all due respect - you are entirely wrong, and I am now worried
 about the clarity of my emails vis-a-vis review team makeup. I presume
 you've read them in detail before referencing them of course - have
 you? What can I do to make the actual process clearer?
 
 IMO the primary metric for inclusion is being perceived by a bunch of
 existing -cores as being substantial contributors of effective, useful
 reviews. The reviewstats stats are a useful aide-memoire, nothing
 more. Yes, if you don't do enough reviews for an extended period -
 several months - then you're likely to no longer be perceived as being
 a substantial contributor of effective, useful reviews - and no longer
 aware enough of the current codebase and design to just step back into
 -core shoes.
 
 So it's a gross mischaracterisation to imply that a democratic process
 aided by some [crude] stats has been reduced to name and shame, and a
 rather offensive one.

Yes I have read your monthly core reviewer update emails[1] and I humbly
apologize if you feel my characterization of your process is offensive, it
certainly wasn't intended to be.

All I was trying to articulate is that your message *is* heavily reliant on
stats, you quote them repeatedly, and from my perspective the repeated
references to volume of reviews, along with so frequently naming those to
be removed from core, *could* encourage the wrong behavior.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-December/021101.html

 Anyone can propose members for inclusion in TripleO-core, and we all
 vote - likewise removal. The fact I do a regular summary and propose
 some folk and some cleanups is my way of ensuring that we don't get
 complacent - that we recognise folk who are stepping up, and give them
 guidance if they aren't stepping up in an effective manner - or if
 they are no longer being effective enough to be recognised - in my
 opinion. If the rest of the TripleO core team agrees with my opinion,
 we get changes to -core, if not, we don't. If someone else wants to
 propose a -core member, they are welcome to! Hopefully with my taking
 point on this, that effort isn't needed - but it's still possible.
 
  - Review quality can't be measured mechanically - we have some folks who
contribute fewer, but very high quality reviews, and are also very active
contributors (so knowledge of the codebase is not stale).  I'd like to
see these people do more reviews, but removing people from core just
because they drop below some arbitrary threshold makes no sense to me.
 
 In principle, review quality *can* be measured mechanically, but the
 stats we have do not do that - and cannot. We'd need mechanical follow
 through to root cause analysis for reported defects (including both
 crashes and softer defects like 'not as fast as I want' or 'feature X
 is taking too long to develop') and impact on review and contribution
 rates to be able to assess the impact of a reviewer over time - what
 bugs they prevented entering the code base, how their reviews kept the
 code base maintainable and flexible, and how their interactions with
 patch submitters helped grow the community. NONE of the stats we have
 today even vaguely approximate that.

Right, so right now, review quality can't be measured mechanically, and the
chances that we'll be able to in any meaningful way anytime in the future
are very small.

  So if you're aiming for heat-core nomination, here's my personal wish-list,
  but hopefully others can proide their input and we can update the wiki with
  the resulting requirements/guidelines:
 
 I'm not aiming for heat-core, so this is just kibitzing - take it for
 what it's worth:
 
  - Make your reviews high-quality.  Focus on spotting logical errors,
reducing duplication, consistency with existing interfaces, opportunities
for reuse/simplification etc.  If every review you do is +1, or -1 for a
trivial/cosmetic issue, you are not making a strong case for -core IMHO.
 
 There is a tension here. I agree that many trivial/cosmetic issues are
 not a big deal in themselves. But in aggregate I would argue that a
 codebase with lots of tpyos, poor idioms, 

Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Robert Collins
On 10 December 2013 10:57, Jay Dobies jason.dob...@redhat.com wrote:


 So we have:
   - node - a physical general purpose machine capable of running in
 many roles. Some nodes may have hardware layout that is particularly
 useful for a given role.
   - role - a specific workload we want to map onto one or more nodes.
 Examples include 'undercloud control plane', 'overcloud control
 plane', 'overcloud storage', 'overcloud compute' etc.
   - instance - A role deployed on a node - this is where work actually
 happens.
   - scheduling - the process of deciding which role is deployed on which
 node.


 This glossary is really handy to make sure we're all speaking the same
 language.


 The way TripleO works is that we defined a Heat template that lays out
 policy: 5 instances of 'overcloud control plane please', '20
 hypervisors' etc. Heat passes that to Nova, which pulls the image for
 the role out of Glance, picks a node, and deploys the image to the
 node.

 Note in particular the order: Heat -> Nova -> Scheduler -> Node chosen.

 The user action is not 'allocate a Node to 'overcloud control plane',
 it is 'size the control plane through heat'.

 So when we talk about 'unallocated Nodes', the implication is that
 users 'allocate Nodes', but they don't: they size roles, and after
 doing all that there may be some Nodes that are - yes - unallocated,


 I'm not sure if I should ask this here or to your point above, but what
 about multi-role nodes? Is there any piece in here that says "The policy
 wants 5 instances but I can fit two of them on this existing underutilized
 node and three of them on unallocated nodes", or since it's all at the image
 level do you get just what's in the image and that's the finest level of
 granularity?

The way we handle that today is to create a composite role that says
'overcloud-compute+cinder storage', for instance - because image is
the level of granularity. If/when we get automatic container
subdivision - see the other really interesting long-term thread - we
could subdivide, but I'd still do that using image as the level of
granularity, it's just that we'd have the host image + the container
images.

 or have nothing scheduled to them. So... I'm not debating that we
 should have a list of free hardware - we totally should - I'm debating
 how we frame it. 'Available Nodes' or 'Undeployed machines' or
 whatever. I just want to get away from talking about something
 ([manual] allocation) that we don't offer.


 My only concern here is that we're not talking about cloud users, we're
 talking about admins adminning (we'll pretend it's a word, come with me) a
 cloud. To a cloud user, "give me some power so I can do some stuff" is a
 safe use case if I trust the cloud I'm running on. I trust that the cloud
 provider has taken the proper steps to ensure that my CPU isn't in New York
 and my storage in Tokyo.

Sure :)

 To the admin setting up an overcloud, they are the ones providing that trust
 to eventual cloud users. That's where I feel like more visibility and
 control are going to be desired/appreciated.

 I admit what I just said isn't at all concrete. Might even be flat out
 wrong. I was never an admin, I've just worked on sys management software
 long enough to have the opinion that their levels of OCD are legendary. I
 can't shake this feeling that someone is going to slap some fancy new
 jacked-up piece of hardware onto the network and have a specific purpose
 they are going to want to use it for. But maybe that's antiquated thinking
 on my part.

I think concrete use cases are the only way we'll get light at the end
of the tunnel.

So let's say someone puts a new bit of fancy kit onto their network and
wants it for e.g. GPU VM instances only. That's a reasonable desire.

The basic stuff we're talking about so far is just about saying each
role can run on some set of undercloud flavors. If that new bit of kit
has the same coarse metadata as other kit, Nova can't tell it apart.
So the way to solve the problem is:
 - a) teach Ironic about the specialness of the node (e.g. a tag 'GPU')
 - b) teach Nova that there is a flavor that maps to the presence of
that specialness, and
 - c) teach Nova that other flavors may not map to that specialness

then in Tuskar whatever Nova configuration is needed to use that GPU
is a special role ('GPU compute' for instance) and only that role
would be given that flavor to use. That special config probably means
being in a host aggregate, with an overcloud flavor that specifies
that aggregate, which means at the TripleO level we need to put the
aggregate in the config metadata for that role, and the admin does a
one-time setup in the Nova Horizon UI to configure their GPU compute
flavor.

This isn't 'manual allocation' to me - it's surfacing the capabilities
from the bottom ('has GPU') and the constraints from the top ('needs
GPU') and letting Nova and Heat sort it out.
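
A rough sketch of that surfacing/constraining pattern using python-novaclient
(illustrative only: the 'gpu-hosts' aggregate, 'gpu.compute' flavor and 'gpu'
key are made-up names, the credentials are placeholders, and matching flavor
extra_specs to aggregate metadata assumes the scheduler has the
AggregateInstanceExtraSpecsFilter enabled):

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'password', 'admin',
                         'http://undercloud.example.com:5000/v2.0')

    # Surface the capability from the bottom: group the special nodes into a
    # host aggregate and tag it.
    agg = nova.aggregates.create('gpu-hosts', None)
    nova.aggregates.add_host(agg.id, 'gpu-node-01')
    nova.aggregates.set_metadata(agg.id, {'gpu': 'true'})

    # Express the constraint from the top: a flavor whose extra_specs only
    # match that aggregate, so only the 'GPU compute' role lands there.
    flavor = nova.flavors.create('gpu.compute', 8192, 4, 80)
    flavor.set_keys({'gpu': 'true'})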

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged 

Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread David Chadwick
Hi Arvind

this is still mixing up two separate concepts: naming and policy
constraints. Scope is a policy constraint, but in the proposal below it is
also part of the unique naming of the role. The fields making up both
concepts need to be separate (e.g. what if 2 different roles from the
same domain and project applied to two different scopes but it just so
happened that the ids of the two resources were the same? They would end
up still having the same unique name.)

I would therefore add service_id = --id--   (optional)
after project_id. This can ensure that (composite) role names (keys) are
unique.

regards

David


On 09/12/2013 20:36, Tiwari, Arvind wrote:
 Hi David,
 
 I have updated the ether pad with below comments.
 
 Regards, 
 Arvind
 
 
 
 Another alternative is to change role name into role display name, 
 indicating that the string is only to be used in GUIs, is not guaranteed to 
 be unique, is set by the role creator, can be any string in any character 
 set, and is not used by the system anywhere (AT1). Only role ID is used by 
 the system, in policy evaluation, in user-role assignments, in 
 permission-role assignments etc. (AT2)
 
 AT1 - 
 1. The display name proposal does not seem to work because we cannot force
 services (e.g. Nova, Swift) to use role_id to define their policy.
 AT2 - 
 1. Using role_id for policy evaluation is doable but it will have an
 enormous impact on the token data structure, policy etc., which won't be
 acceptable to the community.
 2. Permission-role assignments go with the policy file, which is again not
 acceptable for the same reason as #1.
 3. User-role (or group-role) assignments use the role_id, so there won't
 be any change.
 
 I think we should consider a composite key to make the role entity unique and
 keep allowing duplicate role_names in the system. Something like below:
 
 {
   role: {
 id: 76e72a,
 name: ---role_name---, (resource name spaced name e.g. 
 nova.east.admin)
 scope: {
   id: ---id---, (resource_id)
   type: service | file | domain etc.,
   endpoint:---endpoint--- 
 }
  domain_id = --id--,  (optional)
  project_id = --id--  (optional)
   }
 }
 Fields name, scope.id, domain_id and project_id make the composite key.
 
 
 
 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Monday, December 09, 2013 1:28 PM
 To: David Chadwick; Tiwari, Arvind; OpenStack Development Mailing List (not 
 for usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 On 12/09/2013 03:04 PM, David Chadwick wrote:

 On 09/12/2013 19:37, Adam Young wrote:
 On 12/06/2013 04:44 AM, David Chadwick wrote:
 Another alternative is to change role name into role display name,
 indicating that the string is only to be used in GUIs, is not guaranteed
 to be unique, is set by the role creator, can be any string in any
 character set, and is not used by the system anywhere. Only role ID is
 used by the system, in policy evaluation, in user-role assignments, in
 permission-role assignments etc.
 That will make policy much harder to read.  I'd recommend that the role
 name continue to be the good name, for both UI and for policy
 enforcement.
 in which case all role names must be unique

 David
 
 That is my understanding, yes, and I think that your proposal covers 
 that.  A role name for policy will be the full name, for example 
 domain/project/role in the 3 portion version you posted.
 




 regards

 David

 On 05/12/2013 16:21, Tiwari, Arvind wrote:
 Hi David,

 Let me capture these details in ether pad. I will drop an email after
 adding these details in etherpad.

 Thanks,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Thursday, December 05, 2013 4:15 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List
 (not for usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 we are making good progress, but what I don't like about your proposal
 below is that the role name is not unique. There can be multiple roles
 with the same name, but different IDs, and different scopes. I don't like
 this, and I think it would be confusing to users/administrators. I think
 the role names should be different as well. This is not difficult to
 engineer if the names are hierarchically structured based on the name of
 the role creator. The creator might be owner of the resource that is
 being scoped, but it need not necessarily be. Assuming it was, then in
 your examples below we might have role names of NovaEast.admin and
 NovaWest.admin. Since these are strings, policies can be easily adapted
 to match on NovaWest.admin instead of admin.

 regards

 david

 On 04/12/2013 17:21, Tiwari, Arvind wrote:
 Hi Adam,

 I have added my comments in line.

 As per my request yesterday and 

Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync

2013-12-09 Thread Mark McLoughlin
On Mon, 2013-12-09 at 11:11 -0600, Ben Nemec wrote:
 On 2013-12-09 10:55, Sean Dague wrote:
  On 12/09/2013 11:38 AM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2013-12-09 08:17:45 -0800:
  On 12/06/2013 05:40 PM, Ben Nemec wrote:
  On 2013-12-06 16:30, Clint Byrum wrote:
  Excerpts from Ben Nemec's message of 2013-12-06 13:38:16 -0800:
  
  
  On 2013-12-06 15:14, Yuriy Taraday wrote:
  
  Hello, Sean.
  
   I get the issue with the upgrade path. Users don't want to update
   config unless they are forced to do so.
  But introducing code that weakens security and let it stay is an
  unconditionally bad idea.
  It looks like we have to weigh two evils: having troubles 
  upgrading
  and lessening security. That's obvious.
  
  Here are my thoughts on what we can do with it:
  1. I think we should definitely force user to do appropriate
  configuration to let us use secure ways to do locking.
  2. We can wait one release to do so, e.g. issue a deprecation
  warning now and force user to do it the right way later.
  3. If we are going to do 2. we should do it in the service that 
  is
  affected not in the library because library shouldn't track 
  releases
  of an application that uses it. It should do its thing and do it
  right (secure).
  
  So I would suggest to deal with it in Cinder by importing
  'lock_path' option after parsing configs and issuing a deprecation
  warning and setting it to tempfile.gettempdir() if it is still 
  None.
  
  This is what Sean's change is doing, but setting lock_path to
  tempfile.gettempdir() is the security concern.
  
  Yuriy's suggestion is that we should let Cinder override the config
  variable's default with something insecure. Basically only 
  deprecate
  it in Cinder's world, not oslo's. That makes more sense from a 
  library
  standpoint as it keeps the library's expected interface stable.
  
  Ah, I see the distinction now.  If we get this split off into
  oslo.lockutils (which I believe is the plan), that's probably what 
  we'd
  have to do.
  
  
  Since there seems to be plenty of resistance to using /tmp by 
  default,
  here is my proposal:
  
  1) We make Sean's change to open files in append mode. I think we 
  can
  all agree this is a good thing regardless of any config changes.
  
  2) Leave lockutils broken in Icehouse if lock_path is not set, as 
  I
  believe Mark suggested earlier. Log an error if we find that
  configuration. Users will be no worse off than they are today, and 
  if
  they're paying attention they can get the fixed lockutils behavior
  immediately.
  
  Broken how? Broken in that it raises an exception, or broken in 
  that it
  carries a security risk?
  
  Broken as in external locks don't actually lock.  If we fall back to
  using a local semaphore it might actually be a little better because
  then at least the locks work within a single process, whereas before
  there was no locking whatsoever.
  
   Right, so broken as in doesn't actually lock, potentially 
  completely
  scrambles the user's data, breaking them forever.
  
  
  Things I'd like to see OpenStack do in the short term, ranked in 
  ascending
  order of importance:
  
  4) Upgrade smoothly.
  3) Scale.
  2) Help users manage external risks.
  1) Not do what Sean described above.
  
  I mean, how can we even suggest silently destroying integrity?
  
  I suggest merging Sean's patch and putting a warning in the release
  notes that running without setting this will be deprecated in the next
  release. If that is what this is preventing this debate should not 
  have
  happened, and I sincerely apologize for having delayed it. I believe 
  my
  mistake was assuming this was something far more trivial than without
  this patch we destroy users' data.
  
  I thought we were just talking about making upgrades work. :-P
  
   Honestly, I haven't looked at exactly how bad the corruption would be. But
  we are locking to handle something around simultaneous db access in
  cinder, so I'm going to assume the worst here.
 
 Yeah, my understanding is that this doesn't come up much in actual use 
 because lock_path is set in most production environments.  Still, 
 obviously not cool when your locks don't lock, which is why we made the 
 unpleasant change to require lock_path.  It wasn't something we did 
 lightly (I even sent something to the list before it merged, although I 
 got no responses at the time).

What would happen if we required each service to set a sane default
here?

e.g. for Nova, would a dir under $state_path work? It just needs to be a
directory that isn't world-writeable but is writeable by whatever user
Nova is running as.

Practically speaking, this just means that Cinder needs to do:

  lockutils.set_defaults(lock_path=os.path.join(CONF.state_path, 'tmp'))

and the current behaviour of lockutils.py is fine.
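
For illustration, a sketch of what that per-service default plus an external
lock could look like in Cinder (assuming the oslo-incubator lockutils API
being discussed here, i.e. set_defaults() and synchronized(); not an actual
patch):

    import os

    from oslo.config import cfg

    from cinder.openstack.common import lockutils

    CONF = cfg.CONF

    def setup_lock_path():
        # Called once at service startup, after config is parsed: external
        # locks then land under Cinder's own state_path instead of /tmp.
        lockutils.set_defaults(lock_path=os.path.join(CONF.state_path, 'tmp'))

    @lockutils.synchronized('volume-update', 'cinder-', external=True)
    def update_volume(volume_id):
        # With lock_path defaulted above, this file-based lock actually
        # serializes access across cinder processes.
        pass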

Hmm, that feels like I'm missing something?

Mark.



Re: [openstack-dev] Retiring reverify no bug

2013-12-09 Thread Mark McLoughlin
On Mon, 2013-12-09 at 10:49 -0800, James E. Blair wrote:
 Hi,
 
 On Wednesday December 11, 2013 we will remove the ability to use
 reverify no bug to re-trigger gate runs for changes that have failed
 tests.
 
 This was previously discussed[1] on this list.  There are a few key
 things to keep in mind:
 
 * This only applies to reverify, not recheck.  That is, it only
   affects the gate pipeline, not the check pipeline.  You can still use
   recheck no bug to make sure that your patch still works.
 
 * Core reviewers can still resubmit a change to the queue by leaving
   another Approved vote.  Please don't abuse this to bypass the intent
   of this change: to help identify and close gate-blocking bugs.
 
 * You may still use reverify bug # to re-enqueue if there is a bug
   report for a failure, and of course you are encouraged to file a bug
   report if there is not.  Elastic-recheck is doing a great job of
   indicating which bugs might have caused a failure.
 
 As discussed in the previous thread, the goal is to prevent new
 transient bugs from landing in code by ensuring that if a change fails a
 gate test that it is because of a known bug, and not because it's
 actually introducing a bug, so please do your part to help in this
 effort.

I wonder if we could make it standard practice for an infra bug to get
filed whenever there's a known issue causing gate jobs to fail, so that
everyone can use that bug number when re-triggering?

(Apologies if that's already happening)

I guess we'd want to broadcast that bug number with statusbot?

Basically, the times I've used 'reverify no bug' is where I see some job
failures that look like an infra issue that was already resolved.

Thanks,
Mark.




Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Robert Collins
On 10 December 2013 11:04, Steven Hardy sha...@redhat.com wrote:

 So it's a gross mischaracterisation to imply that a democratic process
 aided by some [crude] stats has been reduced to name  shame, and a
 rather offensive one.

 Yes I have read your monthly core reviewer update emails[1] and I humbly
 apologize if you feel my characterization of your process is offensive, it
 certainly wasn't intended to be.

Thanks; enough said here - let's move on :)


 I think you have a very different definition of -core to the rest of
 OpenStack. I was actually somewhat concerned about the '+2 Guard the
 gate stuff' at the summit because it's so easily misinterpreted - and
 there is a meme going around (I don't know if it's true or not) that
 some people are assessed - performance review stuff within vendor
 organisations - on becoming core reviewers.

 Core reviewer is not intended to be a gateway to getting features or
 ideas into OpenStack projects. It is solely a volunteered contribution
 to the project: helping the project accept patches with confidence
 about their long term integrity: providing explanation and guidance to
 people that want to contribute patches so that their patch can be
 accepted.

 We need core reviewers who:
 1. Have deep knowledge of the codebase (to identify non-cosmetic structural
 issues)

mmm, core review is a place to identify really significant structural
issues, but it's not ideal. Because - you do a lot of work before you
push code for review, particularly if it's one's first contribution to
a codebase, and that means a lot of waste when the approach is wrong.
Agree that having -core that can spot this is good, but not convinced
that it's a must.

 2. Have used and debugged the codebase (to identify usability, interface
 correctness or other stuff which isn't obvious unless you're using the
 code)

If I may: 2a) Have deployed and used the codebase in production, at
scale. This may conflict with 1) in terms of individual expertise.

 3. Have demonstrated a commitment to the project (so we know they
 understand the mid-term goals and won't approve stuff which is misaligned
 with those goals)

I don't understand this. Are you saying you'd turn down contributions
that are aligned with the long term Heat vision because they don't
advance some short term goal? Or are you saying you'd turn down
contributions because they actively harm short term goals?

Seems to me that that commitment to the project is really orthogonal
to either of those things - folk may have different interpretations
about what the project needs while still being entirely committed to
it! Perhaps you mean 'shared understanding of current goals and
constraints' ? Or something like that? I am niggling on this point
because I wouldn't want someone who is committed to TripleO but
focused on the big picture to be accused of not being committed to
TripleO.

 All of those are aided and demonstrated by helping out doing a few
 bugfixes, along with reviews.

1) might be - depends on the bug. 2) really isn't, IME anyhow, and
3) seems entirely disconnected from both reviews and bugfixes, and
more about communication.



 All I wanted to do was give folks a heads-up that IMHO the stats aren't the
 be-all-and-end-all, and here are a few things which you might want to
 consider doing, if you want to become a core reviewer in due course.

Ok, that's a much better message than the one you appeared to start
with, which sounded like 'stats are bad, here's what it really takes'.

Some food for thought though: being -core is a burden, it's not a
privilege. I think we should focus more on a broad community of
effective reviewers, rather than on what it takes to be in -core.
Being in -core should simply reflect that someone is currently active,
aware of all the review needs of the project, capable (informed,
familiar with the code and design etc) of assessing +2 and APRV
status, and responsible enough to trust with that. Folk that review
and never become -core are still making a significant contribution to
code quality, if they are doing it right - by which I mean:

 - following up and learning what the norms are for that project (they
differ project to project in OpenStack)
 - following up and learning about the architectural issues and design
issues they need to watch out for

Someone doing that quickly becomes able to act as a valuable first-pass
filter for the two +2 votes that actually land a patch; speeding up
the +2 review time [fewer round trips to +2 land == more +2 bandwidth]
and helping the project as a whole.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] Questions on logging setup for development

2013-12-09 Thread Vishvananda Ishaya

On Dec 6, 2013, at 2:09 PM, Paul Michali p...@cisco.com wrote:

 Hi,
 
 For Neutron, I'm creating a module (one of several eventually) as part of a 
 new blueprint I'm working on, and the associated unit test module. I'm in 
 really early development, and just running this UT module as a standalone 
 script (rather than through tox). It allows me to do TDD pretty quickly on 
 the code I'm developing (that's the approach I'm taking right now - fingers 
 crossed :).
 
 In the module, I did an import of the logging package and when I run UTs I 
 can see messaging that would occur, if desired.
 
 I have the following hack to turn debug-level logging on and off:
 
 if False:  # Debugging
     logging.basicConfig(format='%(asctime)-15s [%(levelname)s] %(message)s',
                         level=logging.DEBUG)
 
 I made the log calls the same as what would be in other Neutron code, so that 
 I don't have to change the code later, as I start to fold it into the Neutron 
 code. However, I'd like to import the neutron.openstack.common.log package in 
 my code, so that the code will be identical to what is needed once I start 
 running this code as part of a process, but I had some questions…
 
 When using neutron.openstack.common.log, how do I toggle the debug level 
 logging on, if I run this standalone, as I'm doing now?
 Is there a way to do it, without adding in the above conditional logic to the 
 production code? Maybe put something in the UT module?

I believe you can just make sure to set_override on the CONF option to True and 
then call logging.setup('neutron')

Here is an example with the nova code

>>> from nova.openstack.common import log as logging
>>> LOG = logging.getLogger(__name__)
>>> LOG.debug('foo')
>>> logging.CONF.set_override('debug', True)
>>> logging.setup('nova')
>>> LOG.debug('foo')
2013-12-09 14:25:21.220 72011 DEBUG __main__ [-] foo module input:2
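
Applied to the standalone unit-test case in the question, that might look
something like the following (a sketch, assuming neutron's copy of the
oslo-incubator log module exposes the same setup()/CONF interface as nova's
above):

    from neutron.openstack.common import log as logging

    LOG = logging.getLogger(__name__)

    def enable_debug_logging():
        # Toggle debug output for standalone runs without the 'if False:'
        # hack in the production code.
        logging.CONF.set_override('debug', True)
        logging.setup('neutron')

    if __name__ == '__main__':
        enable_debug_logging()
        LOG.debug('debug logging enabled for standalone test run')
        # ... run the unit tests here ...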

Vish


 
 I can always continue as is, and then switch things over later (changing the 
 import line and pulling the if clause), once I have things mostly done, and 
 want to run as part of Neutron, but it would be nice if I can find a way to 
 do that up front to avoid changes later.
 
 Thoughts? Suggestions?
 
 Thanks!
 
 
 PCM (Paul Michali)
 
 MAIL  p...@cisco.com
 IRCpcm_  (irc.freenode.net)
 TW@pmichali
 GPG key4525ECC253E31A83
 Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 





Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-09 Thread Tiwari, Arvind
I think that makes sense; how does the data model below look?

{
    "role": {
        "id": "76e72a",
        "name": "---role_name---",       (resource namespaced name, e.g. nova.east.admin)
        "scope": {
            "id": "---id---",            (resource_id)
            "type": "service | file | domain etc.",
            "endpoint": "---endpoint---"
        },
        "domain_id": "--id--",           (optional)
        "project_id": "--id--",          (optional)
        "service_id": "--id--"           (optional)
    }
}

Q. What if two (or more) endpoints want to have the same role_name for a service
(nova.east.admin, nova.west.admin, nova.north.admin, ...)?

Regards,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Monday, December 09, 2013 3:15 PM
To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List (not for 
usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

this is still mixing up two separate concepts: naming and policy
constraints. Scope is a policy constraint, but in the proposal below it is
also part of the unique naming of the role. The fields making up both
concepts need to be separate (e.g. what if 2 different roles from the
same domain and project applied to two different scopes but it just so
happened that the ids of the two resources were the same? They would end
up still having the same unique name.)

I would therefore add service_id = --id--   (optional)
after project_id. This can ensure that (composite) role names (keys) are
unique.

regards

David


On 09/12/2013 20:36, Tiwari, Arvind wrote:
 Hi David,
 
 I have updated the ether pad with below comments.
 
 Regards, 
 Arvind
 
 
 
 Another alternative is to change role name into role display name, 
 indicating that the string is only to be used in GUIs, is not guaranteed to 
 be unique, is set by the role creator, can be any string in any character 
 set, and is not used by the system anywhere (AT1). Only role ID is used by 
 the system, in policy evaluation, in user-role assignments, in 
 permission-role assignments etc. (AT2)
 
 AT1 - 
 1. The display name proposal does not seem to work because we cannot force
 services (e.g. Nova, Swift) to use role_id to define their policy.
 AT2 - 
 1. Using role_id for policy evaluation is doable but it will have an
 enormous impact on the token data structure, policy etc., which won't be
 acceptable to the community.
 2. Permission-role assignments go with the policy file, which is again not
 acceptable for the same reason as #1.
 3. User-role (or group-role) assignments use the role_id, so there won't
 be any change.
 
 I think we should consider a composite key to make the role entity unique and
 keep allowing duplicate role_names in the system. Something like below:
 
 {
   role: {
 id: 76e72a,
 name: ---role_name---, (resource name spaced name e.g. 
 nova.east.admin)
 scope: {
   id: ---id---, (resource_id)
   type: service | file | domain etc.,
   endpoint:---endpoint--- 
 }
  domain_id = --id--,  (optional)
  project_id = --id--  (optional)
   }
 }
 Fields name, scope.id, domain_id and project_id make the composite key.
 
 
 
 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Monday, December 09, 2013 1:28 PM
 To: David Chadwick; Tiwari, Arvind; OpenStack Development Mailing List (not 
 for usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 On 12/09/2013 03:04 PM, David Chadwick wrote:

 On 09/12/2013 19:37, Adam Young wrote:
 On 12/06/2013 04:44 AM, David Chadwick wrote:
 Another alternative is to change role name into role display name,
 indicating that the string is only to be used in GUIs, is not guaranteed
 to be unique, is set by the role creator, can be any string in any
 character set, and is not used by the system anywhere. Only role ID is
 used by the system, in policy evaluation, in user-role assignments, in
 permission-role assignments etc.
 That will make policy much harder to read.  I'd recommend that the role
 name continue to be the good name, for both UI and for policy
 enforcement.
 in which case all role names must be unique

 David
 
 That is my understanding, yes, and I think that your proposal covers 
 that.  A role name for policy will be the full name, for example 
 domain/project/role in the 3 portion version you posted.
 




 regards

 David

 On 05/12/2013 16:21, Tiwari, Arvind wrote:
 Hi David,

 Let me capture these details in ether pad. I will drop an email after
 adding these details in etherpad.

 Thanks,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Thursday, December 05, 2013 4:15 AM
 To: Tiwari, Arvind; Adam Young; OpenStack Development Mailing List
 (not for usage questions)
 Cc: Henry Nash; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role 

[openstack-dev] [keystone][heat] ec2tokens, v3 credentials and request signing

2013-12-09 Thread Steven Hardy
Hi all,

I have some queries about what the future of the ec2tokens API is for
keystone. For context, we're looking to move Heat from a horrible mixture of
v2/v3 keystone to just v3, and currently I'm not sure we can:

- The v3/credentials API allows ec2tokens to be stored (if you
  create the access/secret key yourself), but it requires admin, which
  creating an ec2-keypair via the v2 API does not?

- There is no v3 interface for validating signed requests like you can via
  POST v2.0/ec2tokens AFAICT?

- Validating requests signed with ec2 credentials stored via v3/credentials
  does not work, if you try to use v2.0/ec2tokens, should it?

So my question is basically, what's the future of ec2tokens, is there some
alternative in the pipeline for satisfying the same use-case?
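
For reference, storing an ec2-style access/secret pair via the v3 credentials
API looks roughly like the sketch below (the endpoint, token and IDs are
placeholders; this is the call that, today, appears to require admin):

import json
import requests

KEYSTONE = 'http://keystone.example.com:5000'
HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
           'Content-Type': 'application/json'}

body = {
    'credential': {
        'type': 'ec2',
        'user_id': 'USER_ID',
        'project_id': 'PROJECT_ID',
        # Unlike the v2 ec2-keypair call, the caller has to generate the
        # access/secret pair itself before storing it.
        'blob': json.dumps({'access': 'ACCESS_KEY',
                            'secret': 'SECRET_KEY'}),
    }
}

resp = requests.post(KEYSTONE + '/v3/credentials',
                     headers=HEADERS, data=json.dumps(body))
resp.raise_for_status()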

The main issues we have in Heat:

- We want to continue supporting AWS style signed requests for our
  cloudformation-compatible API, which is currently done via ec2tokens.

- ec2 keypairs are currently the only method of getting a non-expiring
  credential which we can deploy in-instance, that is no longer possible
  via the v3 API for the reasons above.

What is the recommended way for us to deploy a (non expiring) credential in
an instance (ideally derived from a trust or otherwise role-limited), then
use that credential to authenticate against our API?

My first thought is that the easiest solution would be to allow trust
scoped tokens to optionally be configured to not expire (until we delete
the trust when we delete the Heat stack)?

Can anyone offer any suggestions on a v3 compatible way to do this?

I did start looking at oauth as a possible solution, but it seems the
client support is not yet there, and there's no auth middleware we can use
for authenticating requests containing oauth credentials, any ideas on the
status of this would be most helpful!

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2013-12-09 Thread Tzu-Mainn Chen
Thanks for the explanation!

I'm going to claim that the thread revolves around two main areas of 
disagreement.  Then I'm going
to propose a way through:

a) Manual Node Assignment

I think that everyone is agreed that automated node assignment through 
nova-scheduler is by
far the most ideal case; there's no disagreement there.

The disagreement comes from whether we need manual node assignment or not.  I 
would argue that we
need to step back and take a look at the real use case: heterogeneous nodes.  
If there are literally
no characteristics that differentiate nodes A and B, then why do we care which 
gets used for what?  Why
do we need to manually assign one?

If we can agree on that, then I think it would be sufficient to say that we 
want a mechanism to allow
UI users to deal with heterogeneous nodes, and that mechanism must use 
nova-scheduler.  In my mind,
that's what resource classes and node profiles are intended for.

One possible objection might be: nova scheduler doesn't have the appropriate 
filter that we need to
separate out two nodes.  In that case, I would say that needs to be taken up 
with nova developers.


b) Terminology

It feels a bit like some of the disagreement comes from people using different 
words for the same thing.
For example, the wireframes already detail a UI where Robert's roles come 
first, but I think that message
was confused because I mentioned node types in the requirements.

So could we come to some agreement on what the most exact terminology would be? 
 I've listed some examples below,
but I'm sure there are more.

node type | role
management node | ?
resource node | ?
unallocated | available | undeployed
create a node distribution | size the deployment
resource classes | ?
node profiles | ?

Mainn

- Original Message -
 On 10 December 2013 09:55, Tzu-Mainn Chen tzuma...@redhat.com wrote:
  * created as part of undercloud install process
 
  By that note I meant, that Nodes are not resources, Resource instances
  run on Nodes. Nodes are the generic pool of hardware we can deploy
  things onto.
 
  I don't think resource nodes is intended to imply that nodes are
  resources; rather, it's supposed to
  indicate that it's a node where a resource instance runs.  It's supposed to
  separate it from management node
  and unallocated node.
 
 So the question is are we looking at /nodes/ that have a /current
 role/, or are we looking at /roles/ that have some /current nodes/.
 
 My contention is that the role is the interesting thing, and the nodes
 is the incidental thing. That is, as a sysadmin, my hierarchy of
 concerns is something like:
  A: are all services running
  B: are any of them in a degraded state where I need to take prompt
 action to prevent a service outage [might mean many things: - software
 update/disk space criticals/a machine failed and we need to scale the
 cluster back up/too much load]
  C: are there any planned changes I need to make [new software deploy,
 feature request from user, replacing a faulty machine]
  D: are there long term issues sneaking up on me [capacity planning,
 machine obsolescence]
 
 If we take /nodes/ as the interesting thing, and what they are doing
 right now as the incidental thing, it's much harder to map that onto
 the sysadmin concerns. If we start with /roles/ then can answer:
  A: by showing the list of roles and the summary stats (how many
 machines, service status aggregate), role level alerts (e.g. nova-api
 is not responding)
  B: by showing the list of roles and more detailed stats (overall
 load, response times of services, tickets against services
  and a list of in trouble instances in each role - instances with
 alerts against them - low disk, overload, failed service,
 early-detection alerts from hardware)
  C: probably out of our remit for now in the general case, but we need
 to enable some things here like replacing faulty machines
  D: by looking at trend graphs for roles (not machines), but also by
 looking at the hardware in aggregate - breakdown by age of machines,
 summary data for tickets filed against instances that were deployed to
 a particular machine
 
 C: and D: are (F) category work, but for all but the very last thing,
 it seems clear how to approach this from a roles perspective.
 
 I've tried to approach this using /nodes/ as the starting point, and
 after two terrible drafts I've deleted the section. I'd love it if
 someone could show me how it would work:)
 
   * Unallocated nodes
  
   This implies an 'allocation' step, that we don't have - how about
   'Idle nodes' or something.
  
   It can be auto-allocation. I don't see problem with 'unallocated' term.
 
  Ok, it's not a biggy. I do think it will frame things poorly and lead
  to an expectation about how TripleO works that doesn't match how it
  does, but we can change it later if I'm right, and if I'm wrong, well
  it won't be the first time :).
 
 
  I'm interested in what the distinction you're making 

Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-09 Thread Russell Bryant
On 12/09/2013 05:16 PM, Gordon Sim wrote:
 On 12/09/2013 07:15 PM, Russell Bryant wrote:
 On 12/09/2013 12:56 PM, Gordon Sim wrote:
 In the case of Nova (and others that followed Nova's messaging
 patterns), I firmly believe that for scaling reasons, we need to move
 toward it becoming the norm to use peer-to-peer messaging for most
 things.  For example, the API and conductor services should be talking
 directly to compute nodes instead of through a broker.

 Is scale the only reason for preferring direct communication? I don't
 think an intermediary based solution _necessarily_ scales less
 effectively (providing it is distributed in nature, which for example is
 one of the central aims of the dispatch router in Qpid).

 That's not to argue that peer-to-peer shouldn't be used, just trying to
 understand all the factors.

 Scale is the primary one.  If the intermediary based solution is easily
 distributed to handle our scaling needs, that would probably be fine,
 too.  That just hasn't been our experience so far with both RabbitMQ and
 Qpid.
 
 Understood. The Dispatch Router was indeed created from an understanding
 of the limitations and drawbacks of the 'federation' feature of qpidd
 (which was the primary mechanism for scaling beyond one broker) as well
 as from lessons learned around the difficulties of message replication and
 storage.

Cool.  To make the current situation worse, AFAIK, we've never been able
to make Qpid federation work at all for OpenStack.  That may be due to
the way we use Qpid, though.

For RabbitMQ, I know people are at least using active-active clustering
of the broker.

 One other pattern that can benefit from intermediated message flow is in
 load balancing. If the processing entities are effectively 'pulling'
 messages, this can more naturally balance the load according to capacity
 than when the producer of the workload is trying to determine the best
 balance.

 Yes, that's another factor.  Today, we rely on the message broker's
 behavior to equally distribute messages to a set of consumers.
 
 Sometimes you even _want_ message distribution to be 'unequal', if the
 load varies by message or the capacity by consumer. E.g. If one consumer
 is particularly slow (or is given a particularly arduous task), it may
 not be optimal for it to receive the same portion of subsequent messages
 as other less heavily loaded or more powerful consumers.

Indeed.  We haven't tried to do that anywhere, but it would be an
improvement for some cases.

   The exception
 to that is cases where we use a publish-subscribe model, and a broker
 serves that really well.  Notifications and notification consumers
 (such as Ceilometer) are the prime example.

 The 'fanout' RPC cast would perhaps be another?

 Good point.

 In Nova we have been working to get rid of the usage of this pattern.
 In the latest code the only place it's used AFAIK is in some code we
 expect to mark deprecated (nova-network).
 
 Interesting. Is that because of problems in scaling the messaging
 solution or for other reasons?

It's primarily a scaling concern.  We're assuming that broadcasting
messages is generally an anti-pattern for the massive scale we're aiming
for.

 [...]
 I'm very interested in diving deeper into how Dispatch would fit into
 the various ways OpenStack is using messaging today.  I'd like to get
 a better handle on how the use of Dispatch as an intermediary would
 scale out for a deployment that consists of 10s of thousands of
 compute nodes, for example.

 Is it roughly just that you can have a network of N Dispatch routers
 that route messages from point A to point B, and for notifications we
 would use a traditional message broker (qpidd or rabbitmq) ?
 
 For scaling the basic idea is that not all connections are made to the
 same process and therefore not all messages need to travel through a
 single intermediary process.
 
 So for N different routers, each have a portion of the total number of
 publishers and consumers connected to them. Though clients can
 communicate even if they are not connected to the same router, each
 router only needs to handle the messages sent by the publishers directly
 attached, or sent to the consumer directly attached. It never needs to
 see messages between publishers and consumers that are not directly
 attached.
 
 To address your example, the 10s of thousands of compute nodes would be
 spread across N routers. Assuming these were all interconnected, a
 message from the scheduler would only travel through at most two of
 these N routers (the one the scheduler was connected to and the one the
 receiving compute node was connected to). No process needs to be able to
 handle 10s of thousands of connections itself (as contrasted with full
 direct, non-intermediated communication, where the scheduler would need
 to manage connections to each of the compute nodes).
 
 This basic pattern is the same as networks of brokers, but Dispatch
 router has been designed from the start to 

Re: [openstack-dev] Retiring reverify no bug

2013-12-09 Thread James E. Blair
Mark McLoughlin mar...@redhat.com writes:

 I wonder could we make it standard practice for an infra bug to get
 filed whenever there's a known issue causing gate jobs to fail so that
 everyone can use that bug number when re-triggering?

 (Apologies if that's already happening)

 I guess we'd want to broadcast that bug number with statusbot?

 Basically, the times I've used 'reverify no bug' is where I see some job
 failures that look like an infra issue that was already resolved.

Yes, in those cases a bug should be filed on the openstack-ci project
(either by us or by anyone encountering such a bug if we haven't gotten
to it yet).

In the past we have sometimes done that, but not always.  This will
force us to be better about it.  :)

And yes, I'd like to use statusbot for that (I'm currently working on
making it more reliable), but otherwise searching for the most recently
filed bug in openstack-ci would probably get you the right one (if there
is one) on those occasions.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][TripleO] Nested resources

2013-12-09 Thread Mark McLoughlin
On Tue, 2013-12-10 at 09:40 +1300, Robert Collins wrote:
 On 6 December 2013 14:11, Fox, Kevin M kevin@pnnl.gov wrote:
  I think the security issue can be handled by not actually giving the 
  underlying resource to the user in the first place.
 
  So, for example, if I wanted a bare metal node's worth of resource for my 
  own containering, I'd ask for a bare metal node and use a blessed image 
  that contains docker+nova bits that would hook back to the cloud. I 
  wouldn't be able to login to it, but containers started on it would be able 
  to access my tenant's networks. All access to it would have to be through 
  nova suballocations. The bare resource would count against my quotas, but 
  nothing run under it.
 
  Come to think of it, this sounds somewhat similar to what is planned for 
  Neutron service vm's. They count against the user's quota on one level but 
  not all access is directly given to the user. Maybe some of the same 
  implementation bits could be used.
 
 This is a super interesting discussion - thanks for kicking it off.
 
 I think it would be fantastic to be able to use containers for
 deploying the cloud rather than full images while still running
 entirely OpenStack control up and down the stack.

Where I think it gets really interesting is to be able to auto-scale
controller services (think nova-api based on request latency) in small
increments just you'd expect to be able to manage a scale-out app on a
cloud.

i.e. our overcloud Heat stack would allocate some baremetal machines,
but then just schedule the controller services to run in small
containers (or VMs) on any of those machines, and then have them
auto-scale.

 Briefly, what we need to be able to do that is:
 
  - the ability to bring up an all in one node with everything on it to
 'seed' the environment.
 - we currently do that by building a disk image, and manually
 running virsh to start it

I'm not sure that would need to change.

  - the ability to reboot a machine *with no other machines running* -
 we need to be able to power off and on a datacentre - and have the
 containers on it come up correctly configured, networking working,
 running etc.

That's tricky because your undercloud Nova DB/conductor needs to be
available for the machine to know what services it's supposed to be
running. It sounds like a reasonable thing to want even for standard KVM
compute nodes too, though.

  - we explicitly want to be just using OpenStack APIs for all the
 deployment operations after the seed is up; so no direct use of lxc or
 docker or whathaveyou.

Yes.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Steven Hardy
On Tue, Dec 10, 2013 at 11:25:49AM +1300, Robert Collins wrote:
 On 10 December 2013 11:04, Steven Hardy sha...@redhat.com wrote:
 
  So it's a gross mischaracterisation to imply that a democratic process
  aided by some [crude] stats has been reduced to name & shame, and a
  rather offensive one.
 
  Yes I have read your monthly core reviewer update emails[1] and I humbly
  apologize if you feel my characterization of your process is offensive, it
  certainly wasn't intended to be.
 
 Thank; enough said here - lets move on :)
 
 
  I think you have a very different definition of -core to the rest of
  OpenStack. I was actually somewhat concerned about the '+2 Guard the
  gate stuff' at the summit because it's so easily misinterpreted - and
  there is a meme going around (I don't know if it's true or not) that
  some people are assessed - performance review stuff within vendor
  organisations - on becoming core reviewers.
 
  Core reviewer is not intended to be a gateway to getting features or
  ideas into OpenStack projects. It is solely a volunteered contribution
  to the project: helping the project accept patches with confidence
  about their long term integrity: providing explanation and guidance to
  people that want to contribute patches so that their patch can be
  accepted.
 
  We need core reviewers who:
  1. Have deep knowledge of the codebase (to identify non-cosmetic structural
  issues)
 
 mmm, core review is a place to identify really significant structural
 issues, but it's not ideal. Because - you do a lot of work before you
 push code for review, particularly if it's one's first contribution to
 a codebase, and that means a lot of waste when the approach is wrong.
 Agree that having -core that can spot this is good, but not convinced
 that it's a must.

So you're saying you would give approve rights to someone without
sufficient knowledge to recognise the implications of a patch in the
context of the whole tree?  Of course it's a must.

  2. Have used and debugged the codebase (to identify usability, interface
  correctness or other stuff which isn't obvious unless you're using the
  code)
 
 If I may: 2a) Have deployed and used the codebase in production, at
 scale. This may conflict with 1) in terms of individual expertise.

Having folks involved with experience of running stuff in production is
invaluable I agree, I just meant people should have some practical
experience outside of the specific feature they may be working on.

  3. Have demonstrated a commitment to the project (so we know they
  understand the mid-term goals and won't approve stuff which is misaligned
  with those goals)
 
 I don't understand this. Are you saying you'd turn down contributions
 that are aligned with the long term Heat vision because they don't
 advance some short term goal? Or are you saying you'd turn down
 contributions because they actively harm short term goals?

I'm saying people in a position to approve patches should be able to
decouple the short term requirements of e.g their employer, or the feature
they are interested in, from the long-term goals (e.g maintainability) of
the project.

We can, and have, turned down contributions because they offered short-term
solutions to problems which didn't make sense long term from an upstream
perspective (normally with discussion of suitable alternative approaches).

 Seems to me that that commitment to the project is really orthogonal
 to either of those things - folk may have different interpretations
 about what the project needs while still being entirely committed to
 it! Perhaps you mean 'shared understanding of current goals and
 constraints' ? Or something like that? I am niggling on this point
 because I wouldn't want someone who is committed to TripleO but
 focused on the big picture to be accused of not being committed to
 TripleO.

Yeah, shared understanding, I'm saying all -core reviewers should have, and
have demonstrated, some grasp of the big picture for the project they
control the gate for.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Cells] compute api and objects

2013-12-09 Thread Sam Morrison
Hi,

I’m trying to fix up some cells issues related to objects. Do all compute api 
methods take objects now?
Cells is still sending DB objects for most methods (except start and stop), and 
I know there are more than that.

E.g. I know lock/unlock and shelve/unshelve take objects; I assume there are others, 
if not all methods, now?
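
For reference, the object-based calling convention in question looks roughly
like this (a sketch using assumed Havana-era module paths, not cells-specific
code):

from nova.compute import api as compute_api
from nova import context
from nova.objects import instance as instance_obj

ctxt = context.get_admin_context()

# Converted compute API methods expect an Instance object...
inst = instance_obj.Instance.get_by_uuid(ctxt, 'INSTANCE_UUID')
compute_api.API().lock(ctxt, inst)

# ...rather than the raw DB model/dict that cells is still passing around
# for most methods.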

Cheers,
Sam



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to best make User Experience a priority in every project

2013-12-09 Thread Kurt Griffiths
I love the idea of treating usability as a first-class citizen; to do that, we 
definitely need a core set of people who are passionate about the topic in 
order to keep it alive in the OpenStack gestalt. Contributors tend to 
prioritize work on new, concrete features over “non-functional” requirements 
that are perceived as tedious and/or abstract. Common (conscious and 
unconcious) rationalizations include:

  *   I don’t have time
  *   It’s too hard
  *   I don’t know how

Over time, I think we as OpenStack should strive toward a rough consensus on 
basic UX tenets, similar to what we have wrt architecture (i.e., Basic Design 
Tenets: https://wiki.openstack.org/wiki/BasicDesignTenets). PTLs should 
champion these tenets within their respective teams, mentoring individual 
members on the why and how, and be willing to occasionally postpone sexy new 
features, in order to free the requisite bandwidth for making OpenStack more 
pleasant to use.

IMO, our initiatives around security, usability, documentation, testing etc. 
will only succeed inasmuch as we make them a part of our culture and identity.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-09 Thread Mark McLoughlin
On Mon, 2013-12-09 at 16:05 +0100, Flavio Percoco wrote:
 Greetings,
 
 As $subject mentions, I'd like to start discussing the support for
 AMQP 1.0[0] in oslo.messaging. We already have rabbit and qpid drivers
 for earlier (and different!) versions of AMQP, the proposal would be
 to add an additional driver for a _protocol_ not a particular broker.
 (Both RabbitMQ and Qpid support AMQP 1.0 now).
 
 By targeting a clear mapping on to a protocol, rather than a specific
 implementation, we would simplify the task in the future for anyone
 wishing to move to any other system that spoke AMQP 1.0. That would no
 longer require a new driver, merely different configuration and
 deployment. That would then allow openstack to more easily take
 advantage of any emerging innovations in this space.

Sounds sane to me.

To put it another way, assuming all AMQP 1.0 client libraries are equal,
all the operator cares about is that we have a driver that connects into
whatever AMQP 1.0 messaging topology they want to use.
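
In other words, from the application's point of view nothing much would change
beyond the transport configuration; a rough sketch (the 'amqp://' scheme for a
future protocol driver is an assumption here, as are the exact oslo.messaging
calls):

from oslo.config import cfg
from oslo import messaging

# Driver selection already follows the transport URL scheme (rabbit://,
# qpid://); an AMQP 1.0 protocol driver would presumably just be another
# scheme pointing at whatever broker or router topology the operator runs.
transport = messaging.get_transport(
    cfg.CONF, url='amqp://guest:guest@messaging-host:5672/')

target = messaging.Target(topic='compute', server='compute-1')
client = messaging.RPCClient(transport, target)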

Of course, not all client libraries will be equal, so if we don't offer
the choice of library/driver to the operator, then the onus is on us to
pick the best client library for this driver.

(Enjoying the rest of this thread too, thanks to Gordon for his
insights)

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Core criteria, review stats vs reality

2013-12-09 Thread Angus Salkeld

On 09/12/13 11:31 +, Steven Hardy wrote:

Hi all,

So I've been getting concerned about $subject recently, and based on some
recent discussions so have some other heat-core folks, so I wanted to start
a discussion where we can agree and communicate our expectations related to
nomination for heat-core membership (because we do need more core
reviewers):

The issues I have are:
- Russell's stats (while very useful) are being used by some projects as
 the principal metric related to -core membership (ref TripleO's monthly
 cull/name & shame, which I am opposed to btw).  This is in some cases
 encouraging some stats-seeking in our review process, IMO.

- Review quality can't be measured mechanically - we have some folks who
 contribute fewer, but very high quality reviews, and are also very active
 contributors (so knowledge of the codebase is not stale).  I'd like to
 see these people do more reviews, but removing people from core just
 because they drop below some arbitrary threshold makes no sense to me.

So if you're aiming for heat-core nomination, here's my personal wish-list,
but hopefully others can provide their input and we can update the wiki with
the resulting requirements/guidelines:

- Make your reviews high-quality.  Focus on spotting logical errors,
 reducing duplication, consistency with existing interfaces, opportunities
 for reuse/simplification etc.  If every review you do is +1, or -1 for a
 trivial/cosmetic issue, you are not making a strong case for -core IMHO.

- Send patches.  Some folks argue that -core membership is only about
 reviews, I disagree - There are many aspects of reviews which require
 deep knowledge of the code, e.g spotting structural issues, logical
 errors caused by interaction with code not modified by the patch,
 effective use of test infrastructure, etc etc.  This deep knowledge comes
 from writing code, not only reviewing it.  This also gives us a way to
 verify your understanding and alignment with our stylistic conventions.

- Fix bugs.  Related to the above, help us fix real problems by testing,
 reporting bugs, and fixing them, or take an existing bug and post a patch
 fixing it.  Ask an existing team member to direct you if you're not sure
 which bug to tackle.  Sending patches doing trivial cosmetic cleanups is
 sometimes worthwhile, but make sure that's not all you do, as we need
 -core folk who can find, report, fix and review real user-impacting
 problems (not just new features).  This is also a great way to build
 trust and knowledge if you're aiming to contribute features to Heat.

- Engage in discussions related to the project (here on the ML, helping
 users on the general list, in #heat on Freenode, attend our weekly
 meeting if it's not an anti-social time in your TZ)

Anyone have any more thoughts to add here?


Setting aside the mechanism for choosing team-core, I think we should
be re-evaluating more often (some regular interval - maybe every 2
months).

- Personally I'd not be stressed at all about being taken off core one
  period and re-added later (if I was busy with something else and
  didn't have time for reviews).
- I think this sends a good message that core is not set in stone.
  Given some hard work you too can get in core (if you aspire to).


-Angus



Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Questions on logging setup for development

2013-12-09 Thread Paul Michali
Thanks! That worked



PCM (Paul Michali)

MAIL  p...@cisco.com
IRCpcm_  (irc.freenode.net)
TW@pmichali
GPG key4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

On Dec 9, 2013, at 5:27 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 
 On Dec 6, 2013, at 2:09 PM, Paul Michali p...@cisco.com wrote:
 
 Hi,
 
 For Neutron, I'm creating a module (one of several eventually) as part of a 
 new blueprint I'm working on, and the associated unit test module. I'm in 
 really early development, and just running this UT module as a standalone 
 script (rather than through tox). It allows me to do TDD pretty quickly on 
 the code I'm developing (that's the approach I'm taking right now - fingers 
 crossed :).
 
 In the module, I did an import of the logging package and when I run UTs I 
 can see the log messages that would occur, if desired.
 
 I have the following hack to turn off/on the logging for debug level:
 
 if False:  # Debugging
 logging.basicConfig(format='%(asctime)-15s [%(levelname)s] %(message)s',
 level=logging.DEBUG)
 
 I made the log calls the same as what would be in other Neutron code, so 
 that I don't have to change the code later, as I start to fold it into the 
 Neutron code. However, I'd like to import the neutron.openstack.common.log 
 package in my code, so that the code will be identical to what is needed 
 once I start running this code as part of a process, but I had some 
 questions…
 
 When using neutron.openstack.common.log, how do I toggle the debug level 
 logging on, if I run this standalone, as I'm doing now?
 Is there a way to do it, without adding in the above conditional logic to 
 the production code? Maybe put something in the UT module?
 
 I believe you can just make sure to set_override on the CONF option to True 
 and then call logging.setup('neutron')
 
 Here is an example with the nova code
 
  >>> from nova.openstack.common import log as logging
  >>> LOG = logging.getLogger(__name__)
  >>> LOG.debug('foo')
  >>> logging.CONF.set_override('debug', True)
  >>> logging.setup('nova')
  >>> LOG.debug('foo')
 2013-12-09 14:25:21.220 72011 DEBUG __main__ [-] foo module input:2
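
The neutron equivalent would presumably be the same pattern (a sketch,
assuming neutron's oslo-incubator copy of the log module exposes CONF and
setup() the same way nova's does):

from neutron.openstack.common import log as logging

LOG = logging.getLogger(__name__)

if __name__ == '__main__':
    # Turn on debug-level output for a standalone run, then log just as the
    # production code would.
    logging.CONF.set_override('debug', True)
    logging.setup('neutron')
    LOG.debug('standalone debug logging enabled')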
 
 Vish
 
 
 
 I can always continue as is, and then switch things over later (changing the 
 import line and pulling the if clause), once I have things mostly done, and 
 want to run as part of Neutron, but it would be nice if I can find a way to 
 do that up front to avoid changes later.
 
 Thoughts? Suggestions?
 
 Thanks!
 
 
 PCM (Paul Michali)
 
 MAIL  p...@cisco.com
 IRCpcm_  (irc.freenode.net)
 TW@pmichali
 GPG key4525ECC253E31A83
 Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-09 Thread Robert Collins
On 6 December 2013 21:56, Jaromir Coufal jcou...@redhat.com wrote:


 Hey there,

 thanks Rob for keeping eye on this. Speaking for myself, as current
 non-coder it was very hard to keep pace with others, especially when UI was
 on hold and I was designing future views. I'll continue working on designs
 much more, but I will also keep an eye on code which is going in. I believe
 that UX reviews will be needed before merging so that we assure keeping the
 vision. That's why I would like to express my will to stay within -core even
 when I don't deliver that big amount of reviews as other engineers. However
 if anybody feels that I should be just +1, I completely understand and I
 will give up my +2 power.

 -- Jarda

Hey, so -

I think there are two key things to highlight here. Firstly, there's
considerable support from other -core for delaying the removals this
month, so we'll re-evaluate in Jan (and be understanding then as there
is a big 1-2 week holiday in there).

That said, I want to try and break down the actual implications here,
both in terms of contributions, recognition and what it means for the
project.

Firstly, contributions. Reviewing isn't currently *directly*
recognised as a 'technical contribution' by the bylaws: writing code
that land in the repository is, and there is a process for other
contributions (such as design, UX, and reviews) to be explicitly
recognised out-of-band. It's possible we should revisit that -
certainly I'd be very happy to nominate people contributing through
that means as a TripleO ATC irrespective of their landing patches in a
TripleO code repository [as long as their reviews *are* helpful :)].
But thats a legalistic sort of approach. A more holistic approach is
to say that any activity that helps TripleO succeed in its mission is
a contribution, and we should be fairly broad in our recognition of
that activity : whether it's organising contributed hardware for the
test cloud, helping admin the test cloud, doing code review, or UX
design - we should recognise and celebrate all of those things.
Specifically, taking the time to write a thoughtful code review which
avoids a bug landing in TripleO, or keeps the design flexible and
effective *is* contributing to TripleO.

We have a bit of a bug in OpenStack today, IMO, in that there is more
focus on being -core than on being a good effective reviewer. IMO
that's backwards: the magic switch that lets you set +2 and -2 is a
responsibility, and that has some impact on the weight your comments
in reviews have on other people - both other core and non-core, but
the contribution we make by reviewing doesn't suddenly get
significantly better by virtue of being -core. There is an element of
trust and faith in personality etc - you don't want destructive
behaviour in code review, but you don't want that from anyone - it's
not a new requirement placed on -core. What I'd like to see is more of
a focus on review (design review, code review, architecture review) as
something we should all contribute towards - jointly share the burden.
For instance, the summit is a fantastic point for us to come together
and do joint design review of the work organisations are pushing on
for the next 6 months: that's a fantastic contribution. But when
organisations don't send people to the summit, because of $reasons,
that reduces our entire ability to catch problems with that planned
work : going to the summit is /hard work/ - long days, exhausting,
intense conversations. The idea (which I've heard some folk mention)
that only -core folk would be sent to the summit is incredibly nutty!

So what does it mean for TripleO when someone stops being -core
because of inactivity:

Firstly it means they have *already* effectively stopped doing code
review at a high frequency: they are *not* contributing in a
substantial fashion through that avenue. It doesn't mean anything
about other avenues of contribution.

Secondly, if they do notice something badly wrong with a patch, or a
patch that needs urgent landing, they can no longer do that
themselves: they need to find a -core and get the -core to do it.

That's really about it - there is no substantial impact on the core
review bandwidth for the team (they were already largely inactive).

So, how does this apply to you specifically, and to the other Tuskar
UI folk who've been focused on Horizon itself and other things
recently:

If you add a -1 to a patch, it should be treated with much the same
consideration as one from me: we all want to get good code in, and the
union of opinions should be fairly harmonious.

If you +1 a patch saying 'the design is great', it helps other folk
worry less about that, but we still have to care for the code, the API
implications etc.

If you (I'm speaking to everyone that I proposed in the 'should we
remove from -core?' section) are planning on staying about the same
level of activity w.r.t. code review, then I don't think being in
-core makes a lot of sense. We're pretty up 

Re: [openstack-dev] [Oslo] First steps towards amqp 1.0

2013-12-09 Thread Mike Wilson
This is the first time I've heard of the dispatch router, I'm really
excited now that I've looked at it a bit. Thx Gordon and Russell for
bringing this up. I'm very familiar with the scaling issues associated with
any kind of brokered messaging solution. We grew an Openstack installation
to about 7,000 nodes and started having significant scaling issues with the
qpid broker. We've talked about our problems at a couple summits in a fair
amount of detail[1][2]. I won't bother repeating the information in this
thread.

I really like the idea of separating the logic of routing away from the
message emitter. Russell mentioned the 0mq matchmaker, we essentially
ditched the qpid broker for direct communication via 0mq and it's
matchmaker. It still has a lot of problems which dispatch seems to address.
For example, in ceilometer we have store-and-forward behavior as a
requirement. This kind of communication requires a broker but 0mq doesn't
really officially support one, which means we would probably end up with
some broker as part of OpenStack. Matchmaker is also a fairly basic
implementation of what is essentially a directory. For any sort of serious
production use case you end up sprinkling JSON files all over the place or
maintaining a Redis backend. I feel like the matchmaker needs a bunch more
work to make modifying the directory simpler for operations. I would rather
put that work into a separate project like dispatch than have to maintain
essentially a one off in Openstack's codebase.

I wonder how this fits into messaging from a driver perspective in
Openstack or even how this fits into oslo.messaging? Right now we have
topics for binaries(compute, network, consoleauth, etc),
hostname.service_topic for nodes, fanout queue per node (not sure if kombu
also has this) and different exchanges per project. If we can abstract the
routing from the emission of the message all we really care about is
emitter, endpoint, messaging pattern (fanout, store and forward, etc). Also
not sure if there's a dispatch analogue in the rabbit world, if not we need
to have some mapping of concepts etc between impls.

So many questions, but in general I'm really excited about this and eager
to contribute. For sure I will start playing with this in Bluehost's
environments that haven't been completely 0mqized. I also have some
lingering concerns about qpid in general. Beyond scaling issues I've run
into some other terrible bugs that motivated our move away from it. Again,
these are mentioned in our presentations at summits and I'd be happy to
talk more about them in a separate discussion. I've also been able to talk
to some other qpid+openstack users who have seen the same bugs. Another
large installation that comes to mind is Qihoo 360 in China. They run a few
thousand nodes with qpid for messaging and are familiar with the snags we
run into.

Gordon,

I would really appreciate if you could watch those two talks and comment.
The bugs are probably separate from the dispatch router discussion, but it
does dampen my enthusiasm a bit not knowing how to fix issues beyond scale
:-(.

-Mike Wilson

[1]
http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment
[2]
http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/going-brokerless-the-transition-from-qpid-to-0mq




On Mon, Dec 9, 2013 at 4:29 PM, Mark McLoughlin mar...@redhat.com wrote:

 On Mon, 2013-12-09 at 16:05 +0100, Flavio Percoco wrote:
  Greetings,
 
  As $subject mentions, I'd like to start discussing the support for
  AMQP 1.0[0] in oslo.messaging. We already have rabbit and qpid drivers
  for earlier (and different!) versions of AMQP, the proposal would be
  to add an additional driver for a _protocol_ not a particular broker.
  (Both RabbitMQ and Qpid support AMQP 1.0 now).
 
  By targeting a clear mapping on to a protocol, rather than a specific
  implementation, we would simplify the task in the future for anyone
  wishing to move to any other system that spoke AMQP 1.0. That would no
  longer require a new driver, merely different configuration and
  deployment. That would then allow openstack to more easily take
  advantage of any emerging innovations in this space.

 Sounds sane to me.

 To put it another way, assuming all AMQP 1.0 client libraries are equal,
 all the operator cares about is that we have a driver that connects into
 whatever AMQP 1.0 messaging topology they want to use.

 Of course, not all client libraries will be equal, so if we don't offer
 the choice of library/driver to the operator, then the onus is on us to
 pick the best client library for this driver.

 (Enjoying the rest of this thread too, thanks to Gordon for his
 insights)

 Mark.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-09 Thread Mike Wilson
I guess the question that immediately comes to mind is, is there anyone
that doesn't want a distributed router? I guess there could be someone out
there that hates the idea of traffic flowing in a balanced fashion, but
can't they just run a single router then? Does there really need to be some
flag to disable/enable this behavior? Maybe I am oversimplifying things...
you tell me.

-Mike Wilson


On Mon, Dec 9, 2013 at 3:01 PM, Vasudevan, Swaminathan (PNB Roseville) 
swaminathan.vasude...@hp.com wrote:

  Hi Folks,

 We are in the process of defining the API for the Neutron Distributed
 Virtual Router, and we have a question.



 Just wanted to get the feedback from the community before we implement and
 post for review.



 We are planning to use the “distributed” flag for the routers that are
 supposed to be routing traffic locally (both East West and North South).

 This “distributed” flag is already there in the “neutronclient” API, but
 currently only utilized by the “Nicira Plugin”.

 We would like to go ahead and use the same “distributed” flag and add an
 extension to the router table to accommodate the “distributed flag”.



 Please let us know your feedback.



 Thanks.



 Swaminathan Vasudevan

 Systems Software Engineer (TC)





 HP Networking

 Hewlett-Packard

 8000 Foothills Blvd

 M/S 5541

 Roseville, CA - 95747

 tel: 916.785.0937

 fax: 916.785.1815

 email: swaminathan.vasude...@hp.com





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-09 Thread Ian Wells
I would imagine that, from the Neutron perspective, you get a single router
whether or not it's distributed.  I think that if a router is distributed -
regardless of whether it's tenant-tenant or tenant-outside - it certainly
*could* have some sort of SLA flag, but I don't think a simple
'distributed' flag is either here or there; it's not telling the tenant
anything meaningful.


On 10 December 2013 00:48, Mike Wilson geekinu...@gmail.com wrote:

 I guess the question that immediately comes to mind is, is there anyone
 that doesn't want a distributed router? I guess there could be someone out
 there that hates the idea of traffic flowing in a balanced fashion, but
 can't they just run a single router then? Does there really need to be some
 flag to disable/enable this behavior? Maybe I am oversimplifying things...
 you tell me.

 -Mike Wilson


 On Mon, Dec 9, 2013 at 3:01 PM, Vasudevan, Swaminathan (PNB Roseville) 
 swaminathan.vasude...@hp.com wrote:

  Hi Folks,

 We are in the process of defining the API for the Neutron Distributed
 Virtual Router, and we have a question.



 Just wanted to get the feedback from the community before we implement
 and post for review.



 We are planning to use the “distributed” flag for the routers that are
 supposed to be routing traffic locally (both East West and North South).

 This “distributed” flag is already there in the “neutronclient” API, but
 currently only utilized by the “Nicira Plugin”.

 We would like to go ahead and use the same “distributed” flag and add an
 extension to the router table to accommodate the “distributed flag”.



 Please let us know your feedback.



 Thanks.



 Swaminathan Vasudevan

 Systems Software Engineer (TC)





 HP Networking

 Hewlett-Packard

 8000 Foothills Blvd

 M/S 5541

 Roseville, CA - 95747

 tel: 916.785.0937

 fax: 916.785.1815

 email: swaminathan.vasude...@hp.com





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-09 Thread Yongsheng Gong
If distributed router is good enough, why do we still need non-distributed
router?


On Tue, Dec 10, 2013 at 9:04 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 I would imagine that, from the Neutron perspective, you get a single
 router whether or not it's distributed.  I think that if a router is
 distributed - regardless of whether it's tenant-tenant or tenant-outside -
 it certainly *could* have some sort of SLA flag, but I don't think a simple
 'distributed' flag is either here or there; it's not telling the tenant
 anything meaningful.


 On 10 December 2013 00:48, Mike Wilson geekinu...@gmail.com wrote:

 I guess the question that immediately comes to mind is, is there anyone
 that doesn't want a distributed router? I guess there could be someone out
 there that hates the idea of traffic flowing in a balanced fashion, but
 can't they just run a single router then? Does there really need to be some
 flag to disable/enable this behavior? Maybe I am oversimplifying things...
 you tell me.

 -Mike Wilson


 On Mon, Dec 9, 2013 at 3:01 PM, Vasudevan, Swaminathan (PNB Roseville) 
 swaminathan.vasude...@hp.com wrote:

  Hi Folks,

 We are in the process of defining the API for the Neutron Distributed
 Virtual Router, and we have a question.



 Just wanted to get the feedback from the community before we implement
 and post for review.



 We are planning to use the “distributed” flag for the routers that are
 supposed to be routing traffic locally (both East West and North South).

 This “distributed” flag is already there in the “neutronclient” API, but
 currently only utilized by the “Nicira Plugin”.

 We would like to go ahead and use the same “distributed” flag and add an
 extension to the router table to accommodate the “distributed flag”.



 Please let us know your feedback.



 Thanks.



 Swaminathan Vasudevan

 Systems Software Engineer (TC)





 HP Networking

 Hewlett-Packard

 8000 Foothills Blvd

 M/S 5541

 Roseville, CA - 95747

 tel: 916.785.0937

 fax: 916.785.1815

 email: swaminathan.vasude...@hp.com





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][heat] ec2tokens, v3 credentials and request signing

2013-12-09 Thread Adam Young




On 12/09/2013 05:34 PM, Steven Hardy wrote:

Hi all,

I have some queries about what the future of the ec2tokens API is for
keystone; for context, we're looking to move Heat from a horrible mixture of
v2/v3 keystone to just v3, and currently I'm not sure we can:

- The v3/credentials API allows ec2tokens to be stored (if you
   create the access/secret key yourself), but it requires admin, which
   creating an ec2-keypair via the v2 API does not?

- There is no v3 interface for validating signed requests like you can via
   POST v2.0/ec2tokens AFAICT?

- Validating requests signed with ec2 credentials stored via v3/credentials
   does not work, if you try to use v2.0/ec2tokens, should it?

So my question is basically, what's the future of ec2tokens, is there some
alternative in the pipeline for satisfying the same use-case?

The main issues we have in Heat:

- We want to continue supporting AWS style signed requests for our
   cloudformation-compatible API, which is currently done via ec2tokens.

- ec2 keypairs are currently the only method of getting a non-expiring
   credential which we can deploy in-instance, that is no longer possible
   via the v3 API for the reasons above.

What is the recommended way for us to deploy a (non expiring) credential in
an instance (ideally derived from a trust or otherwise role-limited), then
use that credential to authenticate against our API?


X509.

The issue, as I understand it, is that there is no user object to back 
that credential.  You don't have a user to execute the trust.


Note that you should not be deriving a credential from a trust, you 
should be linking a trust to a credential.


The KDS code base has a similar problem.  We need a longer term 
credential service for internal components of OpenStack.  KDS is going 
to do it with symmetric keys, which might serve your needs. This is 
usually done via Kerberos in enterprise deployments.






My first thought is that the easiest solution would be to allow trust
scoped tokens to optionally be configured to not expire (until we delete
the trust when we delete the Heat stack)?

Can anyone offer any suggestions on a v3 compatible way to do this?

I did start looking at oauth as a possible solution, but it seems the
client support is not yet there, and there's no auth middleware we can use
for authenticating requests containing oauth credentials, any ideas on the
status of this would be most helpful!

OAuth is short term delegation, not what you need.




Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-09 Thread Nachi Ueno
Hi Yong

NSX has two kinds of router:
edge and distributed.

The edge node will host things like VPN services and other advanced services.

Actually, the VPNaaS OSS implementation runs in the l3-agent,
so IMO we also need the l3-agent as the basis for some edge services.





2013/12/9 Yongsheng Gong gong...@unitedstack.com:
 If distributed router is good enough, why do we still need non-distributed
 router?


 On Tue, Dec 10, 2013 at 9:04 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 I would imagine that, from the Neutron perspective, you get a single
 router whether or not it's distributed.  I think that if a router is
 distributed - regardless of whether it's tenant-tenant or tenant-outside -
 it certainly *could* have some sort of SLA flag, but I don't think a simple
 'distributed' flag is either here or there; it's not telling the tenant
 anything meaningful.


 On 10 December 2013 00:48, Mike Wilson geekinu...@gmail.com wrote:

 I guess the question that immediately comes to mind is, is there anyone
 that doesn't want a distributed router? I guess there could be someone out
 there that hates the idea of traffic flowing in a balanced fashion, but
 can't they just run a single router then? Does there really need to be some
 flag to disable/enable this behavior? Maybe I am oversimplifying things...
 you tell me.

 -Mike Wilson


 On Mon, Dec 9, 2013 at 3:01 PM, Vasudevan, Swaminathan (PNB Roseville)
 swaminathan.vasude...@hp.com wrote:

 Hi Folks,

 We are in the process of defining the API for the Neutron Distributed
 Virtual Router, and we have a question.



 Just wanted to get the feedback from the community before we implement
 and post for review.



 We are planning to use the “distributed” flag for the routers that are
 supposed to be routing traffic locally (both East West and North South).

 This “distributed” flag is already there in the “neutronclient” API, but
 currently only utilized by the “Nicira Plugin”.

 We would like to go ahead and use the same “distributed” flag and add an
 extension to the router table to accommodate the “distributed flag”.



 Please let us know your feedback.



 Thanks.



 Swaminathan Vasudevan

 Systems Software Engineer (TC)





 HP Networking

 Hewlett-Packard

 8000 Foothills Blvd

 M/S 5541

 Roseville, CA - 95747

 tel: 916.785.0937

 fax: 916.785.1815

 email: swaminathan.vasude...@hp.com






 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OK to Use Flufl.enum

2013-12-09 Thread Adam Young
While Python 3 has enumerated types, Python 2 does not, and the standard 
package to provide them, Flufl.enum, is not yet part of our code base.  Is 
there any strong objection to including Flufl.enum?


http://pythonhosted.org/flufl.enum/

It makes for some very elegant code, especially for enumerated integer 
types.


For an example, see ScopeType in

https://review.openstack.org/#/c/55908/4/keystone/contrib/revoke/core.py

Line 62.

the getter/setter in RevokeEvent do not need to do any conditional logic 
if passed either an integer or a ScopeType.
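
For a flavour of the style, a minimal sketch (the class and its values are
illustrative, not the actual ScopeType from the review):

from flufl.enum import Enum


class ScopeType(Enum):
    domain = 1
    project = 2
    trust = 3


# Members are singletons, compare cleanly, and read well in code:
scope = ScopeType.project
assert scope is ScopeType.project
assert scope != ScopeType.domain
print(scope)   # prints something like 'ScopeType.project'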



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OK to Use Flufl.enum

2013-12-09 Thread Alex Gaynor
Would it make sense to use the `enum34` package, which is a backport of the
enum package from py3k?

Alex


On Mon, Dec 9, 2013 at 7:37 PM, Adam Young ayo...@redhat.com wrote:

 While Python 3 has enumerated types, Python 2 does not, and the standard
 package to provide them, Flufl.enum, is not yet part of our code base.  Is
 there any strong objection to including Flufl.enum?

 http://pythonhosted.org/flufl.enum/

 It makes for some very elegant code, especially for enumerated integer
 types.

 For an example See ScopeType in

 https://review.openstack.org/#/c/55908/4/keystone/contrib/revoke/core.py

 Line 62.

 the getter/setter in RevokeEvent do not need to do any conditional logic
 if passed either an integer or a ScopeType.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
I disapprove of what you say, but I will defend to the death your right to
say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
The people's good is the highest law. -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Distributed Virtual Router

2013-12-09 Thread Akihiro Motoki
Neutron defines a provider attribute and it is/will be used in advanced 
services (LB, FW, VPN).
Doesn't it fit the distributed router case as well? If we can cover all services with 
one concept, that would be nice.

According to this thread, we assume at least two types: edge and 
distributed.
Though edge and distributed are types of implementation, I think they are 
both kinds of provider.

I just would like to add an option. I am open to provider vs. distributed 
attributes.
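
Either way, from an API user's point of view the difference is mostly which
attribute shows up in the create call; a sketch with python-neutronclient
(the constructor arguments are illustrative, and per this thread only the
Nicira plugin consumes 'distributed' today):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0/')

# The 'distributed' flag proposed earlier in the thread:
neutron.create_router({'router': {'name': 'r1', 'distributed': True}})

# Or, reusing the provider notion instead (purely illustrative):
# neutron.create_router({'router': {'name': 'r1',
#                                   'provider': 'distributed'}})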

Thanks,
Akihiro

(2013/12/10 7:01), Vasudevan, Swaminathan (PNB Roseville) wrote:
 Hi Folks,

 We are in the process of defining the API for the Neutron Distributed Virtual 
 Router, and we have a question.

 Just wanted to get the feedback from the community before we implement and 
 post for review.

 We are planning to use the “distributed” flag for the routers that are 
 supposed to be routing traffic locally (both East West and North South).
 This “distributed” flag is already there in the “neutronclient” API, but 
 currently only utilized by the “Nicira Plugin”.
 We would like to go ahead and use the same “distributed” flag and add an 
 extension to the router table to accommodate the “distributed flag”.

 Please let us know your feedback.

 Thanks.

 Swaminathan Vasudevan
 Systems Software Engineer (TC)
 HP Networking
 Hewlett-Packard
 8000 Foothills Blvd
 M/S 5541
 Roseville, CA - 95747
 tel: 916.785.0937
 fax: 916.785.1815
 email: swaminathan.vasude...@hp.com mailto:swaminathan.vasude...@hp.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-09 Thread Joe Gordon
On Dec 10, 2013 2:37 AM, Robert Collins robe...@robertcollins.net wrote:

 On 6 December 2013 21:56, Jaromir Coufal jcou...@redhat.com wrote:

 
  Hey there,
 
  thanks Rob for keeping an eye on this. Speaking for myself, as a current
  non-coder it was very hard to keep pace with others, especially when the UI
  was on hold and I was designing future views. I'll continue working on
  designs much more, but I will also keep an eye on code which is going in. I
  believe that UX reviews will be needed before merging so that we assure
  keeping the vision. That's why I would like to express my will to stay
  within -core even when I don't deliver as big an amount of reviews as other
  engineers. However, if anybody feels that I should be just +1, I completely
  understand and I will give up my +2 power.
 
  -- Jarda

 Hey, so -

 I think there are two key things to highlight here. Firstly, there's
 considerable support from other -core for delaying the removals this
 month. I'll re-evaluate in Jan (and be understanding then, as there
 is a big 1-2 week holiday in there).

 That said, I want to try and break down the actual implications here,
 both in terms of contributions, recognition and what it means for the
 project.

 Firstly, contributions. Reviewing isn't currently *directly*
 recognised as a 'technical contribution' by the bylaws: writing code
 that lands in the repository is, and there is a process for other
 contributions (such as design, UX, and reviews) to be explicitly
 recognised out-of-band. It's possible we should revisit that -
 certainly I'd be very happy to nominate people contributing through
 that means as a TripleO ATC irrespective of their landing patches in a
 TripleO code repository [as long as their reviews *are* helpful :)].
 But that's a legalistic sort of approach. A more holistic approach is
 to say that any activity that helps TripleO succeed in its mission is
 a contribution, and we should be fairly broad in our recognition of
 that activity : whether it's organising contributed hardware for the
 test cloud, helping admin the test cloud, doing code review, or UX
 design - we should recognise and celebrate all of those things.
 Specifically, taking the time to write a thoughtful code review which
 avoids a bug landing in TripleO, or keeps the design flexible and
 effective *is* contributing to TripleO.

 We have a bit of a bug in OpenStack today, IMO, in that there is more
 focus on being -core than on being a good effective reviewer. IMO
 that's backwards: the magic switch that lets you set +2 and -2 is a
 responsibility, and that has some impact on the weight your comments
 in reviews have on other people - both other core and non-core, but
 the contribution we make by reviewing doesn't suddenly get
 significantly better by virtue of being -core. There is an element of
 trust and faith in personality etc - you don't want destructive
 behaviour in code review, but you don't want that from anyone - it's
 not a new requirement placed on -core. What I'd like to see is more of
 a focus on review (design review, code review, architecture review) as
 something we should all contribute towards - jointly share the burden.
 For instance, the summit is a fantastic point for us to come together
 and do joint design review of the work organisations are pushing on
 for the next 6 months : that's a fantastic contribution. But when
 organisations don't send people to the summit, because of $reasons,
 that reduces our entire ability to catch problems with that planned
 work : going to the summit is /hard work/ - long days, exhausting,
 intense conversations. The idea (which I've heard some folk mention)
 that only -core folk would be sent to the summit is incredibly nutty!

 So what does it mean for TripleO when someone stops being -core
 because of inactivity:

 Firstly it means they have *already* effectively stopped doing code
 review at a high frequency: they are *not* contributing in a
 substantial fashion through that avenue. It doesn't mean anything
 about other avenues of contribution.

 Secondly, if they do notice something badly wrong with a patch, or a
 patch that needs urgent landing, they can no longer do that
 themselves: they need to find a -core and get the -core to do it.

 That's really about it - there is no substantial impact on the core
 review bandwidth for the team (they were already largely inactive).

+1

Very well put. As you said, this is a larger OpenStack issue; hopefully we
can fix it at the OpenStack-wide scale and not just in TripleO.


 So, how does this apply to you specifically, and to the other Tuskar
 UI folk who've been focused on Horizon itself and other things
 recently?

 If you add a -1 to a patch, it should be treated with much the same
 consideration as one from me: we all want to get good code in, and the
 union of opinions should be fairly harmonious.

 If you +1 a patch saying 'the design is great', it helps other folk
 worry less about that, but we 

Re: [openstack-dev] [Neutron] Third-party testing

2013-12-09 Thread Yoshihiro Kaneko
2013/12/10 Matt Riedemann mrie...@linux.vnet.ibm.com:


 On Sunday, December 08, 2013 11:32:50 PM, Yoshihiro Kaneko wrote:

 Hi Neutron team,

 I'm working on building Third-party testing for Neutron Ryu plugin.
 I intend to use Jenkins and gerrit-trigger plugin.

 It is required that third-party testing provides a verify vote for
 all changes to a plugin/driver's code, and for all code submissions
 by the jenkins user.

 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements

 For these requirements, what kind of trigger filter should I set?
 It is easy to set a file path of the plugin/driver:
project: plain:neutron
branch:  plain:master
file:path:neutron/plugins/ryu/**
 However, this is not enough, because it does not cover changes to code the
 plugin depends on; it is difficult to judge which patch sets affect the
 plugin/driver.
 In addition, the gerrit-trigger plugin has a file path filter, but no
 patch set owner filter, so it is not possible to set a trigger for
 patch sets submitted by the jenkins user.

 Can third-party testing execute tests for all patch sets, including those
 which may not affect the plugin/driver?

 Thanks,
 Kaneko

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 I can't speak for the Neutron team, but in Nova the requirement is to run
 all patches through the vendor plugin third party CI, not just
 vendor-specific patches.

Thanks for the reply, Matt.
I believe that is the right way to do smoke testing.
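
For reference, a sketch of the broader trigger implied by that advice: reuse
the project/branch patterns from the snippet above and simply drop the file
path filter, so that every patch set to the project fires the job (the exact
project name is whatever the local Gerrit uses, so treat it as a placeholder):

   project: plain:neutron
   branch:  plain:master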


 https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan

 --

 Thanks,

 Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-09 Thread Isaku Yamahata
On Mon, Dec 09, 2013 at 08:07:12PM +0900,
Isaku Yamahata isaku.yamah...@gmail.com wrote:

 On Mon, Dec 09, 2013 at 08:43:59AM +1300,
 Robert Collins robe...@robertcollins.net wrote:
 
  On 9 December 2013 01:43, Maru Newby ma...@redhat.com wrote:
  
  
   If the AMQP service is set up not to lose notifications, they will pile up
   and stress the AMQP service. I would say a single node failure isn't
   catastrophic.
  
   So we should have AMQP set to discard notifications if there is no one
  
   What are the semantics of AMQP discarding notifications when a consumer 
   is no longer present?  Can this be relied upon to ensure that potentially 
   stale notifications do not remain in the queue when an agent restarts?
  
  If the queue is set to autodelete, it will delete when the agent
  disconnects. There will be no queue until the agent reconnects. I
  don't know if we expose that functionality via oslo.messaging, but
  it's certainly something AMQP can do.
 
 What happens if intermittent network instability occurs?
 When the connection between the agent and AMQP is unintentionally closed,
 will the agent die or reconnect?

Answering myself: if the connection is closed, it will reconnect automatically
at the rpc layer. See neutron.openstack.common.rpc.impl_{kombu, qpid}.py.
So notifications sent during a reconnect can be lost if the AMQP service is
set to discard notifications while there is no subscriber.
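
As a rough illustration of the auto-delete behaviour being discussed - this is
plain kombu rather than the actual declarations in impl_kombu, and the broker
URL, exchange, and queue names below are made up:

    import socket

    from kombu import Connection, Consumer, Exchange, Queue

    exchange = Exchange('neutron', type='topic')
    # auto_delete=True: the broker removes the queue as soon as its last
    # consumer disconnects, so anything published while the agent is away
    # is dropped instead of piling up on the server.
    queue = Queue('dhcp_agent.host-1', exchange=exchange,
                  routing_key='dhcp_agent.host-1', auto_delete=True)

    def on_message(body, message):
        message.ack()

    with Connection('amqp://guest:guest@localhost//') as conn:
        with Consumer(conn, queues=[queue], callbacks=[on_message]):
            try:
                conn.drain_events(timeout=1)  # consume while connected
            except socket.timeout:
                pass
    # Once the consumer above goes away, the broker deletes the queue, and
    # messages published to the exchange afterwards have nowhere to go.
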
-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-09 Thread Robert Collins
On 10 December 2013 19:16, Isaku Yamahata isaku.yamah...@gmail.com wrote:

 Answering myself: if the connection is closed, it will reconnect automatically
 at the rpc layer. See neutron.openstack.common.rpc.impl_{kombu, qpid}.py.
 So notifications sent during a reconnect can be lost if the AMQP service is
 set to discard notifications while there is no subscriber.

Which is fine: the agent re-pulls the full set of what it's running on that
machine, and life goes on.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-09 Thread Mark McLoughlin
On Tue, 2013-12-10 at 13:31 +1300, Robert Collins wrote:

 We have a bit of a bug in OpenStack today, IMO, in that there is more
 focus on being -core than on being a good effective reviewer. IMO
 that's backwards: the magic switch that lets you set +2 and -2 is a
 responsibility, and that has some impact on the weight your comments
 in reviews have on other people - both other core and non-core, but
 the contribution we make by reviewing doesn't suddenly get
 significantly better by virtue of being -core. There is an element of
 trust and faith in personality etc - you don't want destructive
 behaviour in code review, but you don't want that from anyone - it's
 not a new requirement placed on -core.

FWIW, I see this focus on being -core as an often healthy desire
to be recognized as a good, effective reviewer.

I guess that's related to where you said something similar in the Heat
thread:

  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg11121.html

  there is a meme going around (I don't know if it's true or not) that
  some people are assessed - performance review stuff within vendor
  organisations - on becoming core reviewers.

For example, if managers in these organizations said to people, I want
you to spend a significant proportion of your time contributing good and
effective upstream reviews, that would be a good thing, right?

One way that such well intentioned managers could know whether the
reviewing is good and effective is whether the reviewers are getting
added to the -core teams. That also seems mostly positive. Certainly
better than looking at reviewer stats?

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

