[openstack-dev] [Glance] Why do V2 APIs disallow updating locations?

2014-01-28 Thread David Koo
Hi all,

I see that api.v2.images.ImagesController._do_replace_locations does not
allow replacing a set of locations with another set. Either the existing
set must be empty or the new set must be empty. The replace operation is
therefore just another implementation of add/delete.

So why not just throw an error if a user tries to use the 'replace'
operation on locations (in other words, not allow a 'replace' operation
for locations at all)?

The bigger question, though, is why don't we currently support the
replace operation on locations in the first place?
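
For context, a minimal sketch of the guard being described (illustrative
only, not the actual Glance code):

    # Simplified sketch of the behaviour described above -- not the
    # actual Glance code. 'Replace' only proceeds when one of the two
    # sets is empty, so it degenerates into an add or a delete.
    class BadRequest(Exception):
        pass

    def do_replace_locations(existing_locations, new_locations):
        if existing_locations and new_locations:
            # Both sets non-empty: a true replace, which is disallowed.
            raise BadRequest("Cannot replace a non-empty location set "
                             "with another non-empty set")
        # One side is empty, so this is effectively an add or a delete.
        return list(new_locations)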

--
Koo



Re: [openstack-dev] [oslo] team meeting Friday 31 Jan 1400 UTC

2014-01-28 Thread Flavio Percoco

On 27/01/14 14:57 -0500, Doug Hellmann wrote:

The Oslo team has a few items we need to discuss, so I'm calling a meeting for
this Friday, 31 Jan. Our normal slot is 1400 UTC Friday in #openstack-meeting. 


The agenda [1] includes 2 items (so far):

1. log translations (see the other thread started today)
2. parallelizing our tests



We should also discuss the process for pulling packages out of oslo. I
mean, check whether anything has changed in terms of stability since the
summit and what our plans are for moving forward with this.

Cheers,
flaper


If you have anything else you would like to discuss, please add it to the
agenda.

See you Friday!
Doug


[1] https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting






--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-28 Thread John Garbutt
On 27 January 2014 14:52, Justin Santa Barbara jus...@fathomdb.com wrote:
 Day, Phil wrote:


  We already have a mechanism now where an instance can push metadata as
  a way of Windows instances sharing their passwords - so maybe this
  could
  build on that somehow - for example each instance pushes the data it's
  willing to share with other instances owned by the same tenant?
 
  I do like that and think it would be very cool, but it is much more
  complex to
  implement I think.

 I don't think it's that complicated - it just needs one extra attribute stored
 per instance (for example into instance_system_metadata) which allows the
 instance to be included in the list


 Ah - OK, I think I better understand what you're proposing, and I do like
 it.  The hardest bit of having the metadata store be full read/write would
 be defining what is and is not allowed (rate-limits, size-limits, etc).  I
 worry that you end up with a new key-value store, and with per-instance
 credentials.  That would be a separate discussion: this blueprint is trying
 to provide a focused replacement for multicast discovery for the cloud.

 But: thank you for reminding me about the Windows password...  It may
 provide a reasonable model:

 We would have a new endpoint, say 'discovery'.  An instance can POST a
 single string value to the endpoint.  A GET on the endpoint will return any
 values posted by all instances in the same project.

 One key only; name not publicly exposed ('discovery_datum'?); 255 bytes of
 value only.

 I expect most instances will just post their IPs, but I expect other uses
 will be found.

 If I provided a patch that worked in this way, would you/others be on-board?

I like that idea. Seems like a good compromise. I have added my review
comments to the blueprint.

We have a related blueprint going on, for setting metadata on a
particular server rather than a group:
https://blueprints.launchpad.net/nova/+spec/metadata-service-callbacks

It limits things using the existing quota on metadata updates.

It would be good to agree on a similar format between the two.
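
To make the proposal concrete, here is a rough sketch of how a guest
might use such an endpoint (the 'discovery' path and the response
format are hypothetical, taken from the proposal above; no such API
exists today):

    # Hypothetical guest-side usage of the proposed 'discovery'
    # endpoint -- illustrative only.
    import json
    import urllib2  # Python 2, as OpenStack used at the time

    METADATA = 'http://169.254.169.254/openstack/latest/discovery'

    # Each instance POSTs a single string value (here, its own IP).
    urllib2.urlopen(urllib2.Request(METADATA, data='10.0.0.5'))

    # A GET returns the values posted by all instances in the project.
    peers = json.loads(urllib2.urlopen(METADATA).read())
    print(peers)  # e.g. ['10.0.0.5', '10.0.0.6']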

John



Re: [openstack-dev] Re: Proposed Logging Standards

2014-01-28 Thread Rossella Sblendido
Hi Sean,

good idea, count me in!
I think it makes the most sense for me to help in Neutron, as that's the
project I'm most familiar with.

Rossella


On Tue, Jan 28, 2014 at 7:20 AM, Haiming Yang laserjety...@gmail.comwrote:

  I think it is also good for the general i18n effort
  --
 From: Christopher Yeoh cbky...@gmail.com
 Sent: 2014/1/28 11:02
 To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Proposed Logging Standards

  On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net wrote:

 On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
  Hi Sean,
 
  I'm currently working on moving away from the built-in logging to use
 log_config=filename and the python logging framework so that we can start
 shipping to logstash/sentry/insert other useful tool here.
 
  I'd be very interested in getting involved in this, especially from a
 why do we have log messages that are split across multiple lines
 perspective!

 Do we have many that aren't either DEBUG or TRACE? I thought we were
 pretty clean there.

  Cheers,
 
  Matt
 
  P.S. FWIW, I'd also welcome details on what the Audit level gives us
 that the others don't... :)

 Well, as far as I can tell the AUDIT level was a prior drive-by
 contribution that's not being actively maintained. Honestly, I think we
 should probably rip it out, because I don't see any in-tree tooling to
 use it, and it's horribly inconsistent.


 For the uses I've seen of it in the nova api code INFO would be perfectly
 fine in place of AUDIT.

 I'd be happy to help out with patches to clean up the logging in n-api.
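
 For illustration, the kind of mechanical change being suggested for
 n-api (the message below is a hypothetical example, not a specific
 patch):

     # Hypothetical before/after for the AUDIT -> INFO cleanup; the
     # log message itself is illustrative, not from a specific patch.
     LOG.audit(_("Rebooting instance %s"), instance_id)   # before
     LOG.info(_("Rebooting instance %s"), instance_id)    # after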

 One other thing to look at - I've noticed with logs that when something
 like glanceclient code (just as an example) is called from nova,
 we can get ERROR level messages for, say, 'image not found' when it's
 actually perfectly expected that this will occur.
 I'm not sure if we should be changing the error level in glanceclient or
 just forcing any error logging in glanceclient when
 called from Nova to a lower level, though.

 Chris



Re: [openstack-dev] Nova style cleanups with associated hacking check addition

2014-01-28 Thread John Garbutt
On 27 January 2014 10:10, Daniel P. Berrange berra...@redhat.com wrote:
 On Fri, Jan 24, 2014 at 11:42:54AM -0500, Joe Gordon wrote:
 On Fri, Jan 24, 2014 at 7:24 AM, Daniel P. Berrange 
 berra...@redhat.comwrote:

  Periodically I've seen people submit big coding style cleanups to Nova
  code. These are typically all good ideas / beneficial, however, I have
  rarely (perhaps even never?) seen the changes accompanied by new hacking
  check rules.
 
  The problem with not having a hacking check added *in the same commit*
  as the cleanup is two-fold
 
   - No guarantee that the cleanup has actually fixed all violations
 in the codebase. Have to trust the thoroughness of the submitter
 or do a manual code analysis yourself as reviewer. Both suffer
 from human error.
 
   - Future patches will almost certainly re-introduce the same style
 problems again and again and again and again and again and again
 and again and again and again I could go on :-)
 
  I don't mean to pick on one particular person, since it isn't their
  fault that reviewers have rarely/never encouraged people to write
  hacking rules, but to show one example The following recent change
  updates all the nova config parameter declarations cfg.XXXOpt(...) to
  ensure that the help text was consistently styled:
 
https://review.openstack.org/#/c/67647/
 
  One of the things it did was to ensure that the help text always started
  with a capital letter. Some of the other things it did were more subtle
  and hard to automate a check for, but an 'initial capital letter' rule
  is really straightforward.
 
  By updating nova/hacking/checks.py to add a new rule for this, it was
  found that there were another 9 files which had incorrect capitalization
  of their config parameter help. So the hacking rule addition clearly
  demonstrates its value here.

 This sounds like a rule that we should add to
 https://github.com/openstack-dev/hacking.git.

 Yep, it could well be added there. I figure rules added to Nova can
 be upstreamed to the shared module periodically.

+1

I worry about diverging, but I guess that's always going to happen here.

  I will concede that documentation about /how/ to write hacking checks
  is not entirely awesome. My current best advice is to look at how some
  of the existing hacking checks are done - find one that is checking
  something that is similar to what you need and adapt it. There are a
  handful of Nova specific rules in nova/hacking/checks.py, and quite a
  few examples in the shared repo
  https://github.com/openstack-dev/hacking.git
  see the file hacking/core.py. There's some very minimal documentation
  about variables your hacking check method can receive as input
  parameters
  https://github.com/jcrocholl/pep8/blob/master/docs/developer.rst
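
  As a concrete illustration, a minimal check in that yield-based style
  might look like this (the rule number, regex and wording are
  illustrative, not the actual nova rule):

      import re

      # Illustrative only -- a minimal pep8/hacking-style check, not
      # the actual rule added to nova/hacking/checks.py. It flags cfg
      # option help strings that start with a lowercase letter.
      cfg_opt_help_re = re.compile(r'help=[\'"]([a-z])')

      def check_opt_help_capitalized(logical_line):
          """Config option help text should start with a capital letter.

          N9xx: hypothetical rule number.
          """
          match = cfg_opt_help_re.search(logical_line)
          if match:
              yield (match.start(),
                     "N9xx: config option help should start with a "
                     "capital letter")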
 
 
  In summary, if you are doing a global coding style cleanup in Nova for
  something which isn't already validated by pep8 checks, then I strongly
  encourage additions to nova/hacking/checks.py to validate the cleanup
  correctness. Obviously with some style cleanups, it will be too complex
  to write logic rules to reliably validate code, so this isn't a code
  review point that must be applied 100% of the time. Reasonable personal
  judgement should apply. I will try to comment on any style cleanups I see
  where I think it is practical to write a hacking check.
 

 I would take this even further, I don't think we should accept any style
 cleanup patches that can be enforced with a hacking rule and aren't.

 IMHO that would mostly just serve to discourage people from submitting
 style cleanup patches because it is too much stick, not enough carrot.
 Realistically for some types of style cleanup, the effort involved in
 writing a style checker that does not have unacceptable false positives
 will be too high to justify. So I think a pragmatic approach to enforcement
 is more suitable.

+1

I would love to enforce it 100% of the time, but sometimes it's hard to
write the rules even though the cleanup is still useful. Let's see how
it goes, I guess.

John



Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 28th

2014-01-28 Thread John Garbutt
On 27 January 2014 21:36, Ian Wells ijw.ubu...@cack.org.uk wrote:
 Live migration for the first release is intended to be covered by macvtap,
 in my mind - direct mapped devices have limited support in hypervisors aiui.
 It seemed we had a working theory for that, which we can test out to see
 if it's going to work.
 --
 Ian.


 On 27 January 2014 21:38, Robert Li (baoli) ba...@cisco.com wrote:

 Hi Folks,

 Check out the agenda for Jan 28th, 2014. Please update it if I have missed
 anything. Let's finalize who's doing what tomorrow.

 I'm thinking of working on the nova SRIOV items, but the live migration
 may be a stretch for the initial release.

+1

I like the idea of getting the basic plumbing in place, and doing work
to double check that live-migrate should work, but in reality, it's going
to be very tight to get any code into Nova at this stage in the cycle
(given the mountain of code already waiting).

John



Re: [openstack-dev] [nova] Discuss the option delete_on_termination

2014-01-28 Thread John Garbutt
On 26 January 2014 09:23, 黎林果 lilinguo8...@gmail.com wrote:
 Hi,

 I have started the implementation, Please review. Address:
 https://review.openstack.org/#/c/67067/

 Thanks.

 Regards,
 Lee

Hi Lee,

I have added the same comments as Christopher to the blueprint.

Please look at updating the blueprint's milestone, and ensuring the
plan in the blueprint is up to date, and only includes Nova work.

Thanks,
John



 2014/1/9 Christopher Yeoh cbky...@gmail.com:
 On Thu, Jan 9, 2014 at 5:00 PM, 黎林果 lilinguo8...@gmail.com wrote:

 Oh, I see. Thank you very much.
 It's just hard coded for attaching volume and swapping volume.

 How should we deal with the bp:

 https://blueprints.launchpad.net/nova/+spec/add-delete-on-termination-option
 ?


 So I think the only thing left in your bp would be adding a delete on
 terminate option when attaching a volume to an existing server, plus the
 novaclient changes. So I'd clean up the blueprint and then set the milestone
 target to icehouse-3, which will trigger it to get reviewed. Perhaps
 consider whether it's reasonable to just apply this to the V3 API rather than
 doing an enhancement for both the V2 and V3 APIs.

 Regards,

 Chris


 2014/1/9 Christopher Yeoh cbky...@gmail.com:
  On Thu, Jan 9, 2014 at 2:35 PM, 黎林果 lilinguo8...@gmail.com wrote:
 
  Hi Chris,
  Thanks for you reply.
 
  It's not only hard-coded for swap volumes. In the '_create_instance'
  function of nova/compute/api.py, which creates instances, the
  '_prepare_image_mapping' function is called, and it hard-codes the
  value to True, too.
 
  values = block_device.BlockDeviceDict({
      'device_name': bdm['device'],
      'source_type': 'blank',
      'destination_type': 'local',
      'device_type': 'disk',
      'guest_format': guest_format,
      'delete_on_termination': True,
      'boot_index': -1})
 
 
  Just before that in _prepare_image_mapping is:
 
  if virtual_name == 'ami' or virtual_name == 'root':
  continue
 
  if not block_device.is_swap_or_ephemeral(virtual_name):
  continue
 
 
  Chris
 
 


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Tzu-Mainn Chen
I've spent some time thinking about this, and I have a clarification.

I don't like the use of the word 'deployment', because it's not exact
enough for me.  Future plans for the tuskar-ui include management of the
undercloud as well, and at that point, 'deployment role' becomes vague, as
it could also logically apply to the undercloud.

For that reason, I think we should call it an 'overcloud deployment role',
or 'overcloud role' for short.

That being said, I think that the UI could get away with just displaying
'Role', as presumably the user would be in a page with enough context to
make it clear that he's in the overcloud section.


Mainn

- Original Message -
 I'd argue that we should call it 'overcloud role' - at least from the
 modeling
 point of view - since the tuskar-api calls a deployment an overcloud.
 
 But I like the general direction of the term-renaming!
 
 Mainn
 
 - Original Message -
  Based on this thread which didn't seem to get clear outcome, I have one
  last suggestion:
  
  * Deployment Role
  
  It looks like it might satisfy the participants of this discussion. When I
  talked to people internally, it got the best reactions of the already
  suggested terms.
  
  Depending on your reactions for this suggestion, if we don't get to
  agreement of majority by the end of the week, I would call for voting
  starting next week.
  
  Thanks
  -- Jarda
  
  On 2014/21/01 15:19, Jaromir Coufal wrote:
   Hi folks,
  
   when I was getting feedback on wireframes and we talked about Roles,
   there were various objections and not many suggestions. I would love to
   call for action and think a bit about the term for the concept currently
   known as Role (= Resource Category).
  
   Definition:
   Role is a representation of a group of nodes, with specific behavior.
   Each role contains (or will contain):
   * one or more Node Profiles (which specify HW which is going in)
   * association with image (which will be provisioned on new coming nodes)
   * specific service settings
  
   So far suggested terms:
   * Role *
  - short name - plus points
  - quite overloaded term (user role, etc)
  
   * Resource Category *
  - pretty long (devs already shorten it - confusing)
  - Heat specific term
  
   * Resource Class *
  - older term
  
   Are there any other suggestions (ideally something short and accurate)?
   Or do you prefer any of already suggested terms?
  
   Any ideas are welcome - we are not very good at finding the best match
   for this particular term.
  
   Thanks
   -- Jarda
  


Re: [openstack-dev] Proposal for instance-level snapshots in Nova

2014-01-28 Thread John Garbutt
On 24 January 2014 17:05, Jon Bernard jbern...@tuxion.com wrote:
 * Vishvananda Ishaya vishvana...@gmail.com wrote:

 On Jan 16, 2014, at 1:28 PM, Jon Bernard jbern...@tuxion.com wrote:

  * Vishvananda Ishaya vishvana...@gmail.com wrote:
 
  On Jan 14, 2014, at 2:10 PM, Jon Bernard jbern...@tuxion.com wrote:
 
 
  snip
  As you've defined the feature so far, it seems like most of it could
  be implemented client side:
 
  * pause the instance
  * snapshot the instance
  * snapshot any attached volumes
 
  For the first milestone to offer crash-consistent snapshots you are
  correct.  We'll need some additional support from libvirt, but the
  patchset should be straightforward.  The biggest question I have
  surrounding initial work is whether to use an existing API call or
  create a new one.
 
 
  I think you might have missed the client side part of this point. I agree
  that snapshotting multiple volumes and packaging them up is valuable, but I
  was trying to make the point that you could do all of this stuff client side
  if you just add support for snapshotting ephemeral drives. An all-in-one
  snapshot command could be valuable, but you are talking about 
  orchestrating
  a lot of commands between nova, glance, and cinder and it could get kind
  of messy to try to run the whole thing from nova.
 
  If you expose each primitive required, then yes, the client could
  implement the logic to call each primitive in the correct order, handle
  error conditions, and exit while leaving everything in the correct
  state.  But that would mean you would have to implement it twice - once
  in python-novaclient and once in Horizon.  I would speculate that doing
  this on the client would be even messier.
 
  If you are concerned about the complexity of the required interactions,
  we could narrow the focus in this way:
 
   Let's say that taking a full snapshot/backup (all volumes) operates
   only on persistent storage volumes.  Users who booted from an
   ephemeral glance image shouldn't expect this feature because, by
   definition, the boot volume is not expected to live a long life.
 
  This should limit the communication to Nova and Cinder, while leaving
  Glance out (initially).  If the user booted an instance from a cinder
  volume, then we have all the volumes necessary to create an OVA and
  import to Glance as a final step.  If the boot volume is an image then
  I'm not sure, we could go in a few directions:
 
   1. No OVA is imported due to lack of boot volume
   2. A copy of the original image is included as a boot volume to create
  an OVA.
   3. Something else I am failing to see.

 
  If [2] seems plausible, then it probably makes sense to just ask glance
  for an image snapshot from nova while the guest is in a paused state.
 
  Thoughts?

 This already exists. If you run a snapshot command on a volume backed 
 instance
 it snapshots all attached volumes. Additionally it does throw a bootable 
 image
 into glance referring to all of the snapshots.  You could modify create image
 to do this for regular instances as well, specifying block device mapping but
 keeping the vda as an image. It could even do the same thing with the 
 ephemeral
 disk without a ton of work. Keeping this all as one command makes a lot of 
 sense
 except that it is unexpected.

 There is a benefit to only snapshotting the root drive sometimes because it
 keeps the image small. Here's what I see as the ideal end state:

  Two commands (names are a strawman):
   create-full-image -- image all drives
   create-root-image -- image just the root drive

 These should work the same regardless of whether the root drive is
 volume-backed, instead of the craziness we have today of volume-backed
 snapshotting all drives and instance-backed snapshotting just the root.
 I'm not sure how we manage expectations based on the current implementation,
 but perhaps the best idea is just adding this in v3 with new names?

 FYI the whole OVA thing seems moot since we already have a way of
 representing multiple drives in glance via block_device_mapping properties.
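
 To illustrate the strawman above, a sketch of what the split might look
 like as hypothetical novaclient calls (neither method exists; today
 there is only create_image(), whose behaviour depends on whether the
 instance is volume-backed):

     # Hypothetical client-side view of the two strawman commands.
     from novaclient.v1_1 import client

     # Placeholder credentials for the sketch.
     nova = client.Client('user', 'password', 'project',
                          'http://auth:5000/v2.0')
     server = nova.servers.find(name='my-server')

     # Hypothetical: image the root drive plus every attached volume.
     nova.servers.create_full_image(server, 'my-server-full')

     # Hypothetical: image just the root drive, keeping the image small.
     nova.servers.create_root_image(server, 'my-server-root')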

 I've had some time to look closer at nova and rethink things a bit and
 I see what you're saying.  You are correct, taking snapshots of attached
 volumes is currently supported - although not in the way that I would
 like to see.  And this is where I think we can improve.

 Let me first summarize my understanding of what we currently have.
  There are three ways of creating a snapshot-like thing in Nova:

   1. create_image - takes a snapshot of the root volume and may take
  snapshots of the attached volumes depending on the volume type of
  the root volume.  I/O is not quiesced.

   2. create_backup - takes a snapshot of the root volume with options
  to specify how often to repeat and how many previous snapshots to
  keep around. I/O is not quiesced.

   3. os-assisted-snapshot - takes a snapshot of a single cinder volume.
  The volume is first quiesced 

Re: [openstack-dev] [Ceilometer] Issues with transformers

2014-01-28 Thread Julien Danjou
On Mon, Jan 27 2014, Adrian Turjak wrote:

 Is there a way to transform data once it reaches the collector?

No, but that would probably be the easiest way.

The other alternative is to compute it via the API.

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Thierry Carrez
Sean Dague wrote:
 Back at the beginning of the cycle, I pushed for the idea of doing some
 log harmonization, so that the OpenStack logs, across services, made
 sense. I've pushed a proposed changes to Nova and Keystone over the past
 couple of days.
 
 This is going to be a long process, so right now I want to just focus on
 making INFO level sane, because as someone that spends a lot of time
 staring at logs in test failures, I can tell you it currently isn't.
 
 https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
 written down so far, comments welcomed.
 
 We kind of need to solve this set of recommendations once and for all up
 front, because negotiating each change, with each project, isn't going
 to work (e.g - https://review.openstack.org/#/c/69218/)
 
 What I'd like to find out now:
 
 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for various log
 levels?
 3) who's interested in helping get these kinds of patches into various
 projects in OpenStack?
 4) which projects are interested in participating (i.e. interested in
 prioritizing landing these kinds of UX improvements)
 
 This is going to be progressive and iterative. And will require lots of
 folks involved.

I'm interested too (though I have no idea how much time I'll be able to
dedicate to it).

Sounds like a good area for new contributors too.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [Climate] [Ceilometer] Integration

2014-01-28 Thread Julien Danjou
On Mon, Jan 27 2014, Sylvain Bauza wrote:

 Great! Thanks for your support. Supporting physical host reservations
 also means we need to dedicate hosts to Climate (i.e. putting these
 hosts into a specific aggregate, thanks to a dedicated Climate API). Even
 if this dedication is admin-only, I think it would be nice to get the
 events related to it in Ceilometer, so we could in the future plan to have
 a kind of autoscaling thanks to Heat.
 At least, having these hosts in Ceilometer is good for scalability purposes.

Sure, so that'd be a new resource to meter for? It's not clear to me
where it fits in the resource model of Climate.

 Well, that's a really good question. AFAIK, the notifications' BP was
 about sending emails for being notified outside of OpenStack, but that's
 something which needs to be discussed to see how we can leverage all those
 Keystone/Marconi/Ceilometer concerns. See the etherpad Dina provided for
 putting your comments on the notifications part, so we could discuss it at
 an upcoming weekly meeting.

I've added a note in the Etherpad.

-- 
Julien Danjou
/* Free Software hacker * independent consultant
   http://julien.danjou.info */




Re: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/28

2014-01-28 Thread Khanh-Toan Tran
Dear all,

If we have time, I would like to discuss our new blueprint: 
Policy-Based-Scheduler:
https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler
whose code is ready for review:
https://review.openstack.org/#/c/61386/


Best regards,

Toan

- Original Message -
From: Donald D Dugger donald.d.dug...@intel.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, January 28, 2014 4:46:12 AM
Subject: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/28

1) Memcached based scheduler updates
2) Scheduler code forklift
3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] Hacking repair scripts

2014-01-28 Thread Chmouel Boudjnah
On Tue, Jan 28, 2014 at 2:13 AM, Joshua Harlow harlo...@yahoo-inc.comwrote:

 Thought people might find it useful and it could become a part of
 automatic repairing/style adjustments in the future (similar to I guess
 what go has in `gofmt`).



Nice, it would be cool if this could be hooked directly into editors (i.e.
emacs/vim).

Chmouel.


Re: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/28

2014-01-28 Thread Gary Kotton
Hi,

Just an update regarding the instance groups:
1. The API has been posted - https://review.openstack.org/#/c/62557/
2. The CLI has been posted - https://review.openstack.org/#/c/32904/

Thanks
Gary

On 1/28/14 12:45 PM, Khanh-Toan Tran khanh-toan.t...@cloudwatt.com
wrote:

Dear all,

If we have time, I would like to discuss our new blueprint:
Policy-Based-Scheduler:

https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler
whose code is ready for review:

https://review.openstack.org/#/c/61386/


Best regards,

Toan

- Original Message -
From: Donald D Dugger donald.d.dug...@intel.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Tuesday, January 28, 2014 4:46:12 AM
Subject: [openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/28

1) Memcached based scheduler updates
2) Scheduler code forklift
3) Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] [nova] vmware minesweeper jobs failing?

2014-01-28 Thread Gary Kotton
Hi,
Last week there were some infra problems. These have been addressed and
minesweeper is back in action.
Thanks
Gary

On 1/25/14 4:58 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

Seeing a few patches where vmware minesweeper is saying the build
failed, but looks like an infra issue? An example:

http://208.91.1.172/logs/nova/69046/1/console.txt

-- 

Thanks,

Matt Riedemann




Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Sean Dague
On 01/27/2014 09:57 PM, Christopher Yeoh wrote:
 On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net
 mailto:s...@dague.net wrote:
 
 On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
  Hi Sean,
 
  I'm currently working on moving away from the built-in logging
 to use log_config=filename and the python logging framework so
 that we can start shipping to logstash/sentry/insert other useful
 tool here.
 
  I'd be very interested in getting involved in this, especially
 from a why do we have log messages that are split across multiple
 lines perspective!
 
 Do we have many that aren't either DEBUG or TRACE? I thought we were
 pretty clean there.
 
  Cheers,
 
  Matt
 
  P.S. FWIW, I'd also welcome details on what the Audit level
 gives us that the others don't... :)
 
 Well, as far as I can tell the AUDIT level was a prior drive-by
 contribution that's not being actively maintained. Honestly, I think we
 should probably rip it out, because I don't see any in tree tooling to
 use it, and it's horribly inconsistent.
 
 
 For the uses I've seen of it in the nova api code INFO would be
 perfectly fine in place of AUDIT.
 
 I'd be happy to help out with patches to cleanup the logging in n-api.
 
 One other thing to look at - I've noticed with logs that when
 something like glanceclient code (just as an example) is called from nova,
 we can get ERROR level messages for, say, 'image not found' when it's
 actually perfectly expected that this will occur.
 I'm not sure if we should be changing the error level in glanceclient or
 just forcing any error logging in glanceclient when
 called from Nova to a lower level though.

It's now changed in glanceclient -
https://review.openstack.org/#/c/67744/ - it should be gone in the gate
logs, and will be gone for everyone once a new release is out.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [savanna] Savanna 2014.1.b2 (Icehouse-2) dev milestone available

2014-01-28 Thread Sergey Lukjanov
Hi Matt,

thank you for pushing Savanna packages to RDO and F20, that's really great!

P.S. fixes for stevedore dep - https://review.openstack.org/#/c/69496/ and
https://review.openstack.org/#/c/69571/


On Tue, Jan 28, 2014 at 2:40 AM, Matthew Farrellee m...@redhat.com wrote:

 On 01/23/2014 11:59 AM, Sergey Lukjanov wrote:

 Hi folks,

 the second development milestone of Icehouse cycle is now available for
 Savanna.

 Here is a list of new features and fixed bugs:

 https://launchpad.net/savanna/+milestone/icehouse-2

 and here you can find tarballs to download it:

 http://tarballs.openstack.org/savanna/savanna-2014.1.b2.tar.gz
 http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b2.tar.gz
 http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b2.tar.gz
 http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b2.tar.gz

 There are 15 blueprints implemented and 37 bugs fixed during the milestone.
 It includes the savanna, savanna-dashboard, savanna-image-elements and
 savanna-extra sub-projects. In addition, python-savannaclient 0.4.1, which
 was released earlier this week, supports all new features introduced in
 this savanna release.

 Please note that the next milestone, icehouse-3, is scheduled for
 March 6th.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.


 rdo packages -

 el6 -
 savanna - http://koji.fedoraproject.org/koji/buildinfo?buildID=494307
 savanna-dashboard - http://koji.fedoraproject.org/koji/buildinfo?buildID=494286

 f20 -
 savanna - https://admin.fedoraproject.org/updates/openstack-savanna-2014.1.b2-3.fc20
 savanna-dashboard - https://admin.fedoraproject.org/updates/python-django-savanna-2014.1.b2-1.fc20

 notes -
  . you need paramiko >= 1.10.1 (http://koji.fedoraproject.org/koji/buildinfo?buildID=492749)
  . you need stevedore >= 0.13 (http://koji.fedoraproject.org/koji/buildinfo?buildID=494300)
    (https://bugs.launchpad.net/savanna/+bug/1273459)

 best,


 matt






-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [savanna] why swift-internal:// ?

2014-01-28 Thread Sergey Lukjanov
Matt, thanks for catching this.

BTW That's an interesting idea of supporting different tenants.


On Fri, Jan 24, 2014 at 11:04 PM, Matthew Farrellee m...@redhat.com wrote:

 thanks for all the feedback folks.. i've registered a bp for this...

 https://blueprints.launchpad.net/savanna/+spec/swift-url-proto-cleanup


 On 01/24/2014 11:30 AM, Sergey Lukjanov wrote:

 Looks like we need to review the prefixes and clean them up. After a first
 look, I like the idea of using a common prefix for swift data.


 On Fri, Jan 24, 2014 at 7:05 PM, Trevor McKay tmc...@redhat.com
 mailto:tmc...@redhat.com wrote:

 Matt et al,

Yes, swift-internal was meant as a marker to distinguish it from
 swift-external someday. I agree, this could be indicated by setting
 other fields.

 Little bit of implementation detail for scope:

In the current EDP implementation, SWIFT_INTERNAL_PREFIX shows up
 in
 essentially two places.  One is validation (pretty easy to change).

The other is in Savanna's binary_retrievers module where, as others
 suggested, the auth url (proto, host, port, api) and admin tenant from
 the savanna configuration are used with the user/password to make a
 connection through the swift client.

Handling of different types of job binaries is done in
 binary_retrievers/dispatch.py, where the URL determines the treatment.
 This could easily be extended to look at other indicators.

 Best,

 Trev
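
 For context, the kind of prefix-based dispatch being described might
 look roughly like this (a simplified sketch, not Savanna's actual
 binary_retrievers/dispatch.py code):

     # Simplified sketch of URL-prefix dispatch as described above --
     # illustrative only.
     SWIFT_INTERNAL_PREFIX = 'swift-internal://'
     SAVANNA_DB_PREFIX = 'savanna-db://'

     def _get_from_internal_swift(url, user, password):
         raise NotImplementedError  # helper elided in this sketch

     def _get_from_internal_db(url):
         raise NotImplementedError  # helper elided in this sketch

     def get_raw_binary(url, user, password):
         if url.startswith(SWIFT_INTERNAL_PREFIX):
             # Auth url, port and admin tenant come from savanna's own
             # configuration; only user/password come from the binary.
             return _get_from_internal_swift(url, user, password)
         if url.startswith(SAVANNA_DB_PREFIX):
             return _get_from_internal_db(url)
         # The spot that could be extended to look at other indicators,
         # e.g. a future swift-external:// with its own auth url and
         # tenant.
         raise ValueError('Unknown job binary URL scheme: %s' % url)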

 On Fri, 2014-01-24 at 07:50 -0500, Matthew Farrellee wrote:
   andrew,
  
   what about having swift:// which defaults to the configured tenant and
   auth url for what we now call swift-internal, and allowing user
   input to change the tenant and auth url for what would be
   swift-external?
  
   in fact, we may need to add the tenant selection in icehouse. it's
 a
   pretty big limitation to only allow a single tenant.
  
   best,
  
  
   matt
  
   On 01/23/2014 11:15 PM, Andrew Lazarev wrote:
Matt,
   
For swift-internal we are using the same keystone (and identity
 protocol
version) as for savanna. Also savanna admin tenant is used.
   
Thanks,
Andrew.
   
   
On Thu, Jan 23, 2014 at 6:17 PM, Matthew Farrellee
 m...@redhat.com mailto:m...@redhat.com
mailto:m...@redhat.com mailto:m...@redhat.com wrote:
   
what makes it internal vs external?
   
 swift-internal needs user & pass

 swift-external needs user & pass & ?auth url?
   
best,
   
   
matt
   
On 01/23/2014 08:43 PM, Andrew Lazarev wrote:
   
Matt,
   
    I can easily imagine a situation where job binaries are stored in an
    external HDFS or an external SWIFT (like data sources). Internal and
    external swifts are different since we need additional credentials.
   
Thanks,
Andrew.
   
   
On Thu, Jan 23, 2014 at 5:30 PM, Matthew Farrellee
m...@redhat.com mailto:m...@redhat.com
 mailto:m...@redhat.com mailto:m...@redhat.com
mailto:m...@redhat.com mailto:m...@redhat.com
 mailto:m...@redhat.com mailto:m...@redhat.com wrote:
   
 trevor,
   
 job binaries are stored in swift or an internal
 savanna db,
 represented by swift-internal:// and savanna-db://
respectively.
   
 why swift-internal:// and not just swift://?
   
 fyi, i see mention of a potential future version
 of savanna w/
 swift-external://
   
 best,
   
   
 matt
   

Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Sergey Lukjanov
Hi Sean,

it's great that you're catching this up.

I'd like to participate. I don't know how much time I'll be able to
dedicate to it, but at least I'm ready to do reviews and push it into
Savanna.

Thanks!


On Tue, Jan 28, 2014 at 3:21 PM, Sean Dague s...@dague.net wrote:

 On 01/27/2014 09:57 PM, Christopher Yeoh wrote:
  On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
   Hi Sean,
  
   I'm currently working on moving away from the built-in logging
  to use log_config=filename and the python logging framework so
  that we can start shipping to logstash/sentry/insert other useful
  tool here.
  
   I'd be very interested in getting involved in this, especially
  from a why do we have log messages that are split across multiple
  lines perspective!
 
  Do we have many that aren't either DEBUG or TRACE? I thought we were
  pretty clean there.
 
   Cheers,
  
   Matt
  
   P.S. FWIW, I'd also welcome details on what the Audit level
  gives us that the others don't... :)
 
  Well, as far as I can tell the AUDIT level was a prior drive-by
  contribution that's not being actively maintained. Honestly, I think
 we
  should probably rip it out, because I don't see any in tree tooling
 to
  use it, and it's horribly inconsistent.
 
 
  For the uses I've seen of it in the nova api code INFO would be
  perfectly fine in place of AUDIT.
 
  I'd be happy to help out with patches to cleanup the logging in n-api.
 
  One other thing to look at - I've noticed with logs is that when
  something like glanceclient code (just as an example) is called from
 nova,
  we can get ERROR level messages for, say, 'image not found' when it's
  actually perfectly expected that this will occur.
  I'm not sure if we should be changing the error level in glanceclient or
  just forcing any error logging in glanceclient when
  called from Nova to a lower level though.

 It's now changed in glanceclient -
 https://review.openstack.org/#/c/67744/ - it should be gone in the gate
 logs, and will be gone for everyone once a new release is out.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net






-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Sergey Lukjanov
FYI it was added to the project meeting agenda -
https://wiki.openstack.org/wiki/Meetings/ProjectMeeting


On Tue, Jan 28, 2014 at 3:42 PM, Sergey Lukjanov slukja...@mirantis.comwrote:

 Hi Sean,

 it's great that you're catching this up.

 I'd like to participate. I don't know how much time I'll be able to
 dedicate on it, but at least I'm ready for reviews and pushing it to
 Savanna.

 Thanks!


 On Tue, Jan 28, 2014 at 3:21 PM, Sean Dague s...@dague.net wrote:

 On 01/27/2014 09:57 PM, Christopher Yeoh wrote:
  On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
   Hi Sean,
  
   I'm currently working on moving away from the built-in logging
  to use log_config=filename and the python logging framework so
  that we can start shipping to logstash/sentry/insert other useful
  tool here.
  
   I'd be very interested in getting involved in this, especially
  from a why do we have log messages that are split across multiple
  lines perspective!
 
  Do we have many that aren't either DEBUG or TRACE? I thought we were
  pretty clean there.
 
   Cheers,
  
   Matt
  
   P.S. FWIW, I'd also welcome details on what the Audit level
  gives us that the others don't... :)
 
  Well, as far as I can tell the AUDIT level was a prior drive-by
  contribution that's not being actively maintained. Honestly, I
 think we
  should probably rip it out, because I don't see any in tree tooling
 to
  use it, and it's horribly inconsistent.
 
 
  For the uses I've seen of it in the nova api code INFO would be
  perfectly fine in place of AUDIT.
 
  I'd be happy to help out with patches to cleanup the logging in n-api.
 
  One other thing to look at - I've noticed with logs is that when
  something like glanceclient code (just as an example) is called from
 nova,
  we can get ERROR level messages for, say, 'image not found' when it's
  actually perfectly expected that this will occur.
  I'm not sure if we should be changing the error level in glanceclient or
  just forcing any error logging in glanceclient when
  called from Nova to a lower level though.

 It's now changed in glanceclient -
 https://review.openstack.org/#/c/67744/ - it should be gone in the gate
 logs, and will be gone for everyone once a new release is out.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net






 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [nova] [stable] stevedore 0.14

2014-01-28 Thread Alan Pevec
2014-01-27 Doug Hellmann doug.hellm...@dreamhost.com:
 I have just released a new version of stevedore, 0.14, which includes a
 change to stop checking version numbers of dependencies for plugins. This
 should eliminate one class of problems we've seen where we get conflicting
 requirements to install, and the libraries are compatible, but the way
 stevedore was using pkg_resources was causing errors when the plugins were
 loaded.

Thanks, that will be useful, especially for stable releases. But it looks
like this broke the Nova unit tests on stable/*:
 
http://lists.openstack.org/pipermail/openstack-stable-maint/2014-January/002055.html

Master is not affected; here's the latest successful run from a few minutes ago:
 http://logs.openstack.org/11/69411/2/check/gate-nova-python27/8252be2/

Shall we pin stevedore on stable/* or fix the tests on stable/* ?

Cheers,
Alan



Re: [openstack-dev] [nova] [stable] stevedore 0.14

2014-01-28 Thread Alan Pevec
 ... or fix the tests on stable/* ?

That would be:
https://review.openstack.org/#/q/I5063c652c705fd512f90ff3897a4c590f7ba7c02,n,z
and is already proposed for Havana.
Sean, please submit it for stable/grizzly too.

Thanks,
Alan



Re: [openstack-dev] [nova] [stable] stevedore 0.14

2014-01-28 Thread Sean Dague
On 01/28/2014 07:23 AM, Alan Pevec wrote:
 ... or fix the tests on stable/* ?
 
 That would be:
 https://review.openstack.org/#/q/I5063c652c705fd512f90ff3897a4c590f7ba7c02,n,z
 and is already proposed for Havana.
 Sean, please submit it for stable/grizzly too.

Doing now, I had done it locally but apparently forgot to run git review
last night.

https://review.openstack.org/69584

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





[openstack-dev] [Nova] icehouse-3 deadlines

2014-01-28 Thread Russell Bryant
Greetings,

I wanted to draw some attention to deadlines we will be using in Nova
for icehouse-3 to help trim our roadmap down to what we think can get
in.  We currently have 153 blueprints targeted to icehouse-3, which is
far from realistic.

https://launchpad.net/nova/+milestone/icehouse-3


Deadline #1) Blueprint approval deadline

All blueprints *must* be approved by February 4 (1 week from today).
Otherwise, they will be deferred to Juno.

If your blueprint is in New or Pending Approval, then it's queued up
for review and you're fine.  If it's in some other state, such as
Review or Discussion, then it's waiting on action from you before it
can be approved.  Please change the state to Pending Approval when
you're ready for someone to look at it again.


Deadline #2) Work Started

Shortly after the blueprint approval deadline, any blueprint still
marked as Not Started will be deferred to Juno.  Please make sure your
blueprints reflect the true current state.


Deadline #3) Feature Proposal Deadline

As proposed here:


http://lists.openstack.org/pipermail/openstack-dev/2014-January/025292.html

We're currently planning on requiring code for blueprints to be posted
for review by February 18th.


Thanks,

-- 
Russell Bryant



Re: [openstack-dev] [Keystone] bp proposal: quotas on users and projects per domain

2014-01-28 Thread Florent Flament
Hi Jamie,

Indeed, it is more important to be able to set quotas on every
resource in the context of public clouds than in the context of
private clouds. With a public cloud, we cannot assume that a user will
not (deliberately or not) create millions of users if they can.

I agree that there are several ways to implement quotas (e.g. in a
dedicated service or on the backend). From my point of view, it makes
sense to have it implemented in Keystone, since it manages these
resources, as it is done with other services.

Also, I understand that this may not be the priority for Keystone
right now.

Florent Flament


- Original Message -
From: Jamie Lennox jamielen...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, January 28, 2014 1:59:41 AM
Subject: Re: [openstack-dev] [Keystone] bp proposal: quotas on users and 
projects per domain



- Original Message -
 From: Florent Flament florent.flament-...@cloudwatt.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, 24 January, 2014 8:07:28 AM
 Subject: Re: [openstack-dev] [Keystone] bp proposal: quotas on users and 
 projects per domain
 
 I understand that not everyone may be interested in such feature.
 
 On the other hand, some (maybe shallow) Openstack users may be
 interested in setting quotas on users or projects. Also, this feature
 wouldn't do any harm to the other users who wouldn't use it.
 
 If some contributors are willing to spend some time in adding this
 feature to Openstack, is there any reason not to accept it ?

In general I have no problem with users/projects/domains/etc. being quota-ed
for a business decision (and I don't work for a provider), but only as part of
a more global initiative in which all resource types in OpenStack can be
quota-ed, managed by some other service (which I think would be a difficult
service to write).

I don't see the point in implementing this directly as a keystone feature.
As Dolph mentioned, these are not resource-heavy concepts that we have a
practical need to limit. In most situations I imagine service providers who
want this have the means to achieve it via the backend they store to.

Note that the idea of storing quota data in keystone has come up before
and has generally never gained much traction. 

Jamie

 On Thu, 2014-01-23 at 14:55 -0600, Dolph Mathews wrote:
  
  On Thu, Jan 23, 2014 at 9:59 AM, Florent Flament
  florent.flament-...@cloudwatt.com wrote:
  Hi,
  
  
  Although it is true that projects and users don't consume a
  lot of resources, I think that there may be cases where
  setting quotas (possibly large) may be useful.
  
  
  
   For instance, a cloud provider may wish to prevent domain
   administrators from mistakenly creating an infinite number of
   users and/or projects by calling APIs in a buggy loop.
  
  
  
  That sounds like it would be better solved by API rate limiting, not
  quotas.
   
  
  
  
  Moreover, if quotas can be disabled, I don't see any reason
  not to allow cloud operators to set quotas on users and/or
  projects if they wishes to do so for whatever marketing reason
  (e.g. charging more to allow more users or projects).
  
  
  
  That's the shallow business decision I was alluding to, which I don't
  think we have any reason to support in-tree.
   
  
  
  
  Regards,
  
  Florent Flament
  
  
  
  
  
  __
  From: Dolph Mathews dolph.math...@gmail.com
  To: OpenStack Development Mailing List (not for usage
  questions) openstack-dev@lists.openstack.org
  Sent: Thursday, January 23, 2014 3:09:51 PM
  Subject: Re: [openstack-dev] [Keystone] bp proposal: quotas on
  users and projects per domain
  
  
  
  ... why? It strikes me as a rather shallow business decision
  to limit the number of users or projects in a system, as
  neither are actually cost-consuming resources.
  
  
  On Thu, Jan 23, 2014 at 6:43 AM, Matthieu Huin
  matthieu.h...@enovance.com wrote:
  Hello,
  
  I'd be interested in opinions and feedback on the
  following blueprint:
  
  https://blueprints.launchpad.net/keystone/+spec/tenants-users-quotas
  
  The idea is to add a mechanism preventing the creation
  of users or projects once a quota per domain is met. I
  believe this could be interesting for cloud providers
  who delegate administrative rights under domains to
  

Re: [openstack-dev] [climate] Future 0.2 release scope discussion

2014-01-28 Thread Sergey Lukjanov
I think that first you should define the policy / approach for choosing
the date for the next release. At least it should be clear which is more
important - the release date or the scope.


On Thu, Jan 23, 2014 at 9:07 PM, Dina Belova dbel...@mirantis.com wrote:

 Hello folks :)

 We've started accumulating ideas for the 0.2 release. Here is a link to the Etherpad:

 https://etherpad.openstack.org/p/climate-0.2

 You are welcome to suggest ideas :)

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.





-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-28 Thread Sergey Lukjanov
Additionally, I think that we should explicitly specify the need to ensure
that all outputs don't contain any sensitive information like credentials.


On Tue, Jan 28, 2014 at 4:06 PM, Alexander Ignatov aigna...@mirantis.comwrote:

 By EDP internal I meant the current EDP-specific code. And since job configs
 are job-specific, I'd prefer to have URLs containing /jobs, or at least /edp,
 for that.

 Regards,
 Alexander Ignatov



 On 24 Jan 2014, at 23:20, Matthew Farrellee m...@redhat.com wrote:

  what do you consider EDP internal, and how does it relate to the v1.1
 or v2 API?
 
  i'm ok with making it plugin independent. i'd just suggest moving it out
 of /jobs and to something like /extra/config-hints/{type}, maybe along with
 /extra/validations/config.
 
  best,
 
 
  matt
 
  On 01/22/2014 06:25 AM, Alexander Ignatov wrote:
  Current EDP config-hints are not only plugin specific. Several types of
  jobs must have certain key/values, without which the job will fail. For
  instance,
  MapReduce (former Jar) job type requires Mapper/Reducer classes
 parameters
  to be set[1]. Moreover, for such kind of jobs we already have separated
  configuration defaults [2]. Also initial versions of patch implementing
  config-hints contained plugin-independent defaults for all each job
 types [3].
  I remember we postponed decision about which configs are commmon for all
  plugins and agreed to show users all vanilla-specific defaults. That's
 why now
  we have several TODOs in the code about config-hints should be
 plugin-specific.
 
  So I propose to leave config-hints REST call in EDP internal and make it
  plugin-independent (or job-specific) by removing of parsing all
 vanilla-specific
  defaults and define small list of configs which is definitely common
 for each type of jobs.
  The first things come to mind:
  - For MapReduce jobs it's already defined in [1]
  - Configs like number of map and reduce tasks are common for all type
 of jobs
  - At least user always has an ability to set any key/value(s) as
 params/arguments for job
 
 
  [1]
 http://docs.openstack.org/developer/savanna/userdoc/edp.html#workflow
  [2]
 https://github.com/openstack/savanna/blob/master/savanna/service/edp/resources/mapred-job-config.xml
  [3] https://review.openstack.org/#/c/45419/10
 
  Regards,
  Alexander Ignatov
 
 
 
  On 20 Jan 2014, at 22:04, Matthew Farrellee m...@redhat.com wrote:
 
  On 01/20/2014 12:50 PM, Andrey Lazarev wrote:
  Inlined.
 
 
  On Mon, Jan 20, 2014 at 8:15 AM, Matthew Farrellee m...@redhat.com
  mailto:m...@redhat.com wrote:
 
 (inline, trying to make this readable by a text-only mail client
 that doesn't use tabs to indicate quoting)
 
 On 01/20/2014 02:50 AM, Andrey Lazarev wrote:
 
  --
  FIX - @rest.get('/jobs/config-hints/job_type') -
 should move to
  GET /plugins/plugin_name/plugin_version,
 similar to
  get_node_processes
  and get_required_image_tags
  --
  Not sure if it should be plugin specific right now.
 EDP
 uses it
  to show some
  configs to users in the dashboard. it's just a
 cosmetic
 thing.
  Also when user
  starts define some configs for some job he might not
 define
  cluster yet and
  thus plugin to run this job. I think we should leave
 it
 as is
  and leave only
  abstract configs like Mapper/Reducer class and allow
 users to
  apply any
  key/value configs if needed.
 
 
  FYI, the code contains comments suggesting it should be
 plugin specific.
 
 
 https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179
 
 https://github.com/openstack/__savanna/blob/master/savanna/__service/edp/workflow_creator/__workflow_factory.py#L179
 
 
 
 https://github.com/openstack/__savanna/blob/master/savanna/__service/edp/workflow_creator/__workflow_factory.py#L179
 
 https://github.com/openstack/savanna/blob/master/savanna/service/edp/workflow_creator/workflow_factory.py#L179
 
 
  IMHO, the EDP should have no plugin specific dependencies.
 
  If it currently does, we should look into why and see if
 we
 can't
  eliminate this entirely.
 
 [AL] EDP uses plugins in two ways:
 1. for HDFS user
 2. for config hints
 I think both items should not be plugin specific on EDP API
 level. But
 implementation should go to plugin and call plugin API for
 result.
 
 
 In fact they are both plugin specific. The user is forced to click
 through a plugin selection (when launching a job on transient
 cluster) or the plugin selection has already occurred (when
 launching a 

Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Jason Rist
I thought we were avoiding using overcloud and undercloud within the UI?

-J

On 01/28/2014 03:04 AM, Tzu-Mainn Chen wrote:
 I've spent some time thinking about this, and I have a clarification.
 
 I don't like the use of the word 'deployment', because it's not exact
 enough for me.  Future plans for the tuskar-ui include management of the
 undercloud as well, and at that point, 'deployment role' becomes vague, as
 it could also logically apply to the undercloud.
 
 For that reason, I think we should call it an 'overcloud deployment role',
 or 'overcloud role' for short.
 
 That being said, I think that the UI could get away with just displaying
 'Role', as presumably the user would be in a page with enough context to
 make it clear that he's in the overcloud section.
 
 
 Mainn
 
 - Original Message -
 I'd argue that we should call it 'overcloud role' - at least from the
 modeling
 point of view - since the tuskar-api calls a deployment an overcloud.

 But I like the general direction of the term-renaming!

 Mainn




-- 
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.919.754.4048
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-v meeting

2014-01-28 Thread Peter Pouliot
Hi everyone,

Too many of us are unable to make the meeting today.

So I'm going to cancel for today. We'll resume the regular Hyper-V meeting next
week.

P
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [TripleO] adding process/service monitoring

2014-01-28 Thread Clint Byrum
Excerpts from Richard Su's message of 2014-01-27 17:59:34 -0800:
 Hi,
 
 I have been looking into how to add process/service monitoring to
 tripleo. Here I want to be able to detect when an openstack dependent
 component that is deployed on an instance has failed. And when a failure
 has occurred I want to be notified and eventually see it in Tuskar.
 
 Ceilometer doesn't handle this particular use case today. So I have been
 doing some research and there are many options out there that provides
 process checks: nagios, sensu, zabbix, and monit. I am a bit wary of
pulling one of these options into tripleo. There are increased
operational and maintenance costs when pulling in each of them. And
 physical device monitoring is currently in the works for Ceilometer
 lessening the need for some of the other abilities that an another
 monitoring tool would provide.
 
 For the particular use case of monitoring processes/services, at a high
 level, I am considering writing a simple daemon to perform the check.
 Checks and failures are written out as messages to the notification bus.
 Interested parties like Tuskar or Ceilometer can subscribe to these
 messages.
 
 In general does this sound like a reasonable approach?

Writing a new one, no. But using notifications in OpenStack: yes!

I suggest finding the simplest one possible and teaching it to send
OpenStack notifications.
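
As a sketch of what "teaching it to send OpenStack notifications" might
look like with oslo.messaging (the publisher id, event type, and payload
below are invented for illustration, not an existing agent):

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport,
                                  publisher_id='procmon.compute-1',
                                  topic='notifications')

    def report_failure(service, pid):
        # one message per detected failure; Tuskar or Ceilometer can
        # subscribe to these on the notification bus
        notifier.error({}, 'procmon.process.failure',
                       {'service': service, 'pid': pid, 'state': 'failed'})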

 
 There is also the question of how to configure or figure out which
 processes we are interested in monitoring. I need to do more research
 here but I'm considering either looking at the elements listed by
 diskimage-builder or by looking at the orc post-configure.d scripts to
 find service that are restarted.
 

There are basically two things you need to look for: things listening,
and things connected to rabbitmq/qpid.

So one crazy way to find things to monitor is to look at netstat or ss
and just monitor processes doing one of those things. I believe
assimilation monitoring's nanoprobe daemon already has the listening
part done:

http://techthoughts.typepad.com/managing_computers/2012/10/zero-configuration-discovery-and-server-monitoring-in-the-assimilation-monitoring-project.html

Also you may want to do two orc scripts in post-configure.d:

00-disruption-coming-stop-process-monitor
99-all-clear-start-process-monitor

Anyway, as Robert says, just keep it modular so that orgs that already
have a rich set of tools for this will be able to replace it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Alexander Tivelkov
Very interested, thanks a lot for this topic.
Will work on bringing all of this to Murano

--
Regards,
Alexander Tivelkov


On Tue, Jan 28, 2014 at 6:45 AM, Sergey Lukjanov slukja...@mirantis.comwrote:

 FYI it was added to the project meeting agenda -
 https://wiki.openstack.org/wiki/Meetings/ProjectMeeting


 On Tue, Jan 28, 2014 at 3:42 PM, Sergey Lukjanov 
 slukja...@mirantis.comwrote:

 Hi Sean,

 it's great that you're catching this up.

 I'd like to participate. I don't know how much time I'll be able to
 dedicate on it, but at least I'm ready for reviews and pushing it to
 Savanna.

 Thanks!


 On Tue, Jan 28, 2014 at 3:21 PM, Sean Dague s...@dague.net wrote:

 On 01/27/2014 09:57 PM, Christopher Yeoh wrote:
  On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
   Hi Sean,
  
   I'm currently working on moving away from the built-in logging
  to use log_config=filename and the python logging framework so
  that we can start shipping to logstash/sentry/insert other useful
  tool here.
  
   I'd be very interested in getting involved in this, especially
  from a why do we have log messages that are split across multiple
  lines perspective!
 
  Do we have many that aren't either DEBUG or TRACE? I thought we
 were
  pretty clean there.
 
   Cheers,
  
   Matt
  
   P.S. FWIW, I'd also welcome details on what the Audit level
  gives us that the others don't... :)
 
  Well as far as I can tell the AUDIT level was a prior drive by
  contribution that's not being actively maintained. Honestly, I
 think we
  should probably rip it out, because I don't see any in tree
 tooling to
  use it, and it's horribly inconsistent.
 
 
  For the uses I've seen of it in the nova api code INFO would be
  perfectly fine in place of AUDIT.
 
  I'd be happy to help out with patches to cleanup the logging in n-api.
 
  One other thing to look at - I've noticed with logs is that when
  something like glanceclient code (just as an example) is called from
 nova,
  we can get ERROR level messages for say image not found when its
  actually perfectly expected that this will occur.
  I'm not sure if we should be changing the error level in glanceclient
 or
  just forcing any error logging in glanceclient when
  called from Nova to a lower level though.

 It's now changed in glanceclient -
 https://review.openstack.org/#/c/67744/ - it should be gone in the gate
 logs, and will be gone for everyone once a new release is out.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Tzu-Mainn Chen
Yep, although the reason why - that no end-user will know what these terms mean 
-
has never been entirely convincing to me.  But even if we don't use the word
'overcloud', I think we should use *something*.  Deployment is just so vague
that without some context, it could refer to anything.

As a side note, the original terminology thread ended with a general upstream
consensus that we should call things what they are in OpenStack.  That's why
the 'deployment' model is actually called 'overcloud' in the UI/api; others
strongly favored using that term to make it clear to developers what we
were modeling.

Part of the difficulty here is the perception that developers and end-users
have different needs when it comes to terminology.


Mainn

- Original Message -
 I thought we were avoiding using overcloud and undercloud within the UI?
 
 -J
 
 On 01/28/2014 03:04 AM, Tzu-Mainn Chen wrote:
  I've spent some time thinking about this, and I have a clarification.
  
  I don't like the use of the word 'deployment', because it's not exact
  enough for me.  Future plans for the tuskar-ui include management of the
  undercloud as well, and at that point, 'deployment role' becomes vague, as
  it could also logically apply to the undercloud.
  
  For that reason, I think we should call it an 'overcloud deployment role',
  or 'overcloud role' for short.
  
  That being said, I think that the UI could get away with just displaying
  'Role', as presumably the user would be in a page with enough context to
  make it clear that he's in the overcloud section.
  
  
  Mainn
  
  - Original Message -
  I'd argue that we should call it 'overcloud role' - at least from the
  modeling
  point of view - since the tuskar-api calls a deployment an overcloud.
 
  But I like the general direction of the term-renaming!
 
  Mainn
 
 
 
 
 --
 Jason E. Rist
 Senior Software Engineer
 OpenStack Management UI
 Red Hat, Inc.
 +1.919.754.4048
 Freenode: jrist
 github/identi.ca: knowncitizen
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread iKhan
I haven't started contributing to OpenStack yet, but this looks like a good
opportunity. Count me in. I had the same doubt about the AUDIT-level log in
Cinder.


On Tue, Jan 28, 2014 at 8:16 PM, Alexander Tivelkov
ativel...@mirantis.comwrote:

 Very interested, thanks a lot for this topic.
 Will work on bringing all of this to Murano

 --
 Regards,
 Alexander Tivelkov


 On Tue, Jan 28, 2014 at 6:45 AM, Sergey Lukjanov 
 slukja...@mirantis.comwrote:

 FYI it was added to the project meeting agenda -
 https://wiki.openstack.org/wiki/Meetings/ProjectMeeting


 On Tue, Jan 28, 2014 at 3:42 PM, Sergey Lukjanov 
 slukja...@mirantis.comwrote:

 Hi Sean,

 it's great that you're catching this up.

 I'd like to participate. I don't know how much time I'll be able to
 dedicate on it, but at least I'm ready for reviews and pushing it to
 Savanna.

 Thanks!


 On Tue, Jan 28, 2014 at 3:21 PM, Sean Dague s...@dague.net wrote:

 On 01/27/2014 09:57 PM, Christopher Yeoh wrote:
  On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net
  mailto:s...@dague.net wrote:
 
  On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
   Hi Sean,
  
   I'm currently working on moving away from the built-in logging
  to use log_config=filename and the python logging framework so
  that we can start shipping to logstash/sentry/insert other useful
  tool here.
  
   I'd be very interested in getting involved in this, especially
  from a why do we have log messages that are split across multiple
  lines perspective!
 
  Do we have many that aren't either DEBUG or TRACE? I thought we
 were
  pretty clean there.
 
   Cheers,
  
   Matt
  
   P.S. FWIW, I'd also welcome details on what the Audit level
  gives us that the others don't... :)
 
  Well as far as I can tell the AUDIT level was a prior drive by
  contribution that's not being actively maintained. Honestly, I
 think we
  should probably rip it out, because I don't see any in tree
 tooling to
  use it, and it's horribly inconsistent.
 
 
  For the uses I've seen of it in the nova api code INFO would be
  perfectly fine in place of AUDIT.
 
  I'd be happy to help out with patches to cleanup the logging in n-api.
 
  One other thing to look at - I've noticed with logs is that when
  something like glanceclient code (just as an example) is called from
 nova,
  we can get ERROR level messages for say image not found when its
  actually perfectly expected that this will occur.
  I'm not sure if we should be changing the error level in glanceclient
 or
  just forcing any error logging in glanceclient when
  called from Nova to a lower level though.

 It's now changed in glanceclient -
 https://review.openstack.org/#/c/67744/ - it should be gone in the gate
 logs, and will be gone for everyone once a new release is out.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,
Ibad Khan
9686594607
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Oslo Context and SecurityContext

2014-01-28 Thread Georgy Okrokvertskhov
Hi,

From my experience, a context is usually bigger than just storage for user
credentials and the specifics of a request. A context usually defines the area
within which the called method should act. Probably the class name RequestContext
is a bit confusing. The actual goal of the context should be defined by the
service design. If you have a lot of independent components, you will
probably need to pass a lot of parameters to specify the specifics of the work,
so it is just more convenient to have a dictionary-like object which carries
all the necessary contextual information. Such a context can be
used to pass information between different components of the service.



On Mon, Jan 27, 2014 at 4:27 PM, Angus Salkeld
angus.salk...@rackspace.comwrote:

 On 27/01/14 22:53 +, Adrian Otto wrote:

 On Jan 27, 2014, at 2:39 PM, Paul Montgomery 
 paul.montgom...@rackspace.com
 wrote:

  Solum community,

 I created several different approaches for community consideration
 regarding Solum context, logging and data confidentiality.  Two of these
 approaches are documented here:

 https://wiki.openstack.org/wiki/Solum/Logging

 A) Plain Oslo Log/Config/Context is in the Example of Oslo Log and Oslo
 Context section.

 B) A hybrid Oslo Log/Config/Context but SecurityContext inherits the
 RequestContext class and adds some confidentiality functions is in the
 Example of Oslo Log and Oslo Context Combined with SecurityContext
 section.

 None of this code is production ready or tested by any means.  Please
 just
 examine the general architecture before I polish too much.

 I hope that this is enough information for us to agree on a path A or B.
 I honestly am not tied to either path very tightly but it is time that we
 reach a final decision on this topic IMO.

 Thoughts?


 I have a strong preference for using the SecurityContext approach. The
 main reason for my preference is outlined in the Pro/Con sections of the
 Wiki page. With the A approach, leakage of confidential information might
 happen with *any* future addition of a logging call, a discipline which may
 be forgotten, or overlooked during future code reviews. The B approach
 handles the classification of data not when logging, but when placing the
 data into the SecurityContext. This is much safer from a long term
 maintenance perspective.
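
 To make the comparison concrete, here is a minimal sketch of the B shape
 (method names are hypothetical, not the code on the wiki page): values are
 classified once, when stored, so later logging simply never sees them.

     from openstack.common.context import RequestContext  # oslo-incubator

     class SecurityContext(RequestContext):
         def __init__(self, *args, **kwargs):
             super(SecurityContext, self).__init__(*args, **kwargs)
             self._confidential = {}

         def set_priv(self, key, value):
             # kept for business logic, never emitted by logging
             self._confidential[key] = value

         def get_priv(self, key):
             return self._confidential[key]

         def to_log_dict(self):
             # only the ordinary attributes reach the log formatter
             return self.to_dict()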


 I think we should separate this out into:

 1) we need to be security aware whenever we log information handed to
us by the user. (I totally agree with this general statement)

 2) should we log structured data, non structured data or use the
 notification mechanism (which is structured)
There have been some talks at summit about the potential merging of
the logging and notification api, I honestly don't know what
happened to that but have no problem with structured logging. We
should use the notification system so that ceilometer can take
advantage of the events.

 3) should we use a RequestContext in the spirit of the oslo-incubator
    (and inherited from it too), OR one different from all other
    projects.

   IMHO we should just use oslo-incubator RequestContext. Remember the
   context is not a generic dumping ground for I want to log stuff so
   lets put it into the context. It is for user credentials and things
   directly associated with the request (like the request_id). I don't
   see why we need a generic dict style approach, this is more likely
   to result in programming error context.set_priv('userid', bla)
   instead of:
   context.set_priv('user_id', bla)

   I think my point is: We should very quickly zero in on the
   attributes we need in the context and they will seldom change.

   As far as security goes Paul has shown a good example of how to
   change the logging_context_format_string to achieve structured and
   secure logging of the context. oslo log module does not log whatever
   is in the context but only what is configured in the solum.conf (via
   logging_context_format_string). So I don't believe that the
   new/different RequestContext provides any improved security.
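
    For reference, that knob is plain configuration; an illustrative (not
    prescriptive) solum.conf fragment:

        [DEFAULT]
        # only the fields named here ever appear on the log line
        logging_context_format_string = %(asctime)s %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(message)s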



 -Angus




 Adrian
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Tuskar] Deployment Configuration in the UI

2014-01-28 Thread Jaromir Coufal

Hi folks,

based on overcloud.yaml file (thanks Rob for pointing me there), I put 
together attributes for deployment configuration which should appear in 
the UI. Can I ask for a help, to review the list if it is accurate and 
if not to correct it?


https://etherpad.openstack.org/p/tuskar-ui-config

Thanks a lot for helping
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-28 Thread Robert Li (baoli)
Hi,

For the second case, supposed that the PF is properly configured on the host, 
is it a matter of configuring it as you normally do with a regular ethernet 
interface to add it to the linux bridge or OVS?

--Robert

On 1/28/14 1:03 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Nrupal,
We definitely consider both these cases.
Agree with you that we should aim to support both.

BR,
Irena


From: Jani, Nrupal [mailto:nrupal.j...@intel.com]
Sent: Monday, January 27, 2014 11:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

There are two possibilities for the hybrid compute nodes

-  In the first case, a compute node has two NICs: one SRIOV NIC and the
other NIC for the VirtIO

-  In the 2nd case, a compute node has only one SRIOV NIC, where VFs are
used for the VMs, either macvtap or direct assignment, and the PF is used for
the uplink to the Linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not?

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking
a proper name, let's call them compute nodes with hybrid NIC support, or
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud.
But it may be useful in the lab to benchmark the performance differences
between SRIOV, non-SRIOV, and the coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, the hybrid compute nodes actually can be preferred in a real
cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual
function and the other via some vSwitch.
But it definitely makes sense to land VMs with ‘virtio’ vNICs only on the
non-SRIOV compute nodes.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB]
Having the ML2 plugin as the neutron backend, port binding will fail if no
agent is running on the host

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_type as we plan, the Mechanism Driver will
bind the port only if it supports the vnic_type and there is a live agent on
this host. So it should work

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregates. This requires creating a
non-SRIOV host aggregate and using it in the above 'nova boot' command. It
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.
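
For completeness, the host-aggregate workaround looks roughly like this
(aggregate and flavor names are only examples), pairing aggregate metadata
with the AggregateInstanceExtraSpecsFilter:

    nova aggregate-create non-sriov
    nova aggregate-set-metadata non-sriov sriov=false
    nova aggregate-add-host non-sriov compute-1
    # pin virtio-only flavors to that aggregate
    nova flavor-key m1.large set aggregate_instance_extra_specs:sriov=false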

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and 

[openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill dnsmasq and ns-meta softly]

2014-01-28 Thread Jay Pipes
This might just be the most creative commit message of the year.

-jay
---BeginMessage---
Salvatore Orlando has uploaded a new change for review.

Change subject: Kill dnsmasq and ns-meta softly
..

Kill dnsmasq and ns-meta softly

Strumming the process pain with a syscall
ending its pid with sudo kill
killing it softly with the TERM signal
killing it softly with the TERM signal
ending his whole life with sudo kill
killing it softly with the TERM signal

This is an investigation related to bug 1273386

Change-Id: I8f16336cfd074ece243c22216f039aa9ed6c1ad1
---
M etc/neutron/rootwrap.d/dhcp.filters
M etc/neutron/rootwrap.d/l3.filters
M neutron/agent/linux/dhcp.py
M neutron/agent/linux/external_process.py
M neutron/tests/unit/test_linux_dhcp.py
M neutron/tests/unit/test_linux_external_process.py
6 files changed, 14 insertions(+), 14 deletions(-)


  git pull ssh://review.openstack.org:29418/openstack/neutron 
refs/changes/79/69579/1
--
To view, visit https://review.openstack.org/69579
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: I8f16336cfd074ece243c22216f039aa9ed6c1ad1
Gerrit-PatchSet: 1
Gerrit-Project: openstack/neutron
Gerrit-Branch: master
Gerrit-Owner: Salvatore Orlando salv.orla...@gmail.com
---End Message---
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Jay Pipes
On Tue, 2014-01-28 at 10:02 -0500, Tzu-Mainn Chen wrote:
 Yep, although the reason why - that no end-user will know what these terms 
 mean -
 has never been entirely convincing to me.

Well, tenants would never see any of the Tuskar UI, so I don't think we
need worry about them. And if a deployer is enabling Tuskar -- and using
Tuskar/Triple-O for undercloud deployment -- then I would think that the
deployer would understand the concept/terminology of undercloud and
overcloud, since it's an essential concept in deploying with
Triple-O. :)

So, in short, I don't see a problem with using the terms undercloud and
overcloud.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] team meeting Friday 31 Jan 1400 UTC

2014-01-28 Thread Doug Hellmann
On Tue, Jan 28, 2014 at 3:55 AM, Flavio Percoco fla...@redhat.com wrote:

 On 27/01/14 14:57 -0500, Doug Hellmann wrote:

 The Oslo team has a few items we need to discuss, so I'm calling a
 meeting for
 this Friday, 31 Jan. Our normal slot is 1400 UTC Friday in
 #openstack-meeting.
 The agenda [1] includes 2 items (so far):

 1. log translations (see the other thread started today)
 2. parallelizing our tests


 We should also discuss the process to pull out packages from oslo. I
 mean, check if anything has changed in terms of stability since the
 summit and what our plans are for moving forward with this.


I added an item to discuss managing graduating code that is still in the
incubator (the rpc discussion this week brought it to the front of my
mind). Is that the same thing you mean?

Doug




 Cheers,
 flaper


  If you have anything else you would like to discuss, please add it to the
 agenda.

 See you Friday!
 Doug


 [1] https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-28 Thread Trevor McKay
Hello all,

In our first pass at EDP, the model for job settings was very consistent
across all of our job types. The execution-time settings fit into this
(superset) structure:

job_configs = {'configs': {}, # config settings for oozie and hadoop
   'params': {},  # substitution values for Pig/Hive
   'args': []}# script args (Pig and Java actions)

But we have some things that don't fit (and probably more in the
future):

1) Java jobs have 'main_class' and 'java_opts' settings
   Currently these are handled as additional fields added to the
structure above.  These were the first to diverge.

2) Streaming MapReduce (anticipated) requires mapper and reducer
settings (different than the mapred..class settings for
non-streaming MapReduce)

Problems caused by adding fields

The job_configs structure above is stored in the database. Each time we
add a field to the structure above at the level of configs, params, and
args, we force a change to the database tables, a migration script and a
change to the JSON validation for the REST api.

We also cause a change for python-savannaclient and potentially other
clients.

This kind of change seems bad.

Proposal: Borrow a page from Oozie and add savanna.xxx configs
-
I would like to fit divergent job settings into the structure we already
have.  One way to do this is to leverage the 'configs' dictionary.  This
dictionary primarily contains settings for hadoop, but there are a
number of oozie.xxx settings that are passed to oozie as configs or
set by oozie for the benefit of running apps.

What if we allow savanna.xxx settings to be added to configs?  If we do
that, any and all special configuration settings for specific job types
or subtypes can be handled with no database changes and no api changes.
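
For example (the key names below are invented to show the shape, not
proposed names), a streaming MapReduce job could carry its divergent
settings inside 'configs' right next to the hadoop and oozie ones:

    job_configs = {'configs': {'mapred.reduce.tasks': '2',
                               'savanna.edp.streaming.mapper': '/bin/cat',
                               'savanna.edp.streaming.reducer': '/usr/bin/wc'},
                   'params': {},
                   'args': []}

Savanna would strip out and act on the savanna.xxx keys; everything else
would flow to the workflow as it does today.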

Downside

Currently, all 'configs' are rendered in the generated oozie workflow.
The savanna.xxx settings would be stripped out and processed by Savanna,
thereby changing that behavior a bit (maybe not a big deal)

We would also be mixing savanna.xxx configs with config_hints for jobs,
so users would potentially see savanna.xxx settings mixed with oozie
and hadoop settings.  Again, maybe not a big deal, but it might blur the
lines a little bit.  Personally, I'm okay with this.

Slightly different
--
We could also add a 'savanna-configs': {} element to job_configs to
keep the configuration spaces separate.

But, now we would have 'savanna-configs' (or another name), 'configs',
'params', and 'args'.  Really? Just how many different types of values
can we come up with? :)

I lean away from this approach.

Related: breaking up the superset
-

It is also the case that not every job type has every value type.

            Configs   Params   Args
Hive           Y         Y      N
Pig            Y         Y      Y
MapReduce      Y         N      N
Java           Y         N      Y

So do we make that explicit in the docs and enforce it in the api with
errors?
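
If we do enforce it, the check itself is tiny; a sketch (exception name
invented for illustration):

    class BadJobConfigError(Exception):
        pass

    SUPPORTED = {'Hive':      {'configs', 'params'},
                 'Pig':       {'configs', 'params', 'args'},
                 'MapReduce': {'configs'},
                 'Java':      {'configs', 'args'}}

    def check_job_configs(job_type, job_configs):
        unexpected = set(job_configs) - SUPPORTED[job_type]
        if unexpected:
            raise BadJobConfigError('%s jobs do not accept: %s'
                                    % (job_type,
                                       ', '.join(sorted(unexpected))))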

Thoughts? I'm sure there are some :)

Best,

Trevor



  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-28 Thread Jani, Nrupal
My comments inline below.

Nrupal.

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Tuesday, January 28, 2014 8:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

For the second case, supposed that the PF is properly configured on the host, 
is it a matter of configuring it as you normally do with a regular ethernet 
interface to add it to the linux bridge or OVS?
[nrj] yes. While technically it is possible, we as a team can decide about the 
final recommendation :)  Given that VFs are going to be used for the 
high-performance VMs, mixing VMs with virtio and VFs may not be a good option.  
Initially we can use the PF interface for the management traffic and/or VF 
configuration!!
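
Concretely, assuming the PF shows up as an ordinary netdev (eth2 is just an
example name), the uplink really is the usual one-liner:

    # Linux bridge
    brctl addif br100 eth2
    # or OVS
    ovs-vsctl add-port br-eth2 eth2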

--Robert

On 1/28/14 1:03 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Nrupal,
We definitely consider both these cases.
Agree with you that we should aim to support both.

BR,
Irena


From: Jani, Nrupal [mailto:nrupal.j...@intel.com]
Sent: Monday, January 27, 2014 11:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

There are two possibilities for the hybrid compute nodes

-  In the first case, a compute node has two NICs,  one SRIOV NIC  the 
other NIC for the VirtIO

-  In the 2nd case, Compute node has only one SRIOV NIC, where VFs are 
used for the VMs, either macvtap or direct assignment.  And the PF is used for 
the uplink to the linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not?

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lack of 
a proper name, let's call them compute nodes with hybrid NICs support, or 
simply hybrid compute nodes.

I'm not sure if it's practical in having hybrid compute nodes in a real cloud. 
But it may be useful in the lab to bench mark the performance differences 
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, the hybrid compute nodes actually can be preferred in the real 
cloud, since one can define VM with one vNIC attached via SR-IOV virtual 
function while the other via some vSwitch.
But it definitely make sense to land VM with 'virtio' vNICs only on the 
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] I
Having ML2 plugin as neutron backend, will fail to bind the port, in no agent 
is running on the Host

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_typem as we plan, Mechanism Driver will 
bind the port only if it supports vic_type and there is live agent on this 
host. So it should work

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-28 Thread Jani, Nrupal
Thx Irena.

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 10:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Jani, Nrupal
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Nrupal,
We definitely consider both these cases.
Agree with you that we should aim to support both.

BR,
Irena


From: Jani, Nrupal [mailto:nrupal.j...@intel.com]
Sent: Monday, January 27, 2014 11:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

There are two possibilities for the hybrid compute nodes

-  In the first case, a compute node has two NICs,  one SRIOV NIC  the 
other NIC for the VirtIO

-  In the 2nd case, Compute node has only one SRIOV NIC, where VFs are 
used for the VMs, either macvtap or direct assignment.  And the PF is used for 
the uplink to the linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not?

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lack of 
a proper name, let's call them compute nodes with hybrid NICs support, or 
simply hybrid compute nodes.

I'm not sure if it's practical in having hybrid compute nodes in a real cloud. 
But it may be useful in the lab to bench mark the performance differences 
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, the hybrid compute nodes actually can be preferred in the real 
cloud, since one can define VM with one vNIC attached via SR-IOV virtual 
function while the other via some vSwitch.
But it definitely make sense to land VM with 'virtio' vNICs only on the 
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image image-uuid --nic net-id=net-uuid vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] I
Having ML2 plugin as neutron backend, will fail to bind the port, in no agent 
is running on the Host

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_typem as we plan, Mechanism Driver will 
bind the port only if it supports vic_type and there is live agent on this 
host. So it should work

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 

[openstack-dev] [Swift] release 1.12.0

2014-01-28 Thread John Dickinson
Today I'm happy to announce that we have released Swift 1.12.0. As
always, this is a stable release and you can upgrade to this version
of Swift with no customer downtime.

You can download the code for this release at
https://launchpad.net/swift/icehouse/1.12.0 or bug your package
provider for the updated version.

I've noticed that OpenStack Swift releases tend to cluster around
certain themes. This release is no different. While we've added some
nice end-user updates to the project, this release has a ton of good
stuff for cluster operators.

I'll highlight a few of the major improvements below, but I encourage
you to read the entire change log at
https://github.com/openstack/swift/blob/master/CHANGELOG.

## Security update

**CVE-2014-0006**

Fixed CVE-2014-0006 to avoid a potential timing attack with temp url.
Key validation previously was not using a constant-time string
compare, and therefore it may have been possible for an attacker to
guess tempurl keys if the object name was known and tempurl had been
enabled for that Swift user account. The tempurl key validation now
uses a constant-time string compare to close this potential attack
vector.
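
The shape of the fix, as a sketch rather than the actual Swift patch: the
tempurl signature is an HMAC-SHA1 over method, expiry, and path, and the
comparison must not short-circuit on the first differing byte:

    import hmac
    from hashlib import sha1

    def sig_is_valid(key, method, expires, path, provided_sig):
        expected = hmac.new(key, '%s\n%s\n%s' % (method, expires, path),
                            sha1).hexdigest()
        # hmac.compare_digest (Python >= 2.7.7) examines every byte instead
        # of returning at the first mismatch, so timing reveals nothing;
        # Swift ships its own constant-time helper for older Pythons.
        return hmac.compare_digest(expected, provided_sig)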

## Major End-User Features

**New information added to /info**

We added discoverable capabilities via the /info endpoint in a recent
release. In this release we have added all of the general cluster
constraints to the /info response. This means that a client can
discover the cluster limits on names, metadata, and object sizes.
We've also added information about the supported temp url methods and
large object constraints in the cluster.
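
An abridged, illustrative response (exact values depend on the cluster's
configuration):

    $ curl -s http://proxy.example.com/info
    {"swift": {"max_file_size": 5368709122,
               "max_object_name_length": 1024,
               "max_meta_name_length": 128,
               "container_listing_limit": 10000, ...},
     "tempurl": {"methods": ["GET", "HEAD", "PUT"]}}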

**Last-Modified header values**

The Last-Modified header value returned will now be the object's
timestamp rounded up to the next second. This allows subsequent
requests with If-[un]modified-Since to use the Last-Modified value as
expected.

## Major Deployer Features

**Generic means for persisting system metadata**

Swift now supports system-level metadata on accounts and containers.
System metadata provides a means to store internal custom metadata
with associated Swift resources in a safe and secure fashion without
actually having to plumb custom metadata through the core swift
servers. The new gatekeeper middleware prevents this system metadata
from leaking into the request or being set by a client.

**Middleware changes**

As mentioned above, there is a new gatekeeper middleware to guard
the system metadata. In order to ensure that system metadata doesn't
leak into the response, the gatekeeper middleware will be
automatically inserted near the beginning of the proxy pipeline if it
is not explicitly referenced. Similarly, the catch_errors middleware
is also forced to the front of the proxy pipeline if it is not
explicitly referenced. Note that for either of these middlewares, if
they are already in the proxy pipeline, Swift will not reorder the
pipeline.
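
In proxy-server.conf terms, an explicit pipeline that satisfies this
ordering looks something like the following (abbreviated; the middleware
set varies by deployment):

    [pipeline:main]
    pipeline = catch_errors gatekeeper healthcheck proxy-logging cache tempauth proxy-logging proxy-server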

**New container sync configuration option**

Container sync has new options to better support syncing containers
across multiple clusters without the end-user needing to know the
required endpoint. See
http://swift.openstack.org/overview_container_sync.html for full
information.

**Bulk middleware config default changed**

The bulk middleware allows the client to send a large body of work to
the cluster with just one request. Since this work may take a while to
return, Swift can periodically send back whitespace before the actual
response data in order to keep the client connection alive. The config
parameter to set the minimum frequency of these whitespace characters
is set by the yield_frequency value. The default value was lowered
from 60 seconds to 10 seconds. This change does not affect
deployments, and there is no migration process needed.

**Raise RLIMIT_NPROC**

In order to support denser storage systems, Swift processes will now
attempt to set the RLIMIT_NPROC value to 8192

**Server exit codes**

Swift processes will now exit with non-zero exit codes on config errors

**Quarantine logs**

Swift will now log at warn level when an object is quarantined

## Community growth

This release of Swift is the work of twenty-three developers and includes
eight first-time contributors to the project:

* Morgan Fainberg
* Zhang Jinnan
* Kiyoung Jung
* Steve Kowalik
* Sushil Kumar
* Cristian A Sanchez
* Jeremy Stanley
* Yuriy Taraday

Thank you to everyone who contributes code, promotes the project, and
facilitates the community. Your contributions are what make this
project successful. 


--John





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana Release V3 Extensions and new features to quota

2014-01-28 Thread Vishvananda Ishaya
Hi Vinod,

Sorry for the top post, but there is a lot that needs to be done across 
projects to make the idea of domains and trees actually work. One of the issues 
which you mention below is the idea of quotas. I was just having a discussion 
with some folks in IRC about this very issue, and there are quite a few people 
who would like to help with this. I’m going to send out another email to the 
list entitled “Hierarchicical Multitenancy Discussion” in a bit on this topic.

Vish

On Jan 20, 2014, at 3:17 AM, Vinod Kumar Boppanna 
vinod.kumar.boppa...@cern.ch wrote:

 Hi,
 
 My name is Vinod Kumar Boppanna and I was testing the quota part in the
 OpenStack Havana Release. I had installed the Havana Release in a single
 VM through the RDO process. During testing, I used the AUTH_URL as
 
 OS_AUTH_URL=http://ip_address:35357/v2.0/
 
 Because of this, nova is using the following v2 attributes for the quotas
 
 compute_extension:quotas:show: ,
 compute_extension:quotas:update: rule:admin_api,
 compute_extension:quotas:delete: rule:admin_api,
 
 But there are other quota attributes available for v3 and they are
 
 compute_extension:v3:os-quota-sets:discoverable: ,
 compute_extension:v3:os-quota-sets:show: ,
 compute_extension:v3:os-quota-sets:update: rule:admin_api,
 compute_extension:v3:os-quota-sets:delete: rule:admin_api,
 compute_extension:v3:os-quota-sets:detail: rule:admin_api,
 
 My question is how can I use the V3 extensions. I mean, whether I can
 use them by changing the AUTH_URL as
 
 OS_AUTH_URL=http://ip_address:35357/v3.0/ (but this didn't work).
 
 I also wonder whether the RDO process installed the Havana setup with V3
 extensions or just V2 extensions.
 
 I could test all the existing quota features with respect to tenant and the 
 users in a tenant.
 During this, I had observed the following things
 
 1. Weak Notifications - Let's say that a user is added as a member of a
 project and has created an instance in that project. When he logs in to
 the dashboard he can see that an instance has been created by him. Now,
 the administrator removes his membership from the project. When the user
 logs in, he will not be able to see the instance that he created earlier.
 But the instance still exists and the user can log onto it. If the
 administrator adds him back to the project, then the user is able to see
 the same instance again.
 
 2. By default the policy.json file allows any user in a project to destroy an 
 instance created by another user 
in the same project
 
 3. I couldn't find a link or page in the dashboard where I can set the
 quota limits of a user in a project. I could do it for a project, but not
 for a user. I did set the quota limits for the user using nova commands.
 
 4. When I see instances that have been created by users in a project, it
 does not show who has created each instance. For example, if a project has
 2 users and each user created 1 VM instance, then in the Instances link
 the dashboard shows both instances with their names and details, but it
 does not show who has created which VM.
 
 5. When a VM is created, it normally allows SSH login using the key
 pair generated by the user. But the console link provided in the
 dashboard only allows login through a password. So, I have to log in to
 the VM at least once through the command line using the key, set the root
 password (because during VM creation I am not asked to enter a root
 password) and then use the console provided in the dashboard.
 
 We also had a short discussion here (at CERN) to take the quota features 
 further.
 Among these features, the first one we would like to have is
 
 Define roles like Admin (which is already there), Domain Admin and
 Project Admin.  The Admin can define different domains in the cloud
 and also assign a person as Domain Admin to each domain respectively.
 Also, the Admin will define a quota for each Domain.
 
 The Domain Admin role for a person in a Domain allows him/her to define
 the Projects/Tenants in that domain and also define a person as Project
 Admin to each project in that domain respectively.  This person will also
 define Quota for each project with the condition that the sum of quota
 limits of all projects should be less than or equal to its domain quota
 limits.
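 
 A sketch of that invariant (helper names invented for illustration):
 
     def project_quota_allowed(domain, project, resource, new_limit):
         # the sum of project limits in the domain, including the
         # proposed change, must stay within the domain's own limit
         total = sum(p.quota_limit(resource)
                     for p in domain.projects if p != project)
         return total + new_limit <= domain.quota_limit(resource)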
 
 The Project Admin can add users to each project and also define quota
 for each user respectively.
 
 We are thinking of first having this sort of tree hierarchy where the
 parent can manage all the things beneath them.
 
 I think for this, we need to have the following things in OpenStack
 1. Allow to define roles (this is already there)
 2. Define the meaning of these roles in the policy.json file of nova
 3. Need to add a little bit of code to understand this hierarchy and allow
 the functionalities explained above.
 
 Once we have this, we can then think of quota delegation.
 
 Any comments, please let me know...
 
 Regards,
 Vinod Kumar 

Re: [openstack-dev] [Reminder] - Gate Blocking Bug Day on Monday Jan 27th

2014-01-28 Thread Matt Riedemann



On 1/24/2014 2:29 PM, Sean Dague wrote:

Correction, Monday Jan 27th.

My calendar widget was apparently still on May for summit planning...

On 01/24/2014 07:40 AM, Sean Dague wrote:

It may feel like it's been gate bug day all the days, but we would
really like to get people together for gate bug day on Monday, and get
as many people, including as many PTLs as possible, to dive into issues
that we are hitting in the gate.

We have 2 goals for the day.

** Fingerprint all the bugs **

As of this second, we have fingerprints matching 73% of gate failures,
that tends to decay over time, as new issues are introduced, and old
ones are fixed. We have a hit list of issues here -
http://status.openstack.org/elastic-recheck/data/uncategorized.html

Ideally we want to get and keep the categorization rate up past 90%.
Basically the process is dive into a failed job, look at how it failed,
register a bug (or find an existing bug that was registered), and build
and submit a finger print.

** Tackle the Fingerprinted Bugs **

The fingerprinted bugs - http://status.openstack.org/elastic-recheck/
are now sorted by the # of hits we've gotten in the last 24hrs across
all queues, so that we know how much immediate pain this is causing us.

We'll do this on the #openstack-gate IRC channel, which I just created.
We'll be helping people through what's required to build fingerprints,
trying to get lots of eyes on the existing bugs, and see how many of
these remaining races we can drive out.

Looking forward to Monday!

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



For those that haven't compared numbers yet, before the bug day 
yesterday the categorization rate was 73% and now it's 96.4%, so 
fingerprinting coverage is much better.


I'll leave it up to Sean to provide a more executive-level summary if 
one is needed. :)


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Everett Toews
Hi Sean,

Could "1.1.1 Every Inbound WSGI request should be logged Exactly Once" be 
used to track API call data in order to discover which API calls are being 
made most frequently?

It certainly seems like it could, but I want to confirm. I ask because this 
came up as "B. Get aggregate API call data from companies willing to share 
it." in the user survey discussion [1].

Thanks,
Everett

[1] http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html


On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:

 Back at the beginning of the cycle, I pushed for the idea of doing some
 log harmonization, so that the OpenStack logs, across services, made
 sense. I've pushed a proposed changes to Nova and Keystone over the past
 couple of days.
 
 This is going to be a long process, so right now I want to just focus on
 making INFO level sane, because as someone that spends a lot of time
 staring at logs in test failures, I can tell you it currently isn't.
 
 https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
 written down so far, comments welcomed.
 
 We kind of need to solve this set of recommendations once and for all up
 front, because negotiating each change, with each project, isn't going
 to work (e.g - https://review.openstack.org/#/c/69218/)
 
 What I'd like to find out now:
 
 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for various log
 levels?
 3) who's interested in helping get these kinds of patches into various
 projects in OpenStack?
 4) which projects are interested in participating (i.e. interested in
 prioritizing landing these kinds of UX improvements)
 
 This is going to be progressive and iterative. And will require lots of
 folks involved.
 
   -Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill dnsmasq and ns-meta softly]

2014-01-28 Thread Edgar Magana
No doubt about it!!!

Edgar

On 1/28/14 8:45 AM, Jay Pipes jaypi...@gmail.com wrote:

This might just be the most creative commit message of the year.

-jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Scott Devoid

 For the uses I've seen of it in the nova api code INFO would be perfectly
 fine in place of AUDIT.


We've found the AUDIT logs in nova useful for tracking which user initiated
a particular request (e.g. delete this instance). AUDIT had a much better
signal-to-noise ratio than INFO or DEBUG, although this seems to have
changed since Essex. For example, nova-compute spits out
"AUDIT nova.compute.resource_tracker" messages every minute even if there
are no changes :-/

~ Scott


On Tue, Jan 28, 2014 at 11:11 AM, Everett Toews everett.to...@rackspace.com
 wrote:

 Hi Sean,

 Could 1.1.1 Every Inbound WSGI request should be logged Exactly Once be
 used to track API call data in order to discover which API calls are being
 made most frequently?

 It certainly seems like it could but I want to confirm. I ask because this
 came up as B Get aggregate API call data from companies willing to share
 it. in the user survey discussion [1].

 Thanks,
 Everett

 [1]
 http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html


 On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:

  Back at the beginning of the cycle, I pushed for the idea of doing some
  log harmonization, so that the OpenStack logs, across services, made
  sense. I've pushed a proposed changes to Nova and Keystone over the past
  couple of days.
 
  This is going to be a long process, so right now I want to just focus on
  making INFO level sane, because as someone that spends a lot of time
  staring at logs in test failures, I can tell you it currently isn't.
 
  https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
  written down so far, comments welcomed.
 
  We kind of need to solve this set of recommendations once and for all up
  front, because negotiating each change, with each project, isn't going
  to work (e.g - https://review.openstack.org/#/c/69218/)
 
  What I'd like to find out now:
 
  1) who's interested in this topic?
  2) who's interested in helping flesh out the guidelines for various log
  levels?
  3) who's interested in helping get these kinds of patches into various
  projects in OpenStack?
  4) which projects are interested in participating (i.e. interested in
  prioritizing landing these kinds of UX improvements)
 
  This is going to be progressive and iterative. And will require lots of
  folks involved.
 
-Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net / sean.da...@samsung.com
  http://dague.net
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [TripleO] adding process/service monitoring

2014-01-28 Thread Ladislav Smola

Hello,

excellent, this is exactly what we need in Tuskar. :-)

It might be good to monitor it via snmpd, as this daemon will already be
running on each node. From what I can see it should be possible, though
not a very popular approach.

Then it would be nice to have the data stored in Ceilometer, as it
provides a generic backend for storing samples and querying them (it
would also be nice to have a history of those samples). It should be
enough to send the data in the correct format to the notification bus and
Ceilometer will store it.

For now, Tuskar would just grab it from Ceilometer.

The problem here is that every node can have different services running,
so you would have to write some smart inspector that knows what is
running where. We have been talking about exposing this kind of
information in Glance, so it would return the list of services for an
image. Then you would get the list of nodes for each image and poll them
via SNMP. This could probably be an inspector for the central agent, the
same approach as for getting the baremetal metrics.

Does it sound reasonable? Or you see some critical flaws in this 
approach? :-)


Kind Regards,
Ladislav



On 01/28/2014 02:59 AM, Richard Su wrote:

Hi,

I have been looking into how to add process/service monitoring to
tripleo. Here I want to be able to detect when an OpenStack-dependent
component that is deployed on an instance has failed. And when a failure
has occurred I want to be notified and eventually see it in Tuskar.

Ceilometer doesn't handle this particular use case today. So I have been
doing some research and there are many options out there that provide
process checks: nagios, sensu, zabbix, and monit. I am a bit wary of
pulling one of these options into tripleo. There are increased
operational and maintenance costs with each of them, and physical device
monitoring is currently in the works for Ceilometer, lessening the need
for some of the other abilities that another monitoring tool would
provide.

For the particular use case of monitoring processes/services, at a high
level, I am considering writing a simple daemon to perform the check.
Checks and failures are written out as messages to the notification bus.
Interested parties like Tuskar or Ceilometer can subscribe to these
messages.
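
As a rough illustration of what I have in mind (a sketch only: the event
types, payload fields, checked services, and use of the oslo.messaging
notifier are all assumptions at this point, and systemd is assumed for the
process check):

  import socket
  import subprocess
  import time

  from oslo.config import cfg
  from oslo import messaging

  transport = messaging.get_transport(cfg.CONF)
  notifier = messaging.Notifier(
      transport, driver='messaging',
      publisher_id='servicemon.%s' % socket.gethostname())

  def is_running(service):
      # 'systemctl is-active' exits non-zero when the unit is not running
      return subprocess.call(['systemctl', 'is-active', service]) == 0

  while True:
      for service in ('openstack-nova-compute', 'neutron-l3-agent'):
          payload = {'service': service, 'host': socket.gethostname()}
          if is_running(service):
              notifier.info({}, 'servicemon.check.ok', payload)
          else:
              notifier.error({}, 'servicemon.check.failed', payload)
      time.sleep(60)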

In general does this sound like a reasonable approach?

There is also the question of how to configure or figure out which
processes we are interested in monitoring. I need to do more research
here but I'm considering either looking at the elements listed by
diskimage-builder or by looking at the orc post-configure.d scripts to
find services that are restarted.

I welcome your feedback and suggestions.

- Richard Su

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-28 Thread Jay Dobies



On 01/28/2014 11:42 AM, Jay Pipes wrote:

On Tue, 2014-01-28 at 10:02 -0500, Tzu-Mainn Chen wrote:

Yep, although the reason why - that no end-user will know what these terms
mean - has never been entirely convincing to me.


Well, tenants would never see any of the Tuskar UI, so I don't think we
need worry about them. And if a deployer is enabling Tuskar -- and using
Tuskar/Triple-O for undercloud deployment -- then I would think that the
deployer would understand the concept/terminology of undercloud and
overcloud, since it's an essential concept in deploying with
Triple-O. :)

So, in short, I don't see a problem with using the terms undercloud and
overcloud.

Best,
-jay


+1, I was going to say the same thing. Someone installing and using 
Tuskar will have to be sold on the concept of it, and I'm not sure how 
we'd describe what it does without using those terms.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PCI Request, associating a device to its original request

2014-01-28 Thread Robert Li (baoli)
Hi Yongli,

In today's IRC meeting, we discussed this a little bit. I think the answer 
probably lies in the definition of the PCI request. In the current 
implementation of _translate_alias_to_requests(), a new property (assume it's 
called requestor_id) may be added to the PCI request. And this is how it works:
   -- add a requestor_id to the request spec, and return it to the caller
   -- when a device is allocated, a mapping from requestor_id to the PCI 
device can be established
   -- the requestor can later retrieve the device by calling something 
like get_pci_device(requestor_id)

The requestor id could be a UUID that is generated by the python uuid module.

In the neutron SRIOV case, we need to create a PCI request per nic (or per 
requested_network). Therefore, the generic PCI layer may provide an API so that 
we can do so. Such an API may look like:
  create_pci_request(count, request_spec). The request_spec is key-value 
pairs. It returns a requestor_id.

Also if PCI flavor API is available later, I guess we can have an API like:
  create_pci_request_from_pci_flavor(count, pci_flavor). It returns 
a requestor_id as well.
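
Putting the proposed pieces together (a sketch of the intended usage; none
of these functions exist yet, and the request_spec keys are made up):

  # one request per requested network in the SRIOV case
  requestor_id = create_pci_request(
      count=1,
      request_spec={'vendor_id': '8086', 'product_id': '1520'})

  # ... scheduling and allocation happen ...

  # the requestor retrieves the device allocated for its request
  device = get_pci_device(requestor_id)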

Let me know what you think.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-28 Thread Vishvananda Ishaya
Hi Everyone,

I apologize for the obtuse title, but there isn't a better succinct term to 
describe what is needed. OpenStack has no support for multiple owners of 
objects. This means that a variety of private cloud use cases are simply not 
supported. Specifically, objects in the system can only be managed on the 
tenant level or globally.

The key use case here is to delegate administration rights for a group of 
tenants to a specific user/role. There is something in Keystone called a 
“domain” which supports part of this functionality, but without support from 
all of the projects, this concept is pretty useless.

In IRC today I had a brief discussion about how we could address this. I have 
put some details and a straw man up here:

https://wiki.openstack.org/wiki/HierarchicalMultitenancy

I would like to discuss this strawman and organize a group of people to get 
actual work done by having an irc meeting this Friday at 1600UTC. I know this 
time is probably a bit tough for Europe, so if we decide we need a regular 
meeting to discuss progress then we can vote on a better time for this meeting.

https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting

Please note that this is going to be an active team that produces code. We will 
*NOT* spend a lot of time debating approaches, and instead focus on making 
something that works and learning as we go. The output of this team will be a 
MultiTenant devstack install that actually works, so that we can ensure the 
features we are adding to each project work together.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] extending keystone identity

2014-01-28 Thread Simon Perfer
Thanks again, Dolph.
First, is there some good documentation on how to write a custom driver? I'm 
wondering specifically about how a keystone user-list is mapped to a specific 
function in identity/backend/mydriver.py. I suppose this mapping is why I was 
getting the 500 error about the action not being implemented.
Secondly, before poking around with writing a custom driver, I decided to 
simply inherit from ldap.Identity, as follows:

class Identity(ldap.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def authenticate(self, user_id, password):
        LOG.debug('in auth function')

When I get a list of users, I never get the debug output. Further, I removed 
the authenticate method from the Identity class in ldap.py and list-users STILL 
worked. Unsure how this is possible. It seems we're never hitting the 
authenticate method, which is why overriding it in my custom driver doesn't 
make much of a difference in reaching my goal for local users.
Is there another method I'm supposed to be overriding?
I appreciate the help -- I know these are likely silly questions to seasoned 
keystone developers.
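
(For context on the mapping being asked about -- a minimal sketch, assuming
the usual keystone driver pattern and worth verifying against the actual
code: the [identity] driver option in keystone.conf selects the backend
class, and a `keystone user-list` is dispatched by the identity manager to
the driver's list_users() method, not to authenticate(), which would explain
why the debug line never fires:

  # keystone.conf (assumed standard driver loading)
  #   [identity]
  #   driver = keystone.identity.backends.nicira.Identity

  class Identity(ldap.Identity):
      def list_users(self, *args, **kwargs):
          LOG.debug('in list_users')
          return super(Identity, self).list_users(*args, **kwargs)

authenticate() would only be invoked on token requests, so a user-list never
touches it.)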

From: dolph.math...@gmail.com
Date: Mon, 27 Jan 2014 22:35:18 -0600
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] extending keystone identity

From your original email, it sounds like you want to extend the existing LDAP 
identity driver implementation, rather than writing a custom driver from 
scratch, which is what you've written. The TemplatedCatalog driver sort of 
follows that pattern with the KVS catalog driver, although it's not a 
spectacular example.



On Mon, Jan 27, 2014 at 9:11 PM, Simon Perfer simon.per...@hotmail.com wrote:





I dug a bit more and found this in the logs:

(keystone.common.wsgi): 2014-01-27 19:07:13,851 WARNING The action you have requested has not been implemented.


Despite basing my (super simple) code on the SQL or LDAP backends, I must be 
doing something wrong.

-- I've placed my backend code in 
/usr/share/pyshared/keystone/identity/backends/nicira.py or 
/usr/share/pyshared/keystone/common/nicira.py

-- I DO see the "My authentication module loaded" message in the log


I would appreciate any help in figuring out what I'm missing. Thanks!

From: simon.per...@hotmail.com
To: openstack-dev@lists.openstack.org


Date: Mon, 27 Jan 2014 21:58:43 -0500
Subject: Re: [openstack-dev] extending keystone identity




Dolph, I appreciate the response and pointing me in the right direction.
Here's what I have so far:
imports here

CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(identity.Driver):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def authenticate(self, user_id, password, domain_scope=None):
        LOG.debug('in authenticate method')


When I request a user-list via the python-keystoneclient, we never make it into 
the authenticate method (as is evident by the missing debug log).

Any thoughts on why I'm not hitting this method?

From: dolph.math...@gmail.com
Date: Mon, 27 Jan 2014 18:14:50 -0600


To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] extending keystone identity

_check_password() is a private/internal API, so we make no guarantees about 
its stability. Instead, override the public authenticate() method with 
something like this:

def authenticate(self, user_id, password, domain_scope=None):
    if user_id in SPECIAL_LIST_OF_USERS:
        # compare against value from keystone.conf
        pass
    else:
        return super(CustomIdentityDriver, self).authenticate(
            user_id, password, domain_scope)




On Mon, Jan 27, 2014 at 3:27 PM, Simon Perfer simon.per...@hotmail.com wrote:





I'm looking to create a simple Identity driver that will look at usernames. A 
small number of specific users should be authenticated by looking at a 
hard-coded password in keystone.conf, while any other users should fall back to 
LDAP authentication.




I based my original driver on what's found here:
http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/




As can be seen in the github code 
(https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
 there's a _check_password() method which is supposedly called at some point.




I've based my driver on this ldapauth.py file, and created an Identity class 
which subclasses sql.Identity. Here's what I have so far:








CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(sql.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def _check_password(self, password, user_ref):
        LOG.debug('Authenticating via my custom hybrid authentication')



Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-28 Thread Chmouel Boudjnah
On Tue, Jan 28, 2014 at 7:35 PM, Vishvananda Ishaya
vishvana...@gmail.comwrote:

 The key use case here is to delegate administration rights for a group of
 tenants to a specific user/role. There is something in Keystone called a
 domain which supports part of this functionality, but without support
 from all of the projects, this concept is pretty useless.



FYI: swift and keystoneauth allow cross-project ACLs

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Sean Dague
On 01/28/2014 12:41 PM, Scott Devoid wrote:
 For the uses I've seen of it in the nova api code INFO would be
 perfectly fine in place of AUDIT.
 
 
 We've found the AUDIT logs in nova useful for tracking which user
 initiated a particular request (e.g. delete this instance). AUDIT had a
 much better signal to noise ratio than INFO or DEBUG. Although this
 seems to have changed since Essex. For example nova-compute spits out
 AUDIT nova.compute.resource_tracker messages every minute even if
 there are no changes :-/

A big part of my interest here is to make INFO a useful informational
level for operators. That means getting a bunch of messages out of it
that don't belong.

We should be logging user / tenant on every wsgi request, so that should
be parsable out of INFO. If not, we should figure out what is falling
down there.
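
Something like this is the shape I have in mind for that one INFO line per
request (a sketch only; the exact fields and format are up for debate):

  import logging
  import time

  LOG = logging.getLogger(__name__)

  def log_request(context, request, status, start):
      # one INFO line per inbound WSGI request; user/tenant come from
      # the request context, so operators can grep per-user activity
      # straight out of INFO
      LOG.info('%(method)s %(path)s status: %(status)s time: %(time).3f '
               'user: %(user)s tenant: %(tenant)s',
               {'method': request.method, 'path': request.path,
                'status': status, 'time': time.time() - start,
                'user': context.user_id, 'tenant': context.project_id})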

Follow-on question: do you primarily use the EC2 API or the OSAPI? There are
some current shortcomings in the EC2 logging, and figuring out how to
normalize those would be good as well.

-Sean

 
 ~ Scott
 
 
 On Tue, Jan 28, 2014 at 11:11 AM, Everett Toews
 everett.to...@rackspace.com mailto:everett.to...@rackspace.com wrote:
 
 Hi Sean,
 
 Could 1.1.1 Every Inbound WSGI request should be logged Exactly
 Once be used to track API call data in order to discover which API
 calls are being made most frequently?
 
 It certainly seems like it could but I want to confirm. I ask
 because this came up as B Get aggregate API call data from
 companies willing to share it. in the user survey discussion [1].
 
 Thanks,
 Everett
 
 [1]
 
 http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html
 
 
 On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:
 
  Back at the beginning of the cycle, I pushed for the idea of doing
 some
  log harmonization, so that the OpenStack logs, across services, made
  sense. I've pushed a proposed changes to Nova and Keystone over
 the past
  couple of days.
 
  This is going to be a long process, so right now I want to just
 focus on
  making INFO level sane, because as someone that spends a lot of time
  staring at logs in test failures, I can tell you it currently isn't.
 
  https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
  written down so far, comments welcomed.
 
  We kind of need to solve this set of recommendations once and for
 all up
  front, because negotiating each change, with each project, isn't going
  to work (e.g - https://review.openstack.org/#/c/69218/)
 
  What I'd like to find out now:
 
  1) who's interested in this topic?
  2) who's interested in helping flesh out the guidelines for
 various log
  levels?
  3) who's interested in helping get these kinds of patches into various
  projects in OpenStack?
  4) which projects are interested in participating (i.e. interested in
  prioritizing landing these kinds of UX improvements)
 
  This is going to be progressive and iterative. And will require
 lots of
  folks involved.
 
-Sean
 
  --
  Sean Dague
  Samsung Research America
  s...@dague.net mailto:s...@dague.net / sean.da...@samsung.com
 mailto:sean.da...@samsung.com
  http://dague.net
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday January 28th at 19:00 UTC

2014-01-28 Thread Elizabeth Krumbach Joseph
On Mon, Jan 27, 2014 at 11:04 AM, Elizabeth Krumbach Joseph
l...@princessleia.com wrote:
 The OpenStack Infrastructure (Infra) team is hosting our weekly
 meeting tomorrow, Tuesday January 28th, at 19:00 UTC in
 #openstack-meeting

Thanks to everyone who joined us, meeting minutes and logs now available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-01-28-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-01-28-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-01-28-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reviewday data download problem

2014-01-28 Thread Elizabeth Krumbach Joseph
On Mon, Jan 27, 2014 at 11:41 AM, Brant Knudson b...@acm.org wrote:
 A few days ago, a change I submitted to reviewday to generate JSON results
 for easy consumption by an application merged. I was hoping that this could
 be used with next-review to help me prioritize reviews.

 So I was expecting to now be able to go to the URL and get the .json file,
 like this:
  curl http://status.openstack.org/reviews/reviewday.json

 Unfortunately I'm getting a 403 Forbidden error saying I don't have
 permission. I think the script is working and generating reviewday.json
 because if I use a different filename I get 404 Not Found instead.

 I probably made some assumptions that I shouldn't have about how the
 status.openstack.org web site works. Is there something else I can change
 myself to open up access to the file, or someone I can contact that can
 update the config?

This is a pretty simple fix in
modules/openstack_project/templates/status.vhost.erb in the
openstack-infra/config repo. I'm tracking progress at
https://bugs.launchpad.net/openstack-ci/+bug/1273833 and will link the
review to this bug once it's up.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-28 Thread Doug Hellmann
There are several reviews related to adding VMware interface code to the
oslo-incubator so it can be shared among projects (start at
https://review.openstack.org/#/c/65075/7 if you want to look at the code).

I expect this code to be fairly stand-alone, so I wonder if we would be
better off creating an oslo.vmware library from the beginning, instead of
bringing it through the incubator.

Thoughts?

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-28 Thread Justin Santa Barbara
Thanks John - combining with the existing effort seems like the right
thing to do (I've reached out to Claxton to coordinate).  Great to see
that the larger issues around quotas / write-once have already been
agreed.

So I propose that sharing will work in the same way, but some values
are visible across all instances in the project.  I do not think it
would be appropriate for all entries to be shared this way.  A few
options:

1) A separate endpoint for shared values
2) Keys are shared iff they start with a prefix, e.g. 'peers_XXX'
3) Keys are set the same way, but a 'shared' parameter can be passed,
either as a query parameter or in the JSON.

I like option #3 the best, but feedback is welcome.
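
To illustrate option #3 (a sketch only: the 'discovery' endpoint and the
'shared' flag are exactly what is being proposed here, not anything that
exists in the metadata service today):

  import json
  import requests

  BASE = 'http://169.254.169.254/openstack/latest'

  # an instance publishes one value, flagged as shared with the project
  requests.post(BASE + '/discovery',
                data=json.dumps({'value': '10.0.0.5:4001', 'shared': True}),
                headers={'Content-Type': 'application/json'})

  # any instance in the same project reads back everyone's values
  for entry in requests.get(BASE + '/discovery').json():
      print(entry['value'])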

I think I will have to store the value using a system_metadata entry
per shared key.  I think this avoids issues with concurrent writes,
and also makes it easier to have more advanced sharing policies (e.g.
when we have hierarchical projects)

Thank you to everyone for helping me get to what IMHO is a much better
solution than the one I started with!

Justin




On Tue, Jan 28, 2014 at 4:38 AM, John Garbutt j...@johngarbutt.com wrote:
 On 27 January 2014 14:52, Justin Santa Barbara jus...@fathomdb.com wrote:
 Day, Phil wrote:


  We already have a mechanism now where an instance can push metadata as
  a way of Windows instances sharing their passwords - so maybe this
  could
  build on that somehow - for example each instance pushes the data its
  willing to share with other instances owned by the same tenant ?
 
  I do like that and think it would be very cool, but it is much more
  complex to
  implement I think.

 I don't think its that complicated - just needs one extra attribute stored
 per instance (for example into instance_system_metadata) which allows the
 instance to be included in the list


 Ah - OK, I think I better understand what you're proposing, and I do like
 it.  The hardest bit of having the metadata store be full read/write would
 be defining what is and is not allowed (rate-limits, size-limits, etc).  I
 worry that you end up with a new key-value store, and with per-instance
 credentials.  That would be a separate discussion: this blueprint is trying
 to provide a focused replacement for multicast discovery for the cloud.

 But: thank you for reminding me about the Windows password though...  It may
 provide a reasonable model:

 We would have a new endpoint, say 'discovery'.  An instance can POST a
 single string value to the endpoint.  A GET on the endpoint will return any
 values posted by all instances in the same project.

 One key only; name not publicly exposed ('discovery_datum'?); 255 bytes of
 value only.

 I expect most instances will just post their IPs, but I expect other uses
 will be found.

 If I provided a patch that worked in this way, would you/others be on-board?

 I like that idea. Seems like a good compromise. I have added my review
 comments to the blueprint.

 We have this related blueprints going on, setting metadata on a
 particular server, rather than a group:
 https://blueprints.launchpad.net/nova/+spec/metadata-service-callbacks

 It is limiting things using the existing Quota on metadata updates.

 It would be good to agree a similar format between the two.

 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ec2-api]cmd euca-describle-address will return error. someone can help me?

2014-01-28 Thread Vishvananda Ishaya
Hi!

Nice find, this is a bug. I have reported it along with instructions for fixing 
here:

https://bugs.launchpad.net/nova/+bug/1273837
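
For the curious, the shape of the fix is to stop assuming every network
backend implements get_fixed_ip(). A minimal sketch (illustrative only; see
the bug report for the actual instructions):

  def _format_address(self, context, floating_ip):
      ec2_id = None
      if floating_ip['fixed_ip_id']:
          try:
              fixed = self.network_api.get_fixed_ip(
                  context, floating_ip['fixed_ip_id'])
          except NotImplementedError:
              # neutronv2 cannot look up fixed IPs by id
              fixed = None
          if fixed and fixed['instance_uuid'] is not None:
              ec2_id = ec2utils.id_to_ec2_inst_id(fixed['instance_uuid'])
      ...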

Vish

On Jan 21, 2014, at 6:20 PM, li zheming lizhemin...@gmail.com wrote:

 hi all:
   I used euca2ools 3.1.10 to test the EC2 API, but when I run the command
 euca-describe-addresses, it returns an error like this:
    error (NotImplementedError): unknown error occurred
 
 my environment:
 two floating IPs:
 200.200.130.3 - bound to an instance
 200.200.130.4 - not bound

 if I run euca-describe-addresses 200.200.130.4, it returns OK.
 if I run euca-describe-addresses 200.200.130.3, it returns:
    error (NotImplementedError): unknown error occurred
 if I run euca-describe-addresses with no argument, it returns:
    error (NotImplementedError): unknown error occurred
 
 so I think the error happens for floating IPs that are bound to an instance.
 I found the code related to this:
 nova/api/ec2/cloud.py
 def _format_address(self, context, floating_ip):
     ec2_id = None
     if floating_ip['fixed_ip_id']:
         fixed_id = floating_ip['fixed_ip_id']
         fixed = self.network_api.get_fixed_ip(context, fixed_id)
         if fixed['instance_uuid'] is not None:
             ec2_id = ec2utils.id_to_ec2_inst_id(fixed['instance_uuid'])
     address = {'public_ip': floating_ip['address'],
                'instance_id': ec2_id}
     if context.is_admin:
         details = "%s (%s)" % (address['instance_id'],
                                floating_ip['project_id'])
         address['instance_id'] = details
     return address

 if a floating IP is bound to an instance, the highlighted code path above
 will call get_fixed_ip(context, fixed_id), but in get_fixed_ip:
 nova/network/neutronv2/api.py:
 def get_fixed_ip(self, context, id):
     """Get a fixed ip from the id."""
     raise NotImplementedError()

 it raises a NotImplementedError exception.
 
 so I have two questions:
 1. is my test method OK, or did I run the command incorrectly?
 2. does the neutron client not support get_fixed_ip by id?
 
 thanks!
 lizheming
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] extending keystone identity

2014-01-28 Thread Adam Young
Use two separate domains for them. Make the userids be uuid@domainid  
to be able to distinguish one from the other.



On 01/27/2014 04:27 PM, Simon Perfer wrote:
I'm looking to create a simple Identity driver that will look at 
usernames. A small number of specific users should be authenticated by 
looking at a hard-coded password in keystone.conf, while any other 
users should fall back to LDAP authentication.


I based my original driver on what's found here:

http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/

As can be seen in the github code 
(https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py), 
there's a _check_password() method which is supposedly called at some 
point.


I've based my driver on this ldapauth.py file, and created an Identity 
class which subclasses sql.Identity. Here's what I have so far:


CONF = config.CONF
LOG = logging.getLogger(__name__)

(Roles should also be scope-able.)


class Identity(sql.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def _check_password(self, password, user_ref):
        LOG.debug('Authenticating via my custom hybrid authentication')
        username = user_ref.get('name')
        LOG.debug('Username = %s' % username)


I can see from the syslog output that we never enter the 
_check_password() function.



Can someone point me in the right direction regarding which function 
calls the identity driver? Also, what is the entry function in the 
identity drivers? Why wouldn't _check_password() be called, as we see 
in the github / blog example above?


THANKS!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] - Cloud federation on top of the Apache

2014-01-28 Thread Adam Young

On 01/27/2014 12:26 PM, Marek Denis wrote:

Dear all,

We have Identity Provider and mapping CRUD operations already merged, 
so it's a good point to prepare Keystone and Apache to handle SAML (as 
a starter) requests/responses.
For the next OpenStack release it will be Apache that handles SAML 
communication. In order to force SAML authentication, an admin defines 
so-called 'protected resources', hidden behind certain URLs. In order 
to get access to the aforementioned resources, a SAML-authenticated session 
is required. In terms of Keystone and federation this 'resource' would 
be just a token, ready to be used later against other OpenStack 
services. For obvious reasons we cannot make mod_shib watch all the 
Keystone URLs clients can access, so I think a dedicated URL should be 
used. That's right, a client who wants to grab a token upon SAML 
authn would need to hit 
https://keystone:5000/v3/OS-FEDERATION/tokens/identity_provider/{idp}/protocol/{protocol}. 
Such a URL would also be somewhat dynamic, because this later 
lets Keystone distinguish* which (already registered) IdP and federation 
protocol (SAML, OpenID, etc.) is going to be used.


A simplified workflow could look like this:


Pre-req: Apache frontend is  configured to protect URLs matching regex 
/OS-FEDERATION/tokens/identity_provider/(.*?)/protocol/(.*?)


1) In order to get a valid token upon federated authentication a 
client enters protected resource, for instance 
https://keystone:5000/v3/OS-FEDERATION/tokens/identity_provider/{idp}/protocol/{protocol}
2) After the client is authenticated (with ECP/similar extension) the 
request enters Keystone public pipeline.
3) Apache modules  store parsed parameters from a SAML assertion in a 
wsgi environment,
4) A class inheriting from wsgi.Middleware checks whether the 
REQUEST_URL (or similar) environment variable matches the aforementioned 
regexp (e.g. /OS-FEDERATION/tokens/identity_provider/.*?/protocol/.*?) 
and, if the match is positive, fetches env parameters starting with a 
certain value (a prefix configurable in keystone.conf, say 'ADFS_'). 
The parameters are stored as a dictionary and passed in a structure, 
later available to other filters/middleware objects in the pipeline 
(TO BE CONFIRMED, MAYBE REWRITING PARAMS IS NOT REQUIRED). A sketch of 
this step follows the workflow below.
5) keystone/contrib/federation/routers.py has defined URL routes and 
fires keystone/contrib/federation/controllers.py methods that fetch 
IdP, protocol entities as well as the corresponding mapping entity 
with the mapping rules included. The rules are applied  on the 
assertion parameters and list of local users/groups is issued. The 
OpenStack token is generated, stored in the DB and returned to the 
user (formed as a valid JSON response).

6) The token can now be used for subsequent operations on the OpenStack cloud.
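
A sketch of the step-4 middleware (illustrative only: the class name, the
assertion_prefix option, and the environ key used to pass the parameters
along are assumptions, not settled code):

  import re

  FEDERATED_URL = re.compile(
      r'/OS-FEDERATION/tokens/identity_provider/(.+?)/protocol/(.+?)')

  # assumes keystone.common.wsgi and the keystone CONF object are in scope
  class FederationParamsMiddleware(wsgi.Middleware):
      def process_request(self, request):
          if not FEDERATED_URL.search(request.environ.get('PATH_INFO', '')):
              return
          prefix = CONF.federation.assertion_prefix  # e.g. 'ADFS_'
          # collect the attributes mod_shib placed in the wsgi environment
          assertion = dict((k, v) for k, v in request.environ.items()
                           if k.startswith(prefix))
          request.environ['federation.assertion'] = assertion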


*)
At first I thought the dynamic URLs 
(OS-FEDERATION/tokens/identity_provider/(.*?)/protocol/(.*?)) could be 
replaced with a single static one, and the information about the IdP and 
protocol could be sent as HTTP POST input, but from what I have already 
noticed, after the client is redirected to the IdP (and back to the SP) 
the initial input is lost.



I am looking forward to hear feedback from you.

Thanks,




This sounds sane.  We'd like to keep the param rewriting as an option, 
but if it breaks things, it could be done in phase 2.


To be clear, are you going to use mod_mellon as the Apache Auth module?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-01-28 Thread Stefano Maffulli
A few minutes ago we sent the first batch of invites to people who
contributed to any of the official OpenStack programs[1] from 00:00 UTC
on April 4, 2013 (Grizzly release day) until present.

We'll send more invites *after each milestone* from now on and until
feature freeze (March 6th, according to release schedule[2])

IMPORTANT CHANGE

Contrary to previous times, the code is a *$600 discount*. If you don't
use it before March 22, when registration prices will increase, *you
will be charged*.

 Use it! Now!

And apply for the Travel Support Program if you need to:
https://wiki.openstack.org/wiki/Travel_Support_Program

Cheers,
stef

[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
[2] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][heat] Migration to keystone v3 API questions

2014-01-28 Thread Adam Young

On 01/23/2014 06:21 AM, Steven Hardy wrote:

Hi all,

I've recently been working on migrating the heat internal interfaces to use
the keystone v3 API exclusively[1].

This work has mostly been going well, but I've hit a couple of issues which
I wanted to discuss, so we agree the most appropriate workarounds:

1. keystoneclient v3 functionality not accessible when catalog contains a
v2 endppoint:


Please chime in here:

https://review.openstack.org/#/c/62801/





In my test environment my keystone endpoint looks like:

http://127.0.0.1:5000/v2.0

And I'd guess this is similar to the majority of real deployments atm?

So when creating a keystoneclient object I've been doing:

from keystoneclient.v3 import client as kc_v3
# derive a v3 URL from the catalog's v2.0 auth_url
v3_endpoint = self.context.auth_url.replace('v2.0', 'v3')
client = kc_v3.Client(auth_url=v3_endpoint, ...

Which, assuming the keystone service has both v2 and v3 API's enabled
works, but any attempt to use v3 functionality fails with 404 because
keystoneclient falls back to using the v2.0 endpoint from the catalog.

So to work around this I do this:

# pinning endpoint= makes the client skip the catalog's v2.0 entry
client = kc_v3.Client(auth_url=v3_endpoint, endpoint=v3_endpoint, ...
client.authenticate()

Which results in the v3 features working OK.

So my questions are:
- Is this a reasonable workaround for production environments?
- What is the roadmap for moving keystone endpoints to be version agnostic?
- Is there work ongoing to make the client smarter in terms of figuring out
   what URL to use (version negotiation or substituting the appropriate path
   when we are in an environment with a legacy v2.0 endpoint..)

2. Client (CLI) support for v3 API

What is the status regarding porting keystoneclient to provide access to the v3
functionality on the CLI?

In particular, Heat is moving towards using domains to encapsulate the
in-instance users it creates[2], so administrators will require some way to
manage users in a non-default domain, e.g to get visibility of what Heat is
doing in that domain and debug in the event of any issues.

If anyone can provide any BP links or insight that would be much
appreciated!

Thanks,

Steve

[1] https://blueprints.launchpad.net/heat/+spec/keystone-v3-only
[2] https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill dnsmasq and ns-meta softly]

2014-01-28 Thread Salvatore Orlando
It might be creative, but it's a shame that it did not serve the purpose.
At least it confirmed the kernel bug was related to process termination in
network namespaces but was not due to SIGKILL exclusively, as it occurred with
SIGTERM as well.

On the bright side, Mark has now pushed another patch which greatly reduces
the occurrence of bug 1273386 [1].
We are also working with the ubuntu kernel team to assess whether a kernel
fix is needed.

Salvatore

[1] https://review.openstack.org/#/c/69653/


On 28 January 2014 18:13, Edgar Magana emag...@plumgrid.com wrote:

 No doubt about it!!!

 Edgar

 On 1/28/14 8:45 AM, Jay Pipes jaypi...@gmail.com wrote:

 This might just be the most creative commit message of the year.
 
 -jay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Brad Topol
So we are starting to add more cloud audit (aka CADF) support to 
OpenStack.  We have support in Nova and infrastructure added to Ceilometer 
and I am starting to add this capability to keystone.  This work is based 
on sending events to ceilometer.  If this is related to the audit work 
below I would like to be included. 

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Scott Devoid dev...@anl.gov
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   01/28/2014 12:47 PM
Subject:Re: [openstack-dev] Proposed Logging Standards



For the uses I've seen of it in the nova api code INFO would be perfectly 
fine in place of AUDIT.

We've found the AUDIT logs in nova useful for tracking which user 
initiated a particular request (e.g. delete this instance). AUDIT had a 
much better signal to noise ratio than INFO or DEBUG. Although this seems 
to have changed since Essex. For example nova-compute spits out 
AUDIT nova.compute.resource_tracker messages every minute even if there 
are no changes :-/

~ Scott


On Tue, Jan 28, 2014 at 11:11 AM, Everett Toews 
everett.to...@rackspace.com wrote:
Hi Sean,

Could 1.1.1 Every Inbound WSGI request should be logged Exactly Once be 
used to track API call data in order to discover which API calls are being 
made most frequently?

It certainly seems like it could but I want to confirm. I ask because this 
came up as B Get aggregate API call data from companies willing to share 
it. in the user survey discussion [1].

Thanks,
Everett

[1] 
http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html


On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:

 Back at the beginning of the cycle, I pushed for the idea of doing some
 log harmonization, so that the OpenStack logs, across services, made
 sense. I've pushed a proposed changes to Nova and Keystone over the past
 couple of days.

 This is going to be a long process, so right now I want to just focus on
 making INFO level sane, because as someone that spends a lot of time
 staring at logs in test failures, I can tell you it currently isn't.

 https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
 written down so far, comments welcomed.

 We kind of need to solve this set of recommendations once and for all up
 front, because negotiating each change, with each project, isn't going
 to work (e.g - https://review.openstack.org/#/c/69218/)

 What I'd like to find out now:

 1) who's interested in this topic?
 2) who's interested in helping flesh out the guidelines for various log
 levels?
 3) who's interested in helping get these kinds of patches into various
 projects in OpenStack?
 4) which projects are interested in participating (i.e. interested in
 prioritizing landing these kinds of UX improvements)

 This is going to be progressive and iterative. And will require lots of
 folks involved.

   -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] extending keystone identity

2014-01-28 Thread Simon Perfer
Thanks Adam. We played around with domains without success. There's a rather 
complex reason why, given our existing OpenStack environment.
I'm still hoping that it will be simple enough to extend an existing driver. 
I'd also love to learn how to code my own driver for some more complex 
authentication projects we have coming down the pipe.
Date: Tue, 28 Jan 2014 15:42:29 -0500
From: ayo...@redhat.com
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] extending keystone identity

Use two separate domains for them. Make the userids be uuid@domainid to be
able to distinguish one from the other.

On 01/27/2014 04:27 PM, Simon Perfer wrote:

I'm looking to create a simple Identity driver that will look at usernames.
A small number of specific users should be authenticated by looking at a
hard-coded password in keystone.conf, while any other users should fall back
to LDAP authentication.

I based my original driver on what's found here:

http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/

As can be seen in the github code
(https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
there's a _check_password() method which is supposedly called at some point.

I've based my driver on this ldapauth.py file, and created an Identity class
which subclasses sql.Identity. Here's what I have so far:

CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(sql.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def _check_password(self, password, user_ref):
        LOG.debug('Authenticating via my custom hybrid authentication')
        username = user_ref.get('name')
        LOG.debug('Username = %s' % username)

I can see from the syslog output that we never enter the _check_password()
function.

Can someone point me in the right direction regarding which function calls
the identity driver? Also, what is the entry function in the identity
drivers? Why wouldn't _check_password() be called, as we see in the github /
blog example above?

THANKS!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-28 Thread Carl Baldwin
I think I agree.  The new check isn't adding much value and we could
debate for a long time whether /30 is useful and should be disallowed
or not.  There are bigger fish to fry.

Carl

On Fri, Jan 24, 2014 at 10:43 AM, Paul Ward wpw...@us.ibm.com wrote:
 Given your obviously much more extensive understanding of networking than
 mine, I'm starting to move over to the "we shouldn't make this fix" camp.
 Mostly because of this:

 CARVER, PAUL pc2...@att.com wrote on 01/23/2014 08:57:10 PM:



 Putting a friendly helper in Horizon will help novice users and
 provide a good example to anyone who is developing an alternate UI
 to invoke the Neutron API. I’m not sure what the benefit is of
 putting code in the backend to disallow valid but silly subnet
 masks. I include /30, /31, AND /32 in the category of “silly” subnet
 masks to use on a broadcast medium. All three are entirely
 legitimate subnet masks, it’s just that they’re not useful for end
 host networks.

 My mindset has always been that we should programmatically prevent things
 that are definitively wrong.  Of which, these netmasks apparently are not.
 So it would seem we should leave neutron server code alone under the
 assumption that those using CLI to create networks *probably* know what
 they're doing.

 However, the UI is supposed to be the more friendly interface and perhaps
 this is the more appropriate place for this change?  As I stated before,
 horizon prevents /32, but allows /31.

 I'm no UI guy, so maybe the best course of action is to abandon my change in
 gerrit and move the launchpad bug back to unassigned and see if someone with
 horizon experience wants to pick this up.  What do others think about this?

 Thanks again for your participation in this discussion, Paul.  It's been
 very enlightening to me.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill dnsmasq and ns-meta softly]

2014-01-28 Thread Rochelle.RochelleGrober
+1 anyway.  Sometimes I feel like we've lost the humor in our work, but at 
least I see it on IRC and now here.

Thanks for the humanity check!

--Rocky

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Tuesday, January 28, 2014 1:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fwd: Change in openstack/neutron[master]: Kill 
dnsmasq and ns-meta softly]

It might be creative, but it's a shame that it did not serve the purpose.
At least it confirmed the kernel bug was related to process termination in 
network namespaces but was not due to SIGKILL exclusively, as it occurred with 
SIGTERM as well.

On the bright side, Mark has now pushed another patch which greatly reduces the 
occurrence of bug 1273386 [1].
We are also working with the ubuntu kernel team to assess whether a kernel fix 
is needed.

Salvatore

[1] https://review.openstack.org/#/c/69653/

On 28 January 2014 18:13, Edgar Magana 
emag...@plumgrid.commailto:emag...@plumgrid.com wrote:
No doubt about it!!!

Edgar

On 1/28/14 8:45 AM, Jay Pipes jaypi...@gmail.commailto:jaypi...@gmail.com 
wrote:

This might just be the most creative commit message of the year.

-jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-28 Thread Donald Stufft

On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

 On Tue, Jan 28 2014, Doug Hellmann wrote:
 
 There are several reviews related to adding VMware interface code to the
 oslo-incubator so it can be shared among projects (start at
 https://review.openstack.org/#/c/65075/7 if you want to look at the code).
 
 I expect this code to be fairly stand-alone, so I wonder if we would be
 better off creating an oslo.vmware library from the beginning, instead of
 bringing it through the incubator.
 
 Thoughts?
 
 This sounds like a good idea, but it doesn't look OpenStack specific, so
 maybe building a non-oslo library would be better.
 
 Let's not zope it! :)

+1 on not making it an oslo library.

 
 -- 
 Julien Danjou
 # Free Software hacker # independent consultant
 # http://julien.danjou.info
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2014-01-28 Thread James Slagle
Sorry to revive an old thread, but I've updated these images based on
the icehouse-2 milestone.

I've updated the instructions with the new download links:
https://gist.github.com/slagle/981b279299e91ca91bd9

To reiterate, the point here is to give people an easier on-ramp to
getting a tripleo setup. This is especially important as more
developers start to get involved with Tuskar in particular. There is
definitely a lot of value in going through the whole devtest process
yourself, but it can be a bit daunting initially, and since this
eliminates the image building step with pre-built images, I think
there's less that can go wrong.

Given Clint's earlier feedback, I could see working the seed vm back
into these steps and getting the config drive setup into incubator so
that everyone that goes through devtest doesn't have to have a custom
seed. Then giving folks the option to use prebuilt vm images vs
building from scratch. Also, given the pending patches around an all
in one Overcloud, we could work the seed back into this, and still be
at just 3 vm's.

Any other feedback welcome.


On Tue, Dec 24, 2013 at 11:50 AM, James Slagle james.sla...@gmail.com wrote:
 I built some vm image files for testing with TripleO based off of the
 icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
 interested in giving them a try you can find a set of instructions and
 how to download the images at:

 https://gist.github.com/slagle/981b279299e91ca91bd9

 The steps are similar to the devtest process, but you use the prebuilt
 vm images for the undercloud and overcloud and don't need a seed vm.
 When the undercloud vm is started it uses the OpenStack Configuration
 Drive as a data source for cloud-init.  This eliminates some of the
 manual configuration that would otherwise be needed.  To that end, the
 steps currently use some different git repos for some of the tripleo
 tooling since not all of that functionality is upstream yet.  I can
 submit those upstream, but they didn't make a whole lot of sense
 without the background, so I wanted to provide that first.

 At the very least, this could be an easier way for developers to get
 setup with tripleo to do a test overcloud deployment to develop on
 things like Tuskar.


 --
 -- James Slagle
 --



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Adrian Otto
OpenStack Devs,

I'd like to introduce you to a team working on an interesting problem space. We 
would like to know what you think about Configuration Discovery. We plan to 
build a tool that aids in the process of automated configuration discovery.

At Rackspace we work with customers who have pre-existing infrastructure and 
software running in production. To help them with changes, migrations, or 
configuration management, we often have to determine what they have in-place 
already. We have some tools and knowledge in-house that we use now, but we 
would like to create something even better in the open that would integrate 
well with OpenStack ecosystem projects like Heat, Solum, Murano, etc…

The concept for the tool is simple. The inputs are facts we know about the 
current system (a username, API Key, URL, etc.) and the output is a set of 
structured configuration information. That information can be used for:

- Generating Heat Templates
- Solum Application creation/import
- Creation of Chef recipes/cookbooks, Puppet modules, Ansible playbooks, setup 
scripts, etc..
- Configuration analysis (compare this config with a catalog of best practices)
- Configuration monitoring (has the configuration changed?)
- Troubleshooting

We would like to hear from anyone interested in this problem domain, start a 
conversation on how this fits in the OpenStack ecosystem, and start exploring 
possible use cases that would be useful for users and operators of OpenStack 
clouds.
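
To make the inputs/outputs contract concrete, here is a sketch of what the
input facts might look like (the field names are illustrative assumptions,
not a defined Satori interface):

# Hypothetical input facts handed to a discovery run; none of these
# field names are Satori's actual schema.
facts = {
    'target': 'https://www.example.com',
    'username': 'demo',
    'api_key': 'REDACTED',
}
# A discovery run would return structured configuration data suitable
# for the uses listed above (Heat templates, cookbooks, and so on).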

For more information about the team, implementation, and prior art in this 
area, please see:

https://wiki.openstack.org/wiki/Satori

Thanks,

Adrian Otto 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-28 Thread Doug Hellmann
On Tue, Jan 28, 2014 at 5:06 PM, Donald Stufft don...@stufft.io wrote:


 On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info wrote:

  On Tue, Jan 28 2014, Doug Hellmann wrote:
 
  There are several reviews related to adding VMware interface code to the
  oslo-incubator so it can be shared among projects (start at
  https://review.openstack.org/#/c/65075/7 if you want to look at the
 code).
 
  I expect this code to be fairly stand-alone, so I wonder if we would be
  better off creating an oslo.vmware library from the beginning, instead
 of
  bringing it through the incubator.
 
  Thoughts?
 
  This sounds like a good idea, but it doesn't look OpenStack specific, so
  maybe building a non-oslo library would be better.
 
  Let's not zope it! :)

 +1 on not making it an oslo library.


Given the number of issues we've seen with stackforge libs in the gate,
I've changed my default stance on this point.

It's not clear from the code whether Vipin et al expect this library to be
useful for anyone not working with both OpenStack and VMware. Either way, I
anticipate that having the library under the symmetric gating rules, and
managed by one of the OpenStack teams (oslo, nova, cinder?) and VMware
contributors, should make life easier in the long run.

As far as the actual name goes, I'm not set on oslo.vmware; it was just a
convenient name for the conversation.

Doug




 
  --
  Julien Danjou
  # Free Software hacker # independent consultant
  # http://julien.danjou.info
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 -
 Donald Stufft
 PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372
 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] VMware tools in oslo-incubator or straight to oslo.vmware

2014-01-28 Thread Russell Bryant
On 01/28/2014 05:06 PM, Donald Stufft wrote:
 
 On Jan 28, 2014, at 5:01 PM, Julien Danjou jul...@danjou.info
 wrote:
 
 On Tue, Jan 28 2014, Doug Hellmann wrote:
 
 There are several reviews related to adding VMware interface
 code to the oslo-incubator so it can be shared among projects
 (start at https://review.openstack.org/#/c/65075/7 if you want
 to look at the code).
 
 I expect this code to be fairly stand-alone, so I wonder if we
 would be better off creating an oslo.vmware library from the
 beginning, instead of bringing it through the incubator.
 
 Thoughts?
 
 This sounds like a good idea, but it doesn't look OpenStack
 specific, so maybe building a non-oslo library would be better.
 
 Let's not zope it! :)
 
 +1 on not making it an oslo library.

Yep, I asked the same thing on IRC a while back.

If this is generally useful Python code for talking to VMware, then
I'd love to just see it as a Python library maintained by VMware,
instead of making it an OpenStack specific thing.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-28 Thread Robert Li (baoli)
Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Jay Pipes
On Tue, 2014-01-28 at 16:25 -0500, Brad Topol wrote:
 So we are starting to add more cloud audit (aka CADF) support to
 OpenStack.  We have support in Nova and infrastructure added to
 Ceilometer and I am starting to add this capability to keystone.  This
 work is based on sending events to ceilometer.  If this is related to
 the audit work below I would like to be included.

I don't believe Sean is talking about either notifications or
tenant-facing activity. He is talking about Python log message
consistency and cleanup across OpenStack projects.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-28 Thread Ben Nemec

On 2014-01-27 14:12, CARVER, PAUL wrote:

Jay Pipes wrote:


Have you ever tried using Google Translate for anything more than very
simple phrases?



The results can be... well, interesting ;) And given the amount of
technical terms used in these messages, I doubt GT or any automated
translating service would provide a whole lot of value...


Exactly what I wasn't suggesting and why I wasn't suggesting it. I meant
an OpenStack specific translation service taking advantage of the work
that the translators have already done and any work they do in the future.

I haven't looked at any of the current translation code in any OpenStack
project, but I presume there's basically a one to one mapping of English
messages to each other available language (maybe with rearrangement
of parameters to account for differences in grammar?)

I'd be surprised and impressed if the translators are applying some sort
of context sensitivity such that a particular English string could end up
getting translated to multiple different strings depending on something
that isn't captured in the English log message.

So basically instead of doing the search and replace of the static text
of each message before writing to the logfile, write the message to
the log in English and then have a separate process (I proposed web
based, but it could be as simple as a CLI script) to search and replace
the English with the desired target language after the fact.

If there's still a concern about ambiguity where you couldn't identify the
correct translation based only on knowing the original English static
text, then maybe it would be worth assigning unique ID numbers
to every translatable message so that it can be mapped uniquely
to the corresponding message in the target language.
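
(As a very rough sketch of that after-the-fact idea -- the catalog here is a
hand-written stand-in; a real tool would load the existing .po catalogs:)

import re
import sys

# English template (as a regex) -> translated template; illustrative
# content only, a real tool would load these from the translation catalogs.
CATALOG = {
    r"Instance (\S+) has been deleted": "Instance %s wurde geloescht",
}

def translate_line(line):
    for pattern, template in CATALOG.items():
        match = re.search(pattern, line)
        if match:
            return line.replace(match.group(0), template % match.groups())
    return line

for line in sys.stdin:
    sys.stdout.write(translate_line(line))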



I see a number of technical hurdles to implementing such a thing, but in 
theory I think it would be possible.  The big issue I see is that if I'm 
debugging a problem on a server I want to be tailing the logs live, not 
waiting for all the logging to be complete and then dumping them into a 
tool to make them readable to me and trying to figure out which entries 
correspond to the operation I'm interested in.  Maybe in the real world 
it wouldn't be as much of a problem as I think, but it's not a workflow 
I expect people to be happy with.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-28 Thread Edgar Magana
Well said, Carl!
I am sorry I did not get back to you on this topic before.
In general, and after thinking about it, it makes sense to leave it to the
cloud admin to handle the /32 cases, if any.
Edgar

On 1/28/14 1:46 PM, Carl Baldwin c...@ecbaldwin.net wrote:

I think I agree.  The new check isn't adding much value and we could
debate for a long time whether /30 is useful and should be disallowed
or not.  There are bigger fish to fry.

Carl

On Fri, Jan 24, 2014 at 10:43 AM, Paul Ward wpw...@us.ibm.com wrote:
 Given your obviously much more extensive understanding of networking
than
 mine, I'm starting to move over to the we shouldn't make this fix
camp.
 Mostly because of this:

 CARVER, PAUL pc2...@att.com wrote on 01/23/2014 08:57:10 PM:



 Putting a friendly helper in Horizon will help novice users and
 provide a good example to anyone who is developing an alternate UI
 to invoke the Neutron API. I'm not sure what the benefit is of
 putting code in the backend to disallow valid but silly subnet
 masks. I include /30, /31, AND /32 in the category of "silly" subnet
 masks to use on a broadcast medium. All three are entirely
 legitimate subnet masks, it's just that they're not useful for end
 host networks.

 My mindset has always been that we should programmatically prevent
things
 that are definitively wrong.  Of which, these netmasks apparently are
not.
 So it would seem we should leave neutron server code alone under the
 assumption that those using CLI to create networks *probably* know what
 they're doing.

 However, the UI is supposed to be the more friendly interface and
perhaps
 this is the more appropriate place for this change?  As I stated before,
 horizon prevents /32, but allows /31.

 I'm no UI guy, so maybe the best course of action is to abandon my
change in
 gerrit and move the launchpad bug back to unassigned and see if someone
with
 horizon experience wants to pick this up.  What do others think about
this?

 Thanks again for your participation in this discussion, Paul.  It's been
 very enlightening to me.
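
(For reference, a quick way to see why those masks are "silly" for end-host
networks -- a sketch using netaddr, which Neutron already depends on; the
helper name is ours, illustrative only:)

import netaddr

def usable_hosts(cidr):
    net = netaddr.IPNetwork(cidr)
    # by conventional IPv4 accounting, the network and broadcast
    # addresses are unusable for end hosts
    return max(net.size - 2, 0) if net.version == 4 else net.size

print(usable_hosts('10.0.0.0/24'))  # 254
print(usable_hosts('10.0.0.0/30'))  # 2
print(usable_hosts('10.0.0.0/32'))  # 0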


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Jay Pipes
On Tue, 2014-01-28 at 22:36 +, Adrian Otto wrote:
 OpenStack Devs,
 
 I'd like to introduce you to a team working on an interesting problem space. 
 We would like to know what you think about Configuration Discovery. We plan 
 to build a tool that aids in the process of automated configuration discovery.
 
 At Rackspace we work with customers who have pre-existing infrastructure and 
 software running in production. To help them with changes, migrations, or 
 configuration management, we often have to determine what they have in-place 
 already. We have some tools and knowledge in-house that we use now, but we 
 would like to create something even better in the open that would integrate 
 well with OpenStack ecosystem projects like Heat, Solum, Murano, etc…
 
 The concept for the tool is simple. The inputs are facts we know about the 
 current system (a username, API Key, URL, etc.) and the output is a set of 
 structured configuration information. That information can be used for:
 
 - Generating Heat Templates
 - Solum Application creation/import
 - Creation of Chef recipes/cookbooks, Puppet modules, Ansible playbooks, 
 setup scripts, etc..
 - Configuration analysis (compare this config with a catalog of best 
 practices)
 - Configuration monitoring (has the configuration changed?)
 - Troubleshooting
 
 We would like to hear from anyone interested in this problem domain, start a 
 conversation on how this fits in the OpenStack ecosystem, and start exploring 
 possible use cases that would be useful for users and operators of OpenStack 
 clouds.
 
 For more information about the team, implementation, and prior art in this 
 area, please see:
 
 https://wiki.openstack.org/wiki/Satori

In the Related Work section, you list:

Devstructure Blueprint (https://github.com/devstructure/blueprint)

That is precisely what I would recommend. I don't see value in having a
separate OpenStack project that does this.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Russell Bryant
On 01/28/2014 06:02 PM, Jay Pipes wrote:
 On Tue, 2014-01-28 at 22:36 +, Adrian Otto wrote:
 OpenStack Devs,

 I'd like to introduce you to a team working on an interesting problem space. 
 We would like to know what you think about Configuration Discovery. We plan 
 to build a tool that aids in the process of automated configuration 
 discovery.

 At Rackspace we work with customers who have pre-existing infrastructure and 
 software running in production. To help them with changes, migrations, or 
 configuration management, we often have to determine what they have in-place 
 already. We have some tools and knowledge in-house that we use now, but we 
 would like to create something even better in the open that would integrate 
 well with OpenStack ecosystem projects like Heat, Solum, Murano, etc…

 The concept for the tool is simple. The inputs are facts we know about the 
 current system (a username, API Key, URL, etc.) and the output is a set of 
 structured configuration information. That information can be used for:

 - Generating Heat Templates
 - Solum Application creation/import
 - Creation of Chef recipes/cookbooks, Puppet modules, Ansible playbooks, 
 setup scripts, etc..
 - Configuration analysis (compare this config with a catalog of best 
 practices)
 - Configuration monitoring (has the configuration changed?)
 - Troubleshooting

 We would like to hear from anyone interested in this problem domain, start a 
 conversation on how this fits in the OpenStack ecosystem, and start 
 exploring possible use cases that would be useful for users and operators of 
 OpenStack clouds.

 For more information about the team, implementation, and prior art in this 
 area, please see:

 https://wiki.openstack.org/wiki/Satori
 
 In the Related Work section, you list:
 
 Devstructure Blueprint (https://github.com/devstructure/blueprint)
 
 That is precisely what I would recommend. I don't see value in having a
 separate OpenStack project that does this.

Sometimes the right answer is to join in and help extend an existing
project.  :-)

If that's *not* the answer, there should be a compelling reason why.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Sean Dague
On 01/28/2014 05:56 PM, Jay Pipes wrote:
 On Tue, 2014-01-28 at 16:25 -0500, Brad Topol wrote:
 So we are starting to add more cloud audit (aka CADF) support to
 OpenStack.  We have support in Nova and infrastructure added to
 Ceilometer and I am starting to add this capability to keystone.  This
 work is based on sending events to ceilometer.  If this is related to
 the audit work below I would like to be included.
 
 I don't believe Sean is talking about either notifications or
 tenant-facing activity. He is talking about Python log message
 consistency and cleanup across OpenStack projects.

Correct, this is just about the per service logging, which might be to
files, syslog, or other logging services.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Sean Dague
On 01/28/2014 06:02 PM, Jay Pipes wrote:
 On Tue, 2014-01-28 at 22:36 +, Adrian Otto wrote:
 OpenStack Devs,

 I'd like to introduce you to a team working on an interesting problem space. 
 We would like to know what you think about Configuration Discovery. We plan 
 to build a tool that aids in the process of automated configuration 
 discovery.

 At Rackspace we work with customers who have pre-existing infrastructure and 
 software running in production. To help them with changes, migrations, or 
 configuration management, we often have to determine what they have in-place 
 already. We have some tools and knowledge in-house that we use now, but we 
 would like to create something even better in the open that would integrate 
 well with OpenStack ecosystem projects like Heat, Solum, Murano, etc…

 The concept for the tool is simple. The inputs are facts we know about the 
 current system (a username, API Key, URL, etc.) and the output is a set of 
 structured configuration information. That information can be used for:

 - Generating Heat Templates
 - Solum Application creation/import
 - Creation of Chef recipes/cookbooks, Puppet modules, Ansible playbooks, 
 setup scripts, etc..
 - Configuration analysis (compare this config with a catalog of best 
 practices)
 - Configuration monitoring (has the configuration changed?)
 - Troubleshooting

 We would like to hear from anyone interested in this problem domain, start a 
 conversation on how this fits in the OpenStack ecosystem, and start 
 exploring possible use cases that would be useful for users and operators of 
 OpenStack clouds.

 For more information about the team, implementation, and prior art in this 
 area, please see:

 https://wiki.openstack.org/wiki/Satori
 
 In the Related Work section, you list:
 
 Devstructure Blueprint (https://github.com/devstructure/blueprint)
 
 That is precisely what I would recommend. I don't see value in having a
 separate OpenStack project that does this.

ACK.

Also, thanks for bringing attention to such a cool project. :)

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Yang Shuo (Shuo)
Adrian,

Looks like an interesting idea. Would you envision Satori primarily as a
migration tool? Let's use the following scenarios as examples to explore the
scope in your mind.

Server1 is a physical server running a mail server service, and we would like 
to migrate it onto a virtual server provisioned by our OpenStack cluster. In 
this case, would Satori be helpful?

Server2 is a VM that I manually installed after OpenStack provided me a VM 
with a basic Ubuntu 12.10 image (as I manually did a lot of things and did 
not follow the Infrastructure as code philosophy, I do not know where I am 
now), and I want Satori to examine the system and create a cookbook or a heat 
template for me. Is this Satori's primary target case?

Server3 is an EC2 instance, and I want to migrate that instance to my OpenStack 
cluster. In this case, would Satori be helpful?


BTW, what does "Configuration monitoring" mean? Can you elaborate? 

Love to hear more about your thoughts.

Thanks,
Shuo

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: Tuesday, January 28, 2014 2:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Configuration Discovery - Satori

OpenStack Devs,

I'd like to introduce you to a team working on an interesting problem space. We 
would like to know what you think about Configuration Discovery. We plan to 
build a tool that aids in the process of automated configuration discovery.

At Rackspace we work with customers who have pre-existing infrastructure and 
software running in production. To help them with changes, migrations, or 
configuration management, we often have to determine what they have in-place 
already. We have some tools and knowledge in-house that we use now, but we 
would like to create something even better in the open that would integrate 
well with OpenStack ecosystem projects like Heat, Solum, Murano, etc...

The concept for the tool is simple. The inputs are facts we know about the 
current system (a username, API Key, URL, etc.) and the output is a set of 
structured configuration information. That information can be used for:

- Generating Heat Templates
- Solum Application creation/import
- Creation of Chef recipes/cookbooks, Puppet modules, Ansible playbooks, setup 
scripts, etc..
- Configuration analysis (compare this config with a catalog of best practices)
- Configuration monitoring (has the configuration changed?)
- Troubleshooting

We would like to hear from anyone interested in this problem domain, start a 
conversation on how this fits in the OpenStack ecosystem, and start exploring 
possible use cases that would be useful for users and operators of OpenStack 
clouds.

For more information about the team, implementation, and prior art in this 
area, please see:

https://wiki.openstack.org/wiki/Satori

Thanks,

Adrian Otto 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Caleb Groom
On January 28, 2014 at 5:05:56 PM, Jay Pipes (jaypi...@gmail.com) wrote:
In the Related Work section, you list:

Devstructure Blueprint (https://github.com/devstructure/blueprint)

That is precisely what I would recommend. I don't see value in having a
separate OpenStack project that does this.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Hi Jay,

Devstructure Blueprint is scoped to gathering information about a single 
server. We listed it as related because it's a handy way to handle reverse 
engineering once you're logged into a server instance. However, Satori has 
greater aims such as discovering the topology of resources (load balancer, its 
configuration, nova instances behind the load balancer, connected cinder 
instances, etc). For each relevant server instance we could gather deep system 
knowledge by using the projects listed in the Related section.

Caleb



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Oslo Context and SecurityContext

2014-01-28 Thread Angus Salkeld

On 28/01/14 07:13 -0800, Georgy Okrokvertskhov wrote:

Hi,

From my experience, context is usually bigger than just storage for user
credentials and the specifics of a request. Context usually defines the area
within which the called method should act. Probably the class name RequestContext
is a bit confusing. The actual goal of the context should be defined by the
service design. If you have a lot of independent components, you will
probably need to pass a lot of parameters to specify the specifics of the work,
so it is just more convenient to have a dictionary-like object which carries
all the necessary contextual information. This context can be
used to pass information between different components of the service.


I think we should be using the nova style objects for passing data
between solum services (they can be serialized for rpc). But you hit
on a point - this context needs to be called something else, it is
not a RequestContext (we need the RequestContext regardless).
I'd also suggest we don't build it until we know we
need it (I am just suspicious as the other openstack services I
have worked on don't have such a thing). Normally we just pass
arguments to methods.

How about we keep things simple and don't get
into designing a Boeing; we can always add these things later if
they are really needed. I get the feeling we are being distracted from
our core problem of getting this service functional by nice-to-haves.

-Angus





On Mon, Jan 27, 2014 at 4:27 PM, Angus Salkeld
angus.salk...@rackspace.comwrote:


On 27/01/14 22:53 +, Adrian Otto wrote:


On Jan 27, 2014, at 2:39 PM, Paul Montgomery 
paul.montgom...@rackspace.com
wrote:

 Solum community,


I created several different approaches for community consideration
regarding Solum context, logging and data confidentiality.  Two of these
approaches are documented here:

https://wiki.openstack.org/wiki/Solum/Logging

A) Plain Oslo Log/Config/Context is in the Example of Oslo Log and Oslo
Context section.

B) A hybrid Oslo Log/Config/Context but SecurityContext inherits the
RequestContext class and adds some confidentiality functions is in the
Example of Oslo Log and Oslo Context Combined with SecurityContext
section.

None of this code is production ready or tested by any means.  Please
just
examine the general architecture before I polish too much.

I hope that this is enough information for us to agree on a path A or B.
I honestly am not tied to either path very tightly but it is time that we
reach a final decision on this topic IMO.

Thoughts?



I have a strong preference for using the SecurityContext approach. The
main reason for my preference is outlined in the Pro/Con sections of the
Wiki page. With the A approach, leakage of confidential information might
happen with *any* future addition of a logging call, a discipline which may
be forgotten, or overlooked during future code reviews. The B approach
handles the classification of data not when logging, but when placing the
data into the SecurityContext. This is much safer from a long term
maintenance perspective.



I think we should separate this out into:

1) we need to be security aware whenever we log information handed to
   us by the user. (I totally agree with this general statement)

2) should we log structured data, non structured data or use the
notification mechanism (which is structured)
   There have been some talks at summit about the potential merging of
   the logging and notification api, I honestly don't know what
   happened to that but have no problem with structured logging. We
   should use the notification system so that ceilometer can take
   advantage of the events.

3) should we use a RequestContext in the spirit of the oslo-incubator
  (and inherited from it too). OR one different from all other
  projects.

  IMHO we should just use oslo-incubator RequestContext. Remember the
  context is not a generic dumping ground for I want to log stuff so
  lets put it into the context. It is for user credentials and things
  directly associated with the request (like the request_id). I don't
  see why we need a generic dict style approach, this is more likely
  to result in programming error context.set_priv('userid', bla)
  instead of:
  context.set_priv('user_id', bla)

  I think my point is: We should very quickly zero in on the
  attributes we need in the context and they will seldom change.

  As far as security goes, Paul has shown a good example of how to
  change the logging_context_format_string to achieve structured and
  secure logging of the context. oslo log module does not log whatever
  is in the context but only what is configured in the solum.conf (via
  logging_context_format_string). So I don't believe that the
  new/different RequestContext provides any improved security.
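
(For concreteness, the fixed-attribute style I mean is roughly this -- modeled
on the oslo-incubator RequestContext; the exact attribute list is illustrative,
and logging_context_format_string then picks which of these reach the logs:)

class RequestContext(object):
    # fixed, well-known attributes rather than a generic dict store
    def __init__(self, auth_token=None, user_id=None, tenant_id=None,
                 is_admin=False, request_id=None):
        self.auth_token = auth_token
        self.user_id = user_id
        self.tenant_id = tenant_id
        self.is_admin = is_admin
        self.request_id = request_id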



-Angus





Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Proposed Logging Standards

2014-01-28 Thread Scott Devoid
 A big part of my interest here is to make INFO a useful informational
 level for operators. That means getting a bunch of messages out of it
 that don't belong.


+1 to that! How should I open / tag bugs for this?

We should be logging user / tenant on every wsgi request, so that should
 be parsable out of INFO. If not, we should figure out what is falling
 down there.


At the moment we're not automatically parsing logs (just collecting via
syslog and logstash).

Follow on question: do you primarily use the EC2 or OSAPI? As there are
 some current short comings on the EC2 logging, and figuring out
 normalizing those would be good as well.


Most of our users work through Horizon or the nova CLI. Good to know about
the EC2 issues though.


On Tue, Jan 28, 2014 at 1:46 PM, Sean Dague s...@dague.net wrote:

 On 01/28/2014 12:41 PM, Scott Devoid wrote:
  For the uses I've seen of it in the nova api code INFO would be
  perfectly fine in place of AUDIT.
 
 
  We've found the AUDIT logs in nova useful for tracking which user
  initiated a particular request (e.g. delete this instance). AUDIT had a
  much better signal to noise ratio than INFO or DEBUG. Although this
  seems to have changed since Essex. For example nova-compute spits out
  AUDIT nova.compute.resource_tracker messages every minute even if
  there are no changes :-/

 A big part of my interest here is to make INFO a useful informational
 level for operators. That means getting a bunch of messages out of it
 that don't belong.

 We should be logging user / tenant on every wsgi request, so that should
 be parsable out of INFO. If not, we should figure out what is falling
 down there.

 Follow on question: do you primarily use the EC2 or OSAPI? As there are
 some current short comings on the EC2 logging, and figuring out
 normalizing those would be good as well.

 -Sean

 
  ~ Scott
 
 
  On Tue, Jan 28, 2014 at 11:11 AM, Everett Toews
  everett.to...@rackspace.com mailto:everett.to...@rackspace.com
 wrote:
 
  Hi Sean,
 
  Could 1.1.1 Every Inbound WSGI request should be logged Exactly
  Once be used to track API call data in order to discover which API
  calls are being made most frequently?
 
  It certainly seems like it could but I want to confirm. I ask
  because this came up as B Get aggregate API call data from
  companies willing to share it. in the user survey discussion [1].
 
  Thanks,
  Everett
 
  [1]
 
 http://lists.openstack.org/pipermail/user-committee/2014-January/000214.html
 
 
  On Jan 27, 2014, at 7:07 AM, Sean Dague wrote:
 
   Back at the beginning of the cycle, I pushed for the idea of doing
  some
   log harmonization, so that the OpenStack logs, across services,
 made
   sense. I've pushed a proposed changes to Nova and Keystone over
  the past
   couple of days.
  
   This is going to be a long process, so right now I want to just
  focus on
   making INFO level sane, because as someone that spends a lot of
 time
   staring at logs in test failures, I can tell you it currently
 isn't.
  
   https://wiki.openstack.org/wiki/LoggingStandards is a few things
 I've
   written down so far, comments welcomed.
  
   We kind of need to solve this set of recommendations once and for
  all up
   front, because negotiating each change, with each project, isn't
 going
   to work (e.g - https://review.openstack.org/#/c/69218/)
  
   What I'd like to find out now:
  
   1) who's interested in this topic?
   2) who's interested in helping flesh out the guidelines for
  various log
   levels?
   3) who's interested in helping get these kinds of patches into
 various
   projects in OpenStack?
   4) which projects are interested in participating (i.e. interested
 in
   prioritizing landing these kinds of UX improvements)
  
   This is going to be progressive and iterative. And will require
  lots of
   folks involved.
  
 -Sean
  
   --
   Sean Dague
   Samsung Research America
   s...@dague.net mailto:s...@dague.net / sean.da...@samsung.com
  mailto:sean.da...@samsung.com
   http://dague.net
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  mailto:OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 --
 Sean Dague
 Samsung 

Re: [openstack-dev] [Nova] Dropping XML support in the v3 compute API

2014-01-28 Thread Christopher Yeoh
On Fri, Jan 24, 2014 at 8:28 AM, Russell Bryant rbry...@redhat.com wrote:

 Greetings,

 Recently Sean Dague started some threads [1][2] about the future of XML
 support in Nova's compute API.  Specifically, he proposed [3] that we
 drop XML support in the next major version of the API (v3).  I wanted to
 follow up on this to make the outcome clear.

 I feel that we should move forward with this proposal and drop XML
 support from the v3 compute API.  The ongoing cost in terms of
 development, maintenance, documentation, and verification has been quite
 high.  After talking to a number of people about this, I do not feel
 that keeping it provides enough value to justify the cost.

 Even though we may be dropping it now, I will not say that we will
 *never* support it in the future.  If there is enough interest (and work
 behind it) in the future, we could revisit a new implementation that is
 easier to support long term.  For now, we will stick to what we know
 works, and that is the JSON API.



There is now an etherpad here
https://etherpad.openstack.org/p/NovaRemoveXMLV3
to help coordinate removal of the XML code and avoid duplication of effort.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 plugin swallows mechanism driver exceptions

2014-01-28 Thread Paul Ward

FYI - I have pushed a change to gerrit for this:
https://review.openstack.org/#/c/69748/

I went the simple route of just including the last exception encountered.
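
Roughly, the shape of the change is this (a sketch against ml2/managers.py,
not the exact patch; the details= field is illustrative):

def _call_on_drivers(self, method_name, context):
    error = None
    for driver in self.ordered_mech_drivers:
        try:
            getattr(driver.obj, method_name)(context)
        except Exception as e:
            LOG.exception(
                _("Mechanism driver '%(name)s' failed in %(method)s"),
                {'name': driver.name, 'method': method_name})
            error = e  # remember the last failure instead of dropping it
    if error is not None:
        # surface the last exception's text rather than swallowing it
        raise ml2_exc.MechanismDriverError(method=method_name,
                                           details=str(error))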

All comments and reviews welcome!!



Andre Pech ap...@aristanetworks.com wrote on 01/24/2014 03:43:24 PM:

 From: Andre Pech ap...@aristanetworks.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 01/24/2014 03:48 PM
 Subject: Re: [openstack-dev] [neutron] ML2 plugin swallows mechanism
 driver exceptions

 Hey Paul,

 This is by design, and reraising a single MechanismDriverError was
 really to have a nice defined API for the MechanismManager class,
 and to avoid blanket try/except calls in the caller. But I do agree that
 it's really annoying to lose the information about the underlying
 exception. I like your idea of including the original exception text
 in the MechanismDriverError message, I think that'd help a lot.

 Andre


 On Fri, Jan 24, 2014 at 1:19 PM, Paul Ward wpw...@us.ibm.com wrote:
 In implementing a mechanism driver for ML2 today, I discovered that
 any exceptions thrown from your mechanism driver will get swallowed
 by the ML2 manager (https://github.com/openstack/neutron/blob/
 master/neutron/plugins/ml2/managers.py at line 164).

 Is this by design?  Sure, you can look at the logs, but it seems
 more user friendly to reraise the exception that got us here.  There
 could be multiple mechanism drivers being called in a chain, so
 changing this to reraise an exception that got us in trouble would
 really only be able to reraise the last exception encountered, but
 it seems that's better than none at all.  Or maybe even keep a list
 of exceptions raised and put all their texts into the
 MechanismDriverError message.

 Thoughts?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Google Sumer of Code 2014 - 2/14/14 deadline for orgs

2014-01-28 Thread Anne Gentle
Hi all,
The Google Summer of Code program has been around since 2005, and as an org
OpenStack has applied a few of the past years without getting accepted due
to needing more detail in the projects and dedicated mentors.

I'd like to see if we as a project want to try again this year to put in
project ideas on our wiki and put enough detail into them to get accepted.
I'd prefer not to serve as admin but am sending this email to recruit an
admin as well as developer mentors who want to write detailed project
descriptions.

Stefano and I discussed this on IRC today and felt a call to the community
was the best next step as both of us are otherwise occupied.

If you're interested, let Stef and me know, and read as much as you can on
the GSoC website to prepare.

http://www.google-melange.com/document/show/gsoc_program/google/gsoc2014/help_page

I highly encourage our community to participate and plan accordingly. We
still have past idea pages [1] [2] and an application template[3] so you
wouldn't have to start with a blank slate.

Thanks,
Anne

1 https://wiki.openstack.org/wiki/GSoC2013/Ideas
2 https://wiki.openstack.org/wiki/GSoC2012/Ideas
3 https://wiki.openstack.org/wiki/GSoC2012/ApplicationTemplate
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Caleb Groom
On January 28, 2014 at 5:18:42 PM, Yang Shuo (Shuo) (shuo.y...@huawei.com) 
wrote:

 Adrian,
  
 Looks like an interesting idea. Would you envision Satori primarily as a 
 migration tool? Let's use the following scenarios as examples to explore 
 the scope in your mind.

Migrations are not the primary use case that we are targeting.

Primarily, Satori aims to discover as much detail as possible for an existing 
configuration. Imagine walking into a client's office with the mission to 
optimize the performance of domain.com. One of the first steps is to understand 
how the current environment is configured. Satori would give you a quick 
snapshot of all of the related systems in use. This includes:
- DNS settings
- SSL certificate details
- Load balancer settings and status of the connection pool
- Nova instances in the connection pool and piles of relevant info (OS, 
software installed, active connections to other nova/trove instances, etc).

Imagine the client handing you a Visio diagram of the discovered configuration 
for the entire site that isn’t 6 months old and several major changes behind 
reality :)
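
To give a feel for it, the kind of structured snapshot we have in mind might
look like this (purely illustrative keys and values, not a settled schema):

# Hypothetical discovery output for the domain.com example above.
topology = {
    'domain': 'domain.com',
    'dns': {'A': ['203.0.113.10']},
    'ssl': {'issuer': 'Example CA', 'expires': '2015-06-01'},
    'load_balancer': {
        'algorithm': 'ROUND_ROBIN',
        'pool': ['10.0.0.4', '10.0.0.5'],  # the nova instances below
    },
    'instances': {
        '10.0.0.4': {'os': 'Ubuntu 12.04', 'software': ['nginx 1.4']},
        '10.0.0.5': {'os': 'Ubuntu 12.04', 'software': ['nginx 1.4']},
    },
}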

 Server1 is a physical server running a mail server service, and we would like 
 to migrate it onto a virtual server provisioned by our OpenStack cluster. In 
 this case, would Satori be helpful?

The hardest part of the migration is dealing with the existing data. Satori 
could inspect the server and provide details of how it is configured but it 
wouldn’t seek to promise you a one-click migration that includes moving all of 
your data.

 Server2 is a VM that I manually installed after OpenStack provided me a VM 
 with a basic Ubuntu 12.10 image (as I manually did a lot of things and did 
 not follow the Infrastructure as code philosophy, I do not know where I am 
 now), and I want Satori to examine the system and create a cookbook or a heat 
 template for me. Is this Satori's primary target case?

This is a valid use case that we would target. However, a single server probe 
is handled very well by Devstructure’s Blueprint already. We would provide 
value in more complicated scenarios where several OpenStack resources and 
services are in use. The generated Heat template would provision the servers 
and the associated resources (queue, cinder volumes, trove database, etc).

 Server3 is an EC2 instance, and I want to migrate that instance to my 
 OpenStack cluster. In this case, would Satori be helpful?

This feels very similar to your first use case.

 BTW, what does "Configuration monitoring" mean? Can you elaborate?

Configuration monitoring is the passive detection of changes between discovery 
attempts. Monitoring is probably the wrong word as it implies constant polling. 
We could store and diff the results of multiple discovery runs.

  
 Love to hear more about your thoughts.
  
 Thanks,
 Shuo
  

I hope this helps distinguish Satori from a single-server reverse engineering 
project like Blueprint.

Caleb

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Configuration Discovery - Satori

2014-01-28 Thread Jay Pipes
On Tue, 2014-01-28 at 17:20 -0600, Caleb Groom wrote:
 On January 28, 2014 at 5:05:56 PM, Jay Pipes (jaypi...@gmail.com)
 wrote:
  In the Related Work section, you list:
  
  Devstructure Blueprint (https://github.com/devstructure/blueprint)
  
  That is precisely what I would recommend. I don't see value in
  having a
  separate OpenStack project that does this.
  
  Best,
  -jay
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 Hi Jay,
 
 
 Devstructure Blueprint is scoped to gathering information about a
  single server. We listed it as related because it's a handy way to
 handle reverse engineering once you’re logged into a server instance.
 However, Satori has greater aims such as discovering the topology of
 resources (load balancer, its configuration, nova instances behind the
 load balancer, connected cinder instances, etc). For each relevant
 server instance we could gather deep system knowledge by using the
 projects listed in the Related section.

So why not improve Devstructure Blueprint and add in some plugins that,
given some OpenStack creds, query things like the Neutron and Nova API
endpoints for such information.

I don't think this needs to be a separate OpenStack service.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Ceilometer]bp:send-data-to-ceilometer

2014-01-28 Thread Haomeng, Wang
Hi,

I am working on this ironic bp -
https://blueprints.launchpad.net/ironic/+spec/send-data-to-ceilometer.
I have worked out the solutions below; can you help review them? Your
comments are welcome:

We will call the IPMI command/API to get sensor data (sensor name, sensor
current value, min/max value, status, etc.) and send it to the Ceilometer
collector via the notification bus (calling the rpc_notifier.notify()
API to send the IPMI data message) from an Ironic periodic task. I have
two proposed Ironic-Ceilometer integration solutions for the data to
be sent to Ceilometer:

Solution 1 - send the IPMI data to Ceilometer by IPMI sensor category
specifically; the meter names are very clear for Ceilometer, and they
are pre-defined already:

Common field:
timestamp
publisher_id
message_id
resource-id #the ironic node-uuid

Category:
FanSpeed
Voltage
Temperature

Meter Names:
fanspeed, fanspeed.min, fanspeed.max, fanspeed.status
voltage, voltage.min, voltage.max, voltage.status
temperature, temperature.min, temperature.max, temperature.status

A message example with one IPMI node's sensor data:

message = {
    'event_type': 'ipmidata',
    'timestamp': '2013-12-17 06:12:11.554607',
    'user_id': 'admin',
    'publisher_id': 'ipmidata-os26-control01.localdomain',
    'message_id': '3eca2746-9d81-42cd-b0b3-4bdec52e109x',
    'tenant_id': 'c1921aa2216846919269a17978408476',
    'instance_uuid': '96e11f69-f12a-485e-abfa-526cd04169c4',  # nova instance uuid
    'id': '1329998e8183419794507cd6f0cc121a',  # node's uuid
    'payload': {
        'fanspeed': {
            'FAN 1': {
                'current_value': '4652',
                'min_value': '4200',
                'max_value': '4693',
                'status': 'ok'
            },
            'FAN 2': {
                'current_value': '4322',
                'min_value': '4210',
                'max_value': '4593',
                'status': 'ok'
            },
        },
        'voltage': {
            'Vcore': {
                'current_value': '0.81',
                'min_value': '0.80',
                'max_value': '0.85',
                'status': 'ok'
            },
            '3.3VCC': {
                'current_value': '3.36',
                'min_value': '3.20',
                'max_value': '3.56',
                'status': 'ok'
            },
            ...
        }
    }
}

Solution 2 - send the IPMI data to Ceilometer at the common sensor
meter level; we have one common 'sensor' meter, so all the sensor
data will have a more detailed level to define the sensor name and
attributes - current/min/max/status values:

Common field:
timestamp
publisher_id
message_id

Common sensor meter name:
sensor

A message example with one IPMI node's sensor data:

message = {
    'event_type': 'ipmidata',
    'timestamp': '2013-12-17 06:12:11.554607',
    'user_id': 'admin',
    'publisher_id': 'ipmidata-os26-control01.localdomain',
    'message_id': '3eca2746-9d81-42cd-b0b3-4bdec52e109x',
    'tenant_id': 'c1921aa2216846919269a17978408476',
    'instance_uuid': '96e11f69-f12a-485e-abfa-526cd04169c4',  # nova instance uuid
    'id': '1329998e8183419794507cd6f0cc121a',  # node's uuid
    'payload': {
        'FAN 1': {
            'current_value': '4652',
            'min_value': '4200',
            'max_value': '4693',
            'status': 'ok'
        },
        'FAN 2': {
            'current_value': '4322',
            'min_value': '4210',
            'max_value': '4593',
            'status': 'ok'
        },
        'Vcore': {
            'current_value': '0.81',
            'min_value': '0.80',
            'max_value': '0.85',
            'status': 'ok'
        },
        '3.3VCC': {
            'current_value': '3.36',
            'min_value': '3.20',
            'max_value': '3.56',
            'status': 'ok'
        },
        ...
    }
}
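
As a rough sketch of the sending side (assuming the notifier module synced
from oslo-incubator; _collect_ipmi_sensors() and the periodic-task wiring
are placeholders, not final code):

from ironic.openstack.common.notifier import api as notifier_api

def send_sensor_data(context, node):
    # called from an Ironic periodic task; the payload is one of the
    # 'payload' dicts shown in the two solutions above
    payload = _collect_ipmi_sensors(node)  # hypothetical helper
    notifier_api.notify(context,
                        notifier_api.publisher_id('ipmidata'),
                        'ipmidata',         # event_type
                        notifier_api.INFO,  # priority
                        payload)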

There is an existing patch in Ceilometer, 'Add ipmi inspector'
(https://review.openstack.org/#/c/51828); it is Abandoned, but I think
we should have a similar patch/blueprint in Ceilometer to handle the IPMI
data messages sent from Ironic. Once the solution is confirmed and
finalized, I will work with the Ceilometer team to do the testing to
verify whether our solution works.


Thanks
Haomeng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][swift] Importing Launchpad Answers in Ask OpenStack

2014-01-28 Thread Stefano Maffulli
Hello folks

we're almost ready to import all questions and answers from LP Answers
into Ask OpenStack.  You can see the result of the import from Nova on
the staging server http://ask-staging.openstack.org/

There are some formatting issues for the imported questions and I'm
trying to evaluate how bad these are.  The questions I see are mostly
readable and definitely pop up in search results with their answers, so
they are valuable already as-is. Some parts, especially the logs, may
not look as good though. Fixing the parsers to get a better rendering
for all imported questions would take an extra 3-5 days of work (maybe
more) and I'm not sure it's worth it.

Please go ahead and browse the staging site and let me know what you think.

Cheers,
stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-28 Thread Chris Friesen

On 01/28/2014 10:55 AM, Jani, Nrupal wrote:


 While technically it is possible, we as a team can decide
 about the final recommendation :) Given that VFs are going to be used for
 the high-performance VMs, mixing VMs with virtio and VFs may not be a good
 option.  Initially we can use the PF interface for the management traffic
 and/or VF configuration!!


I would expect that it would be fairly common to want to dedicate a VF 
link for high-speed data plane and use a virtio link for control plane 
traffic, health checks, etc.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][swift] Importing Launchpad Answers in Ask OpenStack

2014-01-28 Thread Scott Devoid
Is it possible to include a link to the original LP Answers page as a
comment on the question? Or are the LP Answers sections getting wiped
completely after the move?
Also perhaps all imported questions should be tagged lp-answers or
something? This would help manual curators to vote and further clean up
questions.

Otherwise it looks quite good. Thanks for the work!

~ Scott


On Tue, Jan 28, 2014 at 6:38 PM, Stefano Maffulli stef...@openstack.orgwrote:

 Hello folks

 we're almost ready to import all questions and asnwers from LP Answers
 into Ask OpenStack.  You can see the result of the import from Nova on
 the staging server http://ask-staging.openstack.org/

 There are some formatting issues for the imported questions and I'm
 trying to evaluate how bad these are.  The questions I see are mostly
 readable and definitely pop up in search results, with their answers so
 they are valuable already as is. Some parts, especially the logs, may
 not look as good though. Fixing the parsers and get a better rendering
 for all imported questions would take an extra 3-5 days of work (maybe
 more) and I'm not sure it's worth it.

 Please go ahead and browse the staging site and let me know what you think.

 Cheers,
 stef

 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-28 Thread Ben Nemec
 

On 2014-01-27 11:42, Doug Hellmann wrote: 

 We have a blueprint open for separating translated log messages into 
 different domains so the translation team can prioritize them differently 
 (focusing on errors and warnings before debug messages, for example) [1]. 
 Some concerns were raised related to the review [2], and I would like to 
 address those in this thread and see if we can reach consensus about how to 
 proceed. 
 
 The implementation in [2] provides a set of new marker functions similar to 
 _(), one for each log level (we have _LE, _LW, _LI, _LD, etc.). These would be 
 used in conjunction with _(), and reserved for log messages. Exceptions, API 
 messages, and other user-facing messages all would still be marked for 
 translation with _() and would (I assume) receive the highest priority work 
 from the translation team. 
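 
 For illustration, usage would look something like this (the module path is 
 hypothetical, and _LE/_LI are the marker names as proposed in the review; 
 which catalog each call feeds is the point): 
 
 from myproj.openstack.common.gettextutils import _, _LE, _LI 
 
 LOG.info(_LI("Connecting to %s"), host)          # info-level log catalog 
 LOG.error(_LE("Failed to connect to %s"), host)  # error-level log catalog 
 raise ValueError(_("Invalid host %s") % host)    # user-facing: main catalog 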
 
 When the string extraction CI job is updated, we will have one main catalog 
 for each app or library, and additional catalogs for the log levels. Those 
 show up in transifex separately, but will be named in a way that they are 
 obviously related. Each translation team will be able to decide, based on the 
 requirements of their users, how to set priorities for translating the 
 different catalogs. 
 
 Existing strings being sent to the log and marked with _() will be removed 
 from the main catalog and moved to the appropriate log-level-specific catalog 
 when their marker function is changed. My understanding is that transifex is 
 smart enough to recognize the same string from more than one source, and to 
 suggest previous translations when it sees the same text. This should make it 
 easier for the translation teams to catch up by reusing the translations 
 they have already done, in the new catalogs. 
 
 One concern that was raised was the need to mark all of the log messages by 
 hand. I investigated using extraction patterns like LOG.debug( and 
 LOG.info(, but because of the way the translation actually works internally 
 we cannot do that. There are a few related reasons. 
 
 In other applications, the function _() translates a string at the point 
 where it is invoked, and returns a new string object. OpenStack has a 
 requirement that messages be translated multiple times, whether in the API or 
 the LOG (there is already support for logging in more than one language, to 
 different log files). This requirement means we delay the translation 
 operation until right before the string is output, at which time we know the 
 target language. We could update the log functions to create Message objects 
 dynamically, except... 
 
 Each app or library that uses the translation code will need its own domain 
 for the message catalogs. We get around that right now by not translating 
 many messages from the libraries, but that's obviously not what we want long 
 term (we at least want exceptions translated). If we had a special version of 
 a logger in oslo.log that knew how to create Message objects for the format 
 strings used in logging (the first argument to LOG.debug for example), it 
 would also have to know what translation domain to use so the proper catalog 
 could be loaded. The wrapper functions defined in the patch [2] include this 
 information, and can be updated to be application or library specific when 
 oslo.log eventually becomes its own library. 
 
 Further, as part of moving the logging code from oslo-incubator to oslo.log, 
 and making our logging something we can use from other OpenStack libraries, 
 we are trying to change the implementation of the logging code so it is no 
 longer necessary to create loggers with our special wrapper function. That 
 would mean that oslo.log will be a library for *configuring* logging, but the 
 actual log calls can be handled with Python's standard library, eliminating a 
 dependency between new libraries and oslo.log. (This is a longer, and 
 separate, discussion, but I mention it here as backround. We don't want to 
 change the API of the logger in oslo.log because we don't want to be using it 
 directly in the first place.) 
 
 Another concern raised was the use of a prefix _L for these functions, since 
 it ties the priority definitions to logs. I chose that prefix as an 
 explicit indicate that these *are* just for logs. I am not associating any 
 actual priority with them. The translators want us to move the log messages 
 out of the main catalog. Having them all in separate catalogs is a refinement 
 that gives them what they want -- some translators don't care about log 
 messages at all, some only care about errors, etc. We decided that the 
 translators should set priorities, and we would make that possible by 
 separating the catalogs into logical groups. Everything marked with _() will 
 still go into the main catalog, but beyond that it isn't up to the developers 
 to indicate priority for translations. 
 
 The alternative approach of using babel translator comments would, under 
 other 

Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2014-01-28 Thread Robert Collins
So, thoughts...

I do see this as useful, but I don't see an all-in-one overcloud as
useful for developers of tuskar (or pretty much anything). It's just
not realistic enough.

I'm pro having downloadable images, as long as we have rights to do that
for whatever OS we're based on. Ideally we'd have images for all the
OSes we support (except those with restrictions like RHEL and SLES).

Your instructions at the moment need to be refactored to support
devtest_testenv and other recent improvements :)

BTW the MTU note you have will break folks' actual environments unless
they have jumbo frames on everything - I *really wouldn't do that* -
instead work on bug https://bugs.launchpad.net/neutron/+bug/1270646

-Rob

On 29 January 2014 11:33, James Slagle james.sla...@gmail.com wrote:
 Sorry to revive an old thread, but I've updated these images based on
 the icehouse-2 milestone.

 I've updated the instructions with the new download links:
 https://gist.github.com/slagle/981b279299e91ca91bd9

 To reiterate, the point here is to give people an easier on-ramp to
 getting a tripleo setup. This is especially important as more
 developers start to get involved with Tuskar in particular. There is
 definitely a lot of value in going through the whole devtest process
 yourself, but it can be a bit daunting initially, and since this
 eliminates the image building step with pre-built images, I think
 there's less that can go wrong.

 Given Clint's earlier feedback, I could see working the seed vm back
 into these steps and getting the config drive setup into incubator so
 that everyone that goes through devtest doesn't have to have a custom
 seed. Then giving folks the option to use prebuilt vm images vs
 building from scratch. Also, given the pending patches around an
 all-in-one Overcloud, we could work the seed back into this, and still be
 at just 3 vm's.

 Any other feedback welcome.


 On Tue, Dec 24, 2013 at 11:50 AM, James Slagle james.sla...@gmail.com wrote:
 I built some vm image files for testing with TripleO based off of the
 icehouse-1 milestone tarballs for Fedora and Ubuntu.  If folks are
 interested in giving them a try you can find a set of instructions and
 how to download the images at:

 https://gist.github.com/slagle/981b279299e91ca91bd9

 The steps are similar to the devtest process, but you use the prebuilt
 vm images for the undercloud and overcloud and don't need a seed vm.
 When the undercloud vm is started it uses the OpenStack Configuration
 Drive as a data source for cloud-init.  This eliminates some of the
 manual configuration that would otherwise be needed.  To that end, the
 steps currently use some different git repos for some of the tripleo
 tooling since not all of that functionality is upstream yet.  I can
 submit those upstream, but they didn't make a whole lot of sense
 without the background, so I wanted to provide that first.
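
 (As a rough sketch, building such a config drive tree by hand -- the
 layout is the standard OpenStack one cloud-init understands, the
 metadata values here are placeholders:)

 import json
 import os

 root = '/tmp/configdrive'
 os.makedirs(os.path.join(root, 'openstack', 'latest'))

 # Minimal metadata; cloud-init reads these well-known paths.
 with open(os.path.join(root, 'openstack', 'latest', 'meta_data.json'), 'w') as f:
     json.dump({'uuid': 'undercloud', 'hostname': 'undercloud'}, f)

 with open(os.path.join(root, 'openstack', 'latest', 'user_data'), 'w') as f:
     f.write('#cloud-config\nhostname: undercloud\n')

 # The tree then gets packed into an ISO with volume label 'config-2',
 # the label cloud-init's ConfigDrive datasource looks for.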

 At the very least, this could be an easier way for developers to get
 setup with tripleo to do a test overcloud deployment to develop on
 things like Tuskar.


 --
 -- James Slagle
 --



 --
 -- James Slagle
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] extending keystone identity

2014-01-28 Thread Dolph Mathews
On Tue, Jan 28, 2014 at 12:54 PM, Simon Perfer simon.per...@hotmail.com wrote:

 Thanks again, Dolph.

 First, is there some good documentation on how to write a custom driver?
 I'm wondering specifically about how a keystone user-list is mapped to a
 specific function in identity/backend/mydriver.py.


I believe it's calling list_users() in your implementation of the Driver
interface (or raising Not Implemented from the Driver abstract base class
itself).


 I suppose this mapping is why I was getting the 500 error about the action
 not being implemented.


(501 Not Implemented - 500 is for unhandled exceptions)



 Secondly, before poking around with writing a custom driver, I decided
 to simply inherit ldap.Identity, as follows:

 class Identity(ldap.Identity):
     def __init__(self):
         super(Identity, self).__init__()
         LOG.debug('My authentication module loaded')

     def authenticate(self, user_id, password):
         LOG.debug('in auth function')

The basic structure of that looks good to me.

    def __init__(self, *args, **kwargs):
        super(Identity, self).__init__(*args, **kwargs)


 When I get a list of users, I never get the debug output.

What debug output are you expecting? The above code snippet doesn't
override list_users(), so I wouldn't expect any output, except what
ldap.Identity already provides.
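
If you want a debug line for user-list, override list_users() as well --
roughly like this (using *args/**kwargs since the exact signature varies
between releases):

class Identity(ldap.Identity):
    def list_users(self, *args, **kwargs):
        LOG.debug('in list_users')
        return super(Identity, self).list_users(*args, **kwargs)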

 Further, I removed the authenticate method from the Identity class in
 ldap.py and list-users STILL worked.

Unsure how this is possible.

 It seems we're never hitting the authenticate method, which is why
 overriding it in my custom driver doesn't make much of a difference in
 reaching my goal for local users.

Correct - list_users() shouldn't require authenticate() ... or vice versa.


 Is there another method I'm supposed to be overriding?

Not if you only want to change the behavior of authentication. list_users()
should only be called by the administrative API.


 I appreciate the help -- I know these are likely silly questions to
 seasoned keystone developers.



 --
 From: dolph.math...@gmail.com
 Date: Mon, 27 Jan 2014 22:35:18 -0600

 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] extending keystone identity

 From your original email, it sounds like you want to extend the existing
 LDAP identity driver implementation, rather than writing a custom driver
 from scratch, which is what you've written. The TemplatedCatalog driver
 sort of follows that pattern with the KVS catalog driver, although it's not
 a spectacular example.


 On Mon, Jan 27, 2014 at 9:11 PM, Simon Perfer simon.per...@hotmail.com wrote:

 I dug a bit more and found this in the logs:

 (keystone.common.wsgi): 2014-01-27 19:07:13,851 WARNING The action you
 have requested has not been implemented.


 Despite basing my (super simple) code on the SQL or LDAP backends, I must
 be doing something wrong.


 -- I've placed my backend code in 
 /usr/share/pyshared/keystone/identity/backends/nicira.py
 or /usr/share/pyshared/keystone/common/nicira.py


 -- I DO see the 'My authentication module loaded' message in the log


 I would appreciate any help in figuring out what I'm missing. Thanks!



 --
 From: simon.per...@hotmail.com
 To: openstack-dev@lists.openstack.org
 Date: Mon, 27 Jan 2014 21:58:43 -0500

 Subject: Re: [openstack-dev] extending keystone identity

 Dolph, I appreciate the response and pointing me in the right direction.

 Here's what I have so far:

 imports here
 CONF = config.CONF
 LOG = logging.getLogger(__name__)


 class Identity(identity.Driver):
     def __init__(self):
         super(Identity, self).__init__()
         LOG.debug('My authentication module loaded')

     def authenticate(self, user_id, password, domain_scope=None):
         LOG.debug('in authenticate method')


 When I request a user-list via the python-keystoneclient, we never make it
 into the authenticate method (as is evident by the missing debug log).


 Any thoughts on why I'm not hitting this method?



 --
 From: dolph.math...@gmail.com
 Date: Mon, 27 Jan 2014 18:14:50 -0600
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] extending keystone identity

 _check_password() is a private/internal API, so we make no guarantees
 about its stability. Instead, override the public authenticate() method
 with something like this:

 def authenticate(self, user_id, password, domain_scope=None):
     if user_id in SPECIAL_LIST_OF_USERS:
         # compare against value from keystone.conf
         pass
     else:
         return super(CustomIdentityDriver, self).authenticate(
             user_id, password, domain_scope)
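
 Filling in that comparison might look like the following -- the config
 option is made up (you'd register it in your driver module), and check
 how your keystone release expects auth failures to be signalled:

     if user_id in SPECIAL_LIST_OF_USERS:
         # hypothetical option registered by the custom driver
         if password != CONF.custom_auth.special_password:
             raise AssertionError('Invalid user / password')
         return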

 On Mon, Jan 27, 2014 at 3:27 PM, Simon Perfer simon.per...@hotmail.com wrote:

 I'm looking to create a simple Identity driver that will look at
 usernames. A small number of specific users should be authenticated by
 looking at a hard-coded password in keystone.conf, while any 

[openstack-dev] [Barbican] New developer is coming

2014-01-28 Thread Александра Безбородова
Hi all,
I want to participate in the Barbican project. I'm interested in this blueprint:
https://blueprints.launchpad.net/barbican/+spec/support-rsa-key-store-generation
Who can answer some questions about it?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

