[openstack-dev] [Nova] Minesweeper update

2014-01-29 Thread Gary Kotton
Hi,
At the moment we have a terribly long queue for minesweeper. We are able to 
reduce this considerably by running tempest tests in parallel. We have 
encountered 2 race conditions when we do this and it would be very helpful if 
we can get the following patches in – they will help with the speed of the 
minesweeper validations.
- https://review.openstack.org/#/c/65306/ - VMware: fix race for datastore 
directory existence
- https://review.openstack.org/#/c/69622/ - VMware: prevent race for vmdk 
deletion
These have both been validated by minesweeper and show that they address the 
problems.
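
For context, the general shape of a fix for this kind of create/create race (a
plain-filesystem sketch only, not the datastore code in the patches above) is to
treat "directory already exists" as success rather than an error:

    import errno
    import os

    def ensure_dir(path):
        # Two concurrent requests may both see the directory as missing and
        # both try to create it; the loser of the race gets EEXIST, which is
        # safe to ignore because the directory exists either way.
        try:
            os.makedirs(path)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise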
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread Florent Flament
Hi Vishvananda,

I would be interested in such a working group.
Can you please confirm the meeting hour for this Friday?
I've seen 1600 UTC in your email and 2100 UTC in the wiki ( 
https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting ). 
As I'm in Europe I'd prefer 1600 UTC.

Florent Flament

- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, January 28, 2014 7:35:15 PM
Subject: [openstack-dev] Hierarchicical Multitenancy Discussion

Hi Everyone,

I apologize for the obtuse title, but there isn't a better succinct term to 
describe what is needed. OpenStack has no support for multiple owners of 
objects. This means that a variety of private cloud use cases are simply not 
supported. Specifically, objects in the system can only be managed on the 
tenant level or globally.

The key use case here is to delegate administration rights for a group of 
tenants to a specific user/role. There is something in Keystone called a 
“domain” which supports part of this functionality, but without support from 
all of the projects, this concept is pretty useless.

In IRC today I had a brief discussion about how we could address this. I have 
put some details and a straw man up here:

https://wiki.openstack.org/wiki/HierarchicalMultitenancy

I would like to discuss this strawman and organize a group of people to get 
actual work done by having an irc meeting this Friday at 1600UTC. I know this 
time is probably a bit tough for Europe, so if we decide we need a regular 
meeting to discuss progress then we can vote on a better time for this meeting.

https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting

Please note that this is going to be an active team that produces code. We will 
*NOT* spend a lot of time debating approaches, and instead focus on making 
something that works and learning as we go. The output of this team will be a 
MultiTenant devstack install that actually works, so that we can ensure the 
features we are adding to each project work together.

Vish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Launchpad and autoresponders

2014-01-29 Thread Julien Danjou
Hi fellow developers,

Once again this morning I received a mail from Launchpad for a bug on
Ceilometer where the comment was:

  I will be out of the office starting 01/28/2014 and will not return
  until 02/10/2014.

So could you *PLEASE* stop enabling your autoresponders on Launchpad?

Launchpad apparently has no smart tool to filter that out, and this kind
of tool spams us. I already have more than enough mail; I don't need to hear
several times a day that you're on vacation, thank you.

FTR, an example of the phenomenon we already had on Ceilometer:

  https://answers.launchpad.net/launchpad/+question/236822

Cheers,
-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-29 Thread Day, Phil

 -Original Message-
 From: Justin Santa Barbara [mailto:jus...@fathomdb.com]
 Sent: 28 January 2014 20:17
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
 through metadata service
 
 Thanks John - combining with the existing effort seems like the right thing to
 do (I've reached out to Claxton to coordinate).  Great to see that the larger
 issues around quotas / write-once have already been agreed.
 
 So I propose that sharing will work in the same way, but some values are
 visible across all instances in the project.  I do not think it would be
 appropriate for all entries to be shared this way.  A few
 options:
 
 1) A separate endpoint for shared values
 2) Keys are shared iff  e.g. they start with a prefix, like 'peers_XXX'
 3) Keys are set the same way, but a 'shared' parameter can be passed, either
 as a query parameter or in the JSON.
 
 I like option #3 the best, but feedback is welcome.
 
 I think I will have to store the value using a system_metadata entry per
 shared key.  I think this avoids issues with concurrent writes, and also makes
 it easier to have more advanced sharing policies (e.g.
 when we have hierarchical projects)
 
 Thank you to everyone for helping me get to what IMHO is a much better
 solution than the one I started with!
 
 Justin
 
 
I think #1 or #3 would be fine.   I don't really like #2 - doing this kind of 
thing through naming conventions always leads to problems IMO.

Phil
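
As a rough illustration of option #3 only (the endpoint, field name and flag
below are assumptions for the sake of example, not an agreed API), the request
might look something like:

    POST /v2/{tenant_id}/servers/{server_id}/metadata
    {
        "metadata": {"cluster_role": "seed"},
        "shared": true
    }

with unshared keys behaving exactly as metadata does today, and shared keys
becoming readable by the other instances in the project via the metadata
service.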



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-29 Thread Richard W.M. Jones
On Mon, Jan 27, 2014 at 05:58:20PM +, Daniel P. Berrange wrote:
 On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
  We have a blueprint open for separating translated log messages into
  different domains so the translation team can prioritize them differently
  (focusing on errors and warnings before debug messages, for example) [1].
 
  Feedback?
 
  [1]
  https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
 
 IMHO we've created ourselves a problem we don't need to have in the first
 place by trying to translate every single log message. It causes pain for
 developers & vendors because debug logs from users can be in any language,
 which the person receiving will often not be able to understand. It creates
 pain for translators by giving them an insane amount of work to do, which
 never ends since log message text is changed so often. Now we're creating
 yet more pain & complexity by trying to produce multiple log domains to solve
 a problem of having so many messages to translate. I accept that some people will
 like translated log messages, but I don't think this is a net win when you
 look at the overall burden they're imposing.

Also it impedes using search engines to look up the causes
of error messages.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-29 Thread Day, Phil
 -Original Message-
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
 Sent: 29 January 2014 03:40
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
 through metadata service
 
 
 On Jan 28, 2014, at 12:17 PM, Justin Santa Barbara jus...@fathomdb.com
 wrote:
 
  Thanks John - combining with the existing effort seems like the right
  thing to do (I've reached out to Claxton to coordinate).  Great to see
  that the larger issues around quotas / write-once have already been
  agreed.
 
  So I propose that sharing will work in the same way, but some values
  are visible across all instances in the project.  I do not think it
  would be appropriate for all entries to be shared this way.  A few
  options:
 
  1) A separate endpoint for shared values
  2) Keys are shared iff  e.g. they start with a prefix, like 'peers_XXX'
  3) Keys are set the same way, but a 'shared' parameter can be passed,
  either as a query parameter or in the JSON.
 
  I like option #3 the best, but feedback is welcome.
 
  I think I will have to store the value using a system_metadata entry
  per shared key.  I think this avoids issues with concurrent writes,
  and also makes it easier to have more advanced sharing policies (e.g.
  when we have hierarchical projects)
 
  Thank you to everyone for helping me get to what IMHO is a much better
  solution than the one I started with!
 
  Justin
 
 I am -1 on the post data. I think we should avoid using the metadata service
 as a cheap queue for communicating across vms and this moves strongly in
 that direction.
 
 I am +1 on providing a list of ip addresses in the current security group(s) 
 via
 metadata. I like limiting by security group instead of project because this
 could prevent the 1000 instance case where people have large shared
 tenants and it also provides a single tenant a way to have multiple
 autodiscoverd services. Also the security group info is something that
 neutron has access to so the neutron proxy should be able to generate the
 necessary info if neutron is in use.

If the visibility is going to be controlled by security group membership, then
security groups will have to be extended to have a "share metadata" attribute.
It's not valid to assume that instances in the same security group should be
able to see information about each other.

The fundamental problem I see here is that the user who has access to the
GuestOS, and who therefore has access to the metadata, is not always the same
as the owner of the VM.

A PaaS service that runs multiple VMs in the same tenant, and makes those
individual VMs available to separate users, needs to be able to prevent those
users from discovering the other VMs in the same tenant. Those VMs are normally
in the same SG, as they have common inbound and outbound rules - but access
within the group is disabled.

The other concern that I have about bounding the scope with security groups is
that it's quite possible that the VMs that want to discover each other could be
in different security groups. That would seem to lead to folks having to create
a separate SG (maybe with no rules) just to scope discoverability.

It kind of feels like we're in danger of overloading the role of security
groups here in the same way that we want to avoid overloading the scope of the
metadata service - although I can see that a security group is closer in
concept to the kind of relationship between VMs that we're trying to express.

 
 Just as an interesting side note, we put this vm list in way back in the NASA
 days as an easy way to get mpi clusters running. In this case we grouped the
 instances by the key_name used to launch the instance instead of security
 group.
 I don't think it occurred to us to use security groups at the time.  Note we
 also provided the number of cores, but this was for convienience because
 the mpi implementation didn't support discovering number of cores. Code
 below.
 
 Vish
 
 $ git show 2cf40bb3
 commit 2cf40bb3b21d33f4025f80d175a4c2ec7a2f8414
 Author: Vishvananda Ishaya vishvana...@yahoo.com
 Date:   Thu Jun 24 04:11:54 2010 +0100
 
 Adding mpi data
 
 diff --git a/nova/endpoint/cloud.py b/nova/endpoint/cloud.py index
 8046d42..74da0ee 100644
 --- a/nova/endpoint/cloud.py
 +++ b/nova/endpoint/cloud.py
 @@ -95,8 +95,21 @@ class CloudController(object):
  def get_instance_by_ip(self, ip):
  return self.instdir.by_ip(ip)
 
 +def _get_mpi_data(self, project_id):
 +result = {}
 +for node_name, node in self.instances.iteritems():
 +for instance in node.values():
 +if instance['project_id'] == project_id:
 +line = '%s slots=%d' % (instance['private_dns_name'],
 instance.get('vcpus', 0))
 +if instance['key_name'] in result:
 +result[instance['key_name']].append(line)
 +   

Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Alexander Ignatov
Thank you for bringing this up, Trevor.

EDP gets more diverse and it's time to change its model.
I totally agree with your proposal, but one minor comment.
Instead of the savanna. prefix in job_configs, wouldn't it be better to make it
edp.? I think savanna. is too broad a word for this.

And one more bureaucratic thing... I see you already started implementing it 
[1], 
and it is named and tracked as the new EDP workflow [2]. I think a new blueprint 
should be 
created for this feature to track all code changes as well as docs updates. 
By docs I mean the public Savanna docs about EDP, the REST API docs and samples.

[1] https://review.openstack.org/#/c/69712
[2] 
https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce

Regards,
Alexander Ignatov



On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:

 Hello all,
 
 In our first pass at EDP, the model for job settings was very consistent
 across all of our job types. The execution-time settings fit into this
 (superset) structure:
 
 job_configs = {'configs': {}, # config settings for oozie and hadoop
  'params': {},  # substitution values for Pig/Hive
  'args': []}# script args (Pig and Java actions)
 
 But we have some things that don't fit (and probably more in the
 future):
 
 1) Java jobs have 'main_class' and 'java_opts' settings
   Currently these are handled as additional fields added to the
 structure above.  These were the first to diverge.
 
 2) Streaming MapReduce (anticipated) requires mapper and reducer
 settings (different than the mapred..class settings for
 non-streaming MapReduce)
 
 Problems caused by adding fields
 
 The job_configs structure above is stored in the database. Each time we
 add a field to the structure above at the level of configs, params, and
 args, we force a change to the database tables, a migration script and a
 change to the JSON validation for the REST api.
 
 We also cause a change for python-savannaclient and potentially other
 clients.
 
 This kind of change seems bad.
 
 Proposal: Borrow a page from Oozie and add savanna. configs
 -
 I would like to fit divergent job settings into the structure we already
 have.  One way to do this is to leverage the 'configs' dictionary.  This
 dictionary primarily contains settings for hadoop, but there are a
 number of oozie.xxx settings that are passed to oozie as configs or
 set by oozie for the benefit of running apps.
 
 What if we allow savanna. settings to be added to configs?  If we do
 that, any and all special configuration settings for specific job types
 or subtypes can be handled with no database changes and no api changes.
 
 Downside
 
 Currently, all 'configs' are rendered in the generated oozie workflow.
 The savanna. settings would be stripped out and processed by Savanna,
 thereby changing that behavior a bit (maybe not a big deal)
 
 We would also be mixing savanna. configs with config_hints for jobs,
 so users would potentially see savanna. settings mixed with oozie
 and hadoop settings.  Again, maybe not a big deal, but it might blur the
 lines a little bit.  Personally, I'm okay with this.
 
 Slightly different
 --
 We could also add a 'savanna-configs': {} element to job_configs to
 keep the configuration spaces separate.
 
 But, now we would have 'savanna-configs' (or another name), 'configs',
 'params', and 'args'.  Really? Just how many different types of values
 can we come up with? :)
 
 I lean away from this approach.
 
 Related: breaking up the superset
 -
 
 It is also the case that not every job type has every value type.
 
 Configs   ParamsArgs
 HiveY YN
 Pig Y YY
 MapReduce   Y NN
 JavaY NY
 
 So do we make that explicit in the docs and enforce it in the api with
 errors?
 
 Thoughts? I'm sure there are some :)
 
 Best,
 
 Trevor
 
 
 
 
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Ivan Melnikov
Hi there,

I see lots of unit tests jobs on gate fail with errors like

2014-01-29 10:48:44.933 | File
"/home/jenkins/workspace/gate-taskflow-python26/.tox/py26/lib/python2.6/site-packages/subunit/test_results.py",
line 23, in <module>
2014-01-29 10:48:44.934 | from testtools.compat import all
2014-01-29 10:48:44.935 | ImportError: cannot import name all
2014-01-29 10:48:44.992 | ERROR: InvocationError:
'/home/jenkins/workspace/gate-taskflow-python26/.tox/py26/bin/python
setup.py testr --slowest --testr-args='

Looks like subunit is not compatible with just-released testtools
0.9.35. I guess we will need to pin testtools to 0.9.34 in
test-requirements.txt. Or is there a better solution?
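
As a stop-gap, the pin could be expressed in test-requirements.txt (or the
global requirements) roughly like this; the exact specifier is only an
illustration:

    # temporary: testtools 0.9.35 breaks subunit's import of testtools.compat
    testtools>=0.9.32,<0.9.35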

I filed a bug to subunit upstream:
https://bugs.launchpad.net/subunit/+bug/1274056

I also filed a bug for taskflow, feel free to add your projects there if
it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050

-- 
WBR,
Ivan A. Melnikov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Matthieu Huin
Thanks so much for figuring this out, I was very puzzled by that this
morning, trying to run Keystone tests on my local copy!

Matthieu Huin 

m...@enovance.com

- Original Message -
 From: Ivan Melnikov imelni...@griddynamics.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Wednesday, January 29, 2014 12:07:13 PM
 Subject: [openstack-dev] [all] Lots of gating failures because of testtools
 
 Hi there,
 
 I see lots of unit tests jobs on gate fail with errors like
 
 2014-01-29 10:48:44.933 | File
 /home/jenkins/workspace/gate-taskflow-python26/.tox/py26/lib/python2.6/site-packages/subunit/test_results.py,
 line 23, in module
 2014-01-29 10:48:44.934 | from testtools.compat import all
 2014-01-29 10:48:44.935 | ImportError: cannot import name all
 2014-01-29 10:48:44.992 | ERROR: InvocationError:
 '/home/jenkins/workspace/gate-taskflow-python26/.tox/py26/bin/python
 setup.py testr --slowest --testr-args='
 
 Looks like subunit is not compatible with just-released testtools
 0.9.35. I guess we will need to pin testtools to 0.9.34 in
 test-requirements.txt. Or there are better solution?
   
 I filed a bug to subunit upstream:
 https://bugs.launchpad.net/subunit/+bug/1274056
 
 I also filed a bug for taskflow, feel free to add your projects there if
 it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050
 
 --
 WBR,
 Ivan A. Melnikov
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Dina Belova
Please see that temporary solution: https://review.openstack.org/#/c/69840/

Thanks,
Dina


On Wed, Jan 29, 2014 at 3:18 PM, Matthieu Huin
matthieu.h...@enovance.com wrote:

 Thanks so much for figuring this out, I was very puzzled by that this
 morning,trying to run keystone tests on my local copy !

 Matthieu Huin

 m...@enovance.com

 - Original Message -
  From: Ivan Melnikov imelni...@griddynamics.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, January 29, 2014 12:07:13 PM
  Subject: [openstack-dev] [all] Lots of gating failures because of
 testtools
 
  Hi there,
 
  I see lots of unit tests jobs on gate fail with errors like
 
  2014-01-29 10:48:44.933 | File
 
 /home/jenkins/workspace/gate-taskflow-python26/.tox/py26/lib/python2.6/site-packages/subunit/test_results.py,
  line 23, in module
  2014-01-29 10:48:44.934 | from testtools.compat import all
  2014-01-29 10:48:44.935 | ImportError: cannot import name all
  2014-01-29 10:48:44.992 | ERROR: InvocationError:
  '/home/jenkins/workspace/gate-taskflow-python26/.tox/py26/bin/python
  setup.py testr --slowest --testr-args='
 
  Looks like subunit is not compatible with just-released testtools
  0.9.35. I guess we will need to pin testtools to 0.9.34 in
  test-requirements.txt. Or there are better solution?
 
  I filed a bug to subunit upstream:
  https://bugs.launchpad.net/subunit/+bug/1274056
 
  I also filed a bug for taskflow, feel free to add your projects there if
  it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050
 
  --
  WBR,
  Ivan A. Melnikov
 




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Sylvain Bauza

Le 29/01/2014 12:07, Ivan Melnikov a écrit :

I also filed a bug for taskflow, feel free to add your projects there if
it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050




Climate is also impacted, we can at least declare a recheck with this 
bug number.

-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Sergey Lukjanov
I've proposed temp fix for global requirements:
https://review.openstack.org/#/c/69840/, it's not the best solution, but
looks like the only one now.

logstash link:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkltcG9ydEVycm9yOiBjYW5ub3QgaW1wb3J0IG5hbWUgYWxsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjkwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA5OTMwMjU5MDF9


On Wed, Jan 29, 2014 at 3:18 PM, Matthieu Huin
matthieu.h...@enovance.com wrote:

 Thanks so much for figuring this out, I was very puzzled by that this
 morning,trying to run keystone tests on my local copy !

 Matthieu Huin

 m...@enovance.com

 - Original Message -
  From: Ivan Melnikov imelni...@griddynamics.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
  Sent: Wednesday, January 29, 2014 12:07:13 PM
  Subject: [openstack-dev] [all] Lots of gating failures because of
 testtools
 
  Hi there,
 
  I see lots of unit tests jobs on gate fail with errors like
 
  2014-01-29 10:48:44.933 | File
 
 /home/jenkins/workspace/gate-taskflow-python26/.tox/py26/lib/python2.6/site-packages/subunit/test_results.py,
  line 23, in module
  2014-01-29 10:48:44.934 | from testtools.compat import all
  2014-01-29 10:48:44.935 | ImportError: cannot import name all
  2014-01-29 10:48:44.992 | ERROR: InvocationError:
  '/home/jenkins/workspace/gate-taskflow-python26/.tox/py26/bin/python
  setup.py testr --slowest --testr-args='
 
  Looks like subunit is not compatible with just-released testtools
  0.9.35. I guess we will need to pin testtools to 0.9.34 in
  test-requirements.txt. Or there are better solution?
 
  I filed a bug to subunit upstream:
  https://bugs.launchpad.net/subunit/+bug/1274056
 
  I also filed a bug for taskflow, feel free to add your projects there if
  it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050
 
  --
  WBR,
  Ivan A. Melnikov
 




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Sean Dague
On 01/29/2014 06:24 AM, Sylvain Bauza wrote:
 Le 29/01/2014 12:07, Ivan Melnikov a écrit :
 I also filed a bug for taskflow, feel free to add your projects there if
 it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050

 
 
 Climate is also impacted, we can at least declare a recheck with this
 bug number.
 -Sylvain

Right, but until a testtools fix is released, it won't pass. So please
no rechecks until we have a new testtools from Robert that fixes things.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Havana Release V3 Extensions and new features to quota

2014-01-29 Thread Vinod Kumar Boppanna
Dear Vishvananda,

Sorry for the very late reply. I was stupid not to follow your reply (I had
messed it up somehow).

Actually, I am confused after seeing your mail. In the last two weeks, I was
doing some testing (creating use cases) on Keystone and Nova.

Part 1:  Delegating rights

I had made the following observations using Keystone V3

1. RBAC was not working in Keystone V2 (it was only working in V3)
2. In V3, I could create a role (like 'listRole') and create a user in a
tenant with this role
3. I had changed the RBAC rules in the policy.json file of Keystone to allow a
user with the 'listRole', in addition to admin, to run the list_domains,
list_projects and list_users operations (see the illustrative snippet below)
   (earlier these operations could only be run by admin, or we can say super-user)
4. These settings were successful and working perfectly fine.
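
For illustration, the kind of policy.json change described in point 3 looks
roughly like this (the rule names follow the usual Keystone V3 policy style,
but the exact entries may differ between releases):

    {
        "identity:list_domains": "rule:admin_required or role:listRole",
        "identity:list_projects": "rule:admin_required or role:listRole",
        "identity:list_users": "rule:admin_required or role:listRole"
    }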

My point here is that by playing with RBAC and the V3 APIs of Keystone, I could
delegate rights to users.

So, I thought the same could be achieved in any other service (like Nova).
For example, I thought that in Nova I could also create a role and change the
policy.json file to allow it to do the necessary operations like list, update,
etc.

I could not do this check, because I was not able to run Nova with V3
successfully and also could not find the Nova V3 APIs.

But one thing I guess is missing here (even in Keystone) is that, if we allow a
normal user with a role to do certain operations, then he/she can do those
operations in another domain or another project to which he/she does not belong.
So, I guess this can be checked in the code. Let's use RBAC rules to check whether
a person can run that query or not. Once RBAC says it is allowed, we can check
whether an admin/super-user or a normal user is running that query.
If the user is admin, he can request anything. If the user is a normal
user, then we can check whether he is asking only about his own domain or his own
project. If so, allow it; otherwise raise an error.

Part 2: Quotas

I would also like to discuss with you about quotas.

As you know, the current quota system is de-centralized, and the driver
available in Nova is DbQuotaDriver, which allows setting quotas for a tenant
and for users in the tenant.
I managed to point the quota driver to a new driver called
DomainQuotaDriver (from Tiago Martins and team from HP) in the Nova code. I had
built a test case in which I checked that a tenant quota cannot be greater than
the domain quota in which the tenant is registered. Also, the sum of all tenant
quotas cannot exceed their domain quota. What is missing here are the APIs
to operate on the quotas for domains. I thought of creating these APIs in V2 (as
I could not find V3 APIs in Nova). So, a new level called domain will be added
to the existing quota APIs. For example, the current API
v2/{tenant_id}/os-quota-sets
(http://docs.openstack.org/api/openstack-compute/2/content/GET_os-quota-sets-v2_showQuota_v2__tenant_id__os-quota-sets_ext-os-quota-sets.html)
allows seeing the quotas for a tenant. Probably, this can be changed to
v2/{domain_id}/{tenant_id}/os-quota-sets
to see the quotas for a tenant in a domain.
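
To make the constraint the test case exercises explicit, it can be sketched
roughly as follows (the names are illustrative only, not the DomainQuotaDriver
code itself):

    def check_tenant_quota(domain_limit, tenant_limits, tenant_id, new_limit):
        # Reject a tenant quota that would let tenants exceed their domain quota.
        others = sum(limit for t, limit in tenant_limits.items() if t != tenant_id)
        if new_limit > domain_limit or others + new_limit > domain_limit:
            raise ValueError("tenant quotas would exceed the domain quota")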

I am currently trying to understand the nova-api code to see how an API
mapping is done (through routes) and how an API call actually leads to
a Python function being called. Once I complete this, I will think about
these APIs. Ideally, implementing the extension of domain quotas in the V3 APIs
would have been good. But as I said, I could not find any documentation about
the Nova V3 APIs.


I feel that once we have Part 1 and Part 2, quota delegation is not a big task,
because with RBAC rules we can allow a user, let's say with a tenant admin role,
to set the quotas for all the users in that tenant.


Please post your comments on this. Here at CERN we want to contribute to the
quota management (we earlier thought of centralized quotas, but are currently
going with de-centralized quotas, with the OpenStack services keeping the quota
data). I will wait for your comments to guide us or tell us how we can contribute.

Thanks & Regards,
Vinod Kumar Boppanna



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-29 Thread Sean Dague
On 01/28/2014 10:53 PM, Vishvananda Ishaya wrote:
 
 On Jan 27, 2014, at 6:57 PM, Christopher Yeoh cbky...@gmail.com wrote:
 
 On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague s...@dague.net wrote:

 On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
  Hi Sean,
 
  I'm currently working on moving away from the built-in logging
 to use log_config=filename and the python logging framework so
 that we can start shipping to logstash/sentry/insert other useful
 tool here.
 
  I'd be very interested in getting involved in this, especially
 from a why do we have log messages that are split across multiple
 lines perspective!

 Do we have many that aren't either DEBUG or TRACE? I thought we were
 pretty clean there.

  Cheers,
 
  Matt
 
  P.S. FWIW, I'd also welcome details on what the Audit level
 gives us that the others don't... :)

 Well as far as I can tell the AUDIT level was a prior drive by
 contribution that's not being actively maintained. Honestly, I
 think we
 should probably rip it out, because I don't see any in tree tooling to
 use it, and it's horribly inconsistent.


 For the uses I've seen of it in the nova api code INFO would be
 perfectly fine in place of AUDIT.
 
 +1 AUDIT was added for a specific NASA use case because we needed a
 clean feed of important actions for security compliance and many
 upstream libraries were putting out INFO logs that we did not want
 included.  Safe to rip it out IMO.

Cool.

Thanks for the context Vish, very helpful in understanding why some of
this stuff is there so we can unwind it sanely.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Sylvain Bauza

Le 29/01/2014 12:51, Sean Dague a écrit :

Right, but until a testtools fix is released, it won't pass. So please
no rechecks until we have a new testtools from Robert that fixes things.

-Sean


Indeed you're right. Is there any way to promote some bugs with Gerrit without
doing a recheck, then?


-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread Ulrich Schwickerath

Hi,

I'm working with Vinod. We'd like to join as well. Same issue on our 
side: 16:00 UTC is better for us.


Ulrich and Vinod

On 29.01.2014 10:56, Florent Flament wrote:

Hi Vishvananda,

I would be interested in such a working group.
Can you please confirm the meeting hour for this Friday ?
I've seen 1600 UTC in your email and 2100 UTC in the wiki ( 
https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting ). 
As I'm in Europe I'd prefer 1600 UTC.

Florent Flament

- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, January 28, 2014 7:35:15 PM
Subject: [openstack-dev] Hierarchicical Multitenancy Discussion

Hi Everyone,

I apologize for the obtuse title, but there isn't a better succinct term to 
describe what is needed. OpenStack has no support for multiple owners of 
objects. This means that a variety of private cloud use cases are simply not 
supported. Specifically, objects in the system can only be managed on the 
tenant level or globally.

The key use case here is to delegate administration rights for a group of 
tenants to a specific user/role. There is something in Keystone called a 
“domain” which supports part of this functionality, but without support from 
all of the projects, this concept is pretty useless.

In IRC today I had a brief discussion about how we could address this. I have 
put some details and a straw man up here:

https://wiki.openstack.org/wiki/HierarchicalMultitenancy

I would like to discuss this strawman and organize a group of people to get 
actual work done by having an irc meeting this Friday at 1600UTC. I know this 
time is probably a bit tough for Europe, so if we decide we need a regular 
meeting to discuss progress then we can vote on a better time for this meeting.

https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting

Please note that this is going to be an active team that produces code. We will 
*NOT* spend a lot of time debating approaches, and instead focus on making 
something that works and learning as we go. The output of this team will be a 
MultiTenant devstack install that actually works, so that we can ensure the 
features we are adding to each project work together.

Vish




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Davanum Srinivas
Robert,

Here's a merge request for subunit
https://code.launchpad.net/~subunit/subunit/trunk/+merge/203723

-- dims

On Wed, Jan 29, 2014 at 6:51 AM, Sean Dague s...@dague.net wrote:
 On 01/29/2014 06:24 AM, Sylvain Bauza wrote:
 Le 29/01/2014 12:07, Ivan Melnikov a écrit :
 I also filed a bug for taskflow, feel free to add your projects there if
 it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050



 Climate is also impacted, we can at least declare a recheck with this
 bug number.
 -Sylvain

 Right, but until a testtools fix is released, it won't pass. So please
 no rechecks until we have a new testtools from Robert that fixes things.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-01-29 Thread Khanh-Toan Tran
Dear all,

As promised in the Scheduler/Gantt meeting, here is our analysis on the
connection between Policy Based Scheduler and Solver Scheduler:

https://docs.google.com/document/d/1RfP7jRsw1mXMjd7in72ARjK0fTrsQv1bqolOriIQB2Y

This document briefs the mechanism of the two schedulers and the
possibility of cooperation. It is my personal point of view only.

In a nutshell, Policy Based Scheduler allows the admin to define policies for
different physical resources (an aggregate, an availability zone, or the whole
infrastructure) or different (classes of) users. The admin can modify
(add/remove/modify) any policy at runtime, and the modification takes effect
only in the target (e.g. the aggregate, the users) that the policy is
defined for. Solver Scheduler solves the placement of groups of instances
simultaneously by putting all the known information into an integer linear
system and using an Integer Programming solver to solve it. Thus relations
between VMs, and between VMs and computes, are all accounted for.

If working together, Policy Based Scheduler can supply the filters and
weighers following the policy rules defined for different computes.
These filters and weighers can be converted into constraints & a cost
function for Solver Scheduler to solve. More details can be found in the
doc.
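
To give a flavour of the formulation involved (a generic sketch, not the exact
model in the document): with binary variables x[i][j] = 1 when instance i is
placed on host j, the solver roughly does

    minimize    sum over i,j of cost[i][j] * x[i][j]
    subject to  sum over j of x[i][j] = 1                       (each instance placed once)
                sum over i of ram[i] * x[i][j] <= capacity[j]   (per-host capacity)
                x[i][j] in {0, 1}

where filters become hard constraints (forcing x[i][j] = 0 for disallowed
hosts) and weighers become terms of the cost function.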

I look forward to your comments and hope that we can work it out.

Best regards,

Khanh-Toan TRAN


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

2014-01-29 Thread Rossella Sblendido
Hi Trinath,

you can find more info about third party testing here [1].
Every new driver or plugin is required to provide a testing system that
will test new patches and post a +1/-1 to Gerrit.
There were meetings organized by Kyle to talk about how to set up the
system [2]; it will probably help you if you read the logs of those meetings.

cheers,

Rossella

[1]
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements
[2]
http://lists.openstack.org/pipermail/openstack-dev/2013-December/021882.html


On Wed, Jan 29, 2014 at 7:50 AM, trinath.soman...@freescale.com wrote:

 Hi Akihiro-

 What kind of third party testing is required?

 I have written the driver, unit test case and checked the driver with
 tempest testing.

 Do I need to check with any other third party testing?

 Kindly help me in this regard.

 --
 Trinath Somanchi - B39208
 trinath.soman...@freescale.com | extn: 4048

 -Original Message-
 From: Akihiro Motoki [mailto:mot...@da.jp.nec.com]
 Sent: Friday, January 24, 2014 6:41 PM
 To: openstack-dev@lists.openstack.org
 Cc: kmest...@cisco.com
 Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

 Hi Trinath,

 Jenkins is not directly related to proposing a new code.
 The process to contribute the code is described in the links Andreas
 pointed. There is no difference even if you are writing a new ML2 mech
 driver.

 In addition to the above, Neutron now requires a third party testing for
 all new/existing plugins and drivers [1].
 Are you talking about third party testing for your ML2 mechanism driver
 when you say Jenkins?

 Both two things can be done in parallel, but you need to make your third
 party testing ready before merging your code into the master repository.

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019219.html

 Thanks,
 Akihiro

 (2014/01/24 21:42), trinath.soman...@freescale.com wrote:
  Hi Andreas -
 
  Thanks you for the reply.. It helped me understand the ground work
  required.
 
  But then, I'm writing a new Mechanism driver (FSL SDN Mechanism
  driver) for ML2.
 
  For submitting new file sets, can I go with GIT or require Jenkins for
  the adding the new code for review.
 
  Kindly help me in this regard.
 
  --
  Trinath Somanchi - B39208
  trinath.soman...@freescale.com | extn: 4048
 
  -Original Message-
  From: Andreas Jaeger [mailto:a...@suse.com]
  Sent: Friday, January 24, 2014 4:54 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: Kyle Mestery (kmestery)
  Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron
  (ML2)
 
  On 01/24/2014 12:10 PM, trinath.soman...@freescale.com wrote:
  Hi-
 
 
 
  Need support for ways to contribute code to Neutron regarding the ML2
  Mechanism drivers.
 
 
 
  I have installed Jenkins and created account in github and launchpad.
 
 
 
  Kindly guide me on
 
 
 
  [1] How to configure Jenkins to submit the code for review?
 
  [2] What is the process involved in pushing the code base to the main
  stream for icehouse release?
 
 
 
  Kindly please help me understand the same..
 
  Please read this wiki page completely, it explains the workflow we use.
 
  https://wiki.openstack.org/wiki/GerritWorkflow
 
  Please also read the general intro at
  https://wiki.openstack.org/wiki/HowToContribute
 
  Btw. for submitting patches, you do not need a local Jenkins running,
 
  Welcome to OpenStack, Kyle!
 
  Andreas
  --
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
 SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
  GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272
  A126
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] ryu-ml2-driver

2014-01-29 Thread YAMAMOTO Takashi
hi,

we (Ryu project) are currently working on a new version of
Ryu neutron plugin/agent.  we have a blueprint for it
waiting for review/approval.  can you please take a look?  thanks.
https://blueprints.launchpad.net/neutron/+spec/ryu-ml2-driver

YAMAMOTO Takashi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][swift] Importing Launchpad Answers in Ask OpenStack

2014-01-29 Thread Swapnil Kulkarni
Stef,

Getting the Launchpad Q&A into ask.openstack would really help people, and this
looks really nice (just saw some question-answers). I was not able to search
for questions though, and the (answered/unanswered) question filters are not
working. Just one small question: how will the import happen for future
Launchpad questions? Or will Launchpad questions be disabled, making
ask.openstack the default for OpenStack questions and answers?


Best Regards,
Swapnil
*It's better to SHARE*



On Wed, Jan 29, 2014 at 1:13 PM, atul jha stackera...@gmail.com wrote:




 On Wed, Jan 29, 2014 at 6:08 AM, Stefano Maffulli 
 stef...@openstack.org wrote:

 Hello folks

 we're almost ready to import all questions and asnwers from LP Answers
 into Ask OpenStack.  You can see the result of the import from Nova on
 the staging server http://ask-staging.openstack.org/

 There are some formatting issues for the imported questions and I'm
 trying to evaluate how bad these are.  The questions I see are mostly
 readable and definitely pop up in search results, with their answers so
 they are valuable already as is. Some parts, especially the logs, may
 not look as good though. Fixing the parsers and get a better rendering
 for all imported questions would take an extra 3-5 days of work (maybe
 more) and I'm not sure it's worth it.

 Please go ahead and browse the staging site and let me know what you
 think.

 Cheers,
 stef

 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Great!!

 Cheers!!

 --


 Atul Jha
 http://atuljha.com
 (irc.freenode.net:koolhead17)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] - Cloud federation on top of the Apache

2014-01-29 Thread Marek Denis

On 28.01.2014 21:44, Adam Young wrote:


To be clear, are you going to use mod_mellon as the Apache Auth module?


I am leaning towards mod_shib, as at least in theory it handles the ECP
extension, and I am not so sure mod_mellon does.


Adam, do you have any experience at Red Hat with the ECP SAML extension, or
have you used only WebSSO?


--
Marek Denis
[marek.de...@cern.ch]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-29 Thread Doug Hellmann
On Tue, Jan 28, 2014 at 8:47 PM, Ben Nemec openst...@nemebean.com wrote:

  On 2014-01-27 11:42, Doug Hellmann wrote:

  We have a blueprint open for separating translated log messages into
 different domains so the translation team can prioritize them differently
 (focusing on errors and warnings before debug messages, for example) [1].
 Some concerns were raised related to the review [2], and I would like to
 address those in this thread and see if we can reach consensus about how to
 proceed.

 The implementation in [2] provides a set of new marker functions similar
 to _(), one for each log level (we have _LE, _LW, _LI, _LD, etc.). These
 would be used in conjunction with _(), and reserved for log messages.
 Exceptions, API messages, and other user-facing messages all would still be
 marked for translation with _() and would (I assume) receive the highest
 priority work from the translation team.

 When the string extraction CI job is updated, we will have one main
 catalog for each app or library, and additional catalogs for the log
 levels. Those show up in transifex separately, but will be named in a way
 that they are obviously related. Each translation team will be able to
 decide, based on the requirements of their users, how to set priorities for
 translating the different catalogs.

 Existing strings being sent to the log and marked with _() will be removed
 from the main catalog and moved to the appropriate log-level-specific
 catalog when their marker function is changed. My understanding is that
 transifex is smart enough to recognize the same string from more than one
 source, and to suggest previous translations when it sees the same text.
 This should make it easier for the translation teams to catch up by
 reusing the translations they have already done, in the new catalogs.

 One concern that was raised was the need to mark all of the log messages
 by hand. I investigated using extraction patterns like LOG.debug( and
 LOG.info(, but because of the way the translation actually works
 internally we cannot do that. There are a few related reasons.

 In other applications, the function _() translates a string at the point
 where it is invoked, and returns a new string object. OpenStack has a
 requirement that messages be translated multiple times, whether in the API
 or the LOG (there is already support for logging in more than one language,
 to different log files). This requirement means we delay the translation
 operation until right before the string is output, at which time we know
 the target language. We could update the log functions to create Message
 objects dynamically, except...

 Each app or library that uses the translation code will need its own
 domain for the message catalogs. We get around that right now by not
 translating many messages from the libraries, but that's obviously not what
 we want long term (we at least want exceptions translated). If we had a
 special version of a logger in oslo.log that knew how to create Message
 objects for the format strings used in logging (the first argument to
 LOG.debug for example), it would also have to know what translation domain
 to use so the proper catalog could be loaded. The wrapper functions defined
 in the patch [2] include this information, and can be updated to be
 application or library specific when oslo.log eventually becomes its own
 library.

 Further, as part of moving the logging code from oslo-incubator to
 oslo.log, and making our logging something we can use from other OpenStack
 libraries, we are trying to change the implementation of the logging code
 so it is no longer necessary to create loggers with our special wrapper
 function. That would mean that oslo.log will be a library for *configuring*
 logging, but the actual log calls can be handled with Python's standard
 library, eliminating a dependency between new libraries and oslo.log. (This
 is a longer, and separate, discussion, but I mention it here as backround.
 We don't want to change the API of the logger in oslo.log because we don't
 want to be using it directly in the first place.)

 Another concern raised was the use of a prefix _L for these functions,
 since it ties the priority definitions to logs. I chose that prefix as an
 explicit indicate that these *are* just for logs. I am not associating any
 actual priority with them. The translators want us to move the log messages
 out of the main catalog. Having them all in separate catalogs is a
 refinement that gives them what they want -- some translators don't care
 about log messages at all, some only care about errors, etc. We decided
 that the translators should set priorities, and we would make that possible
 by separating the catalogs into logical groups. Everything marked with _()
 will still go into the main catalog, but beyond that it isn't up to the
 developers to indicate priority for translations.

 The alternative approach of using babel translator comments would, under
 other 

Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-29 Thread Justin Santa Barbara
Certainly my original inclination (and code!) was to agree with you Vish, but:

1) It looks like we're going to have writable metadata anyway, for
communication from the instance to the API.
2) I believe the restrictions make it impractical to abuse it as a
message-bus: size-limits, quotas and write-once make it very poorly
suited for anything queue like.
3) Anything that isn't opt-in will likely have security implications
which means that it won't get deployed.  This must be deployed to be
useful.

In short: I agree that it's not the absolute ideal solution (for me,
that would be no opt-in), but it feels like the best solution given
that we must have opt-in, or else e.g. HP won't deploy it.  It uses a
(soon to be) existing mechanism, and is readily extensible without
breaking APIs.

On your idea of scoping by security group, I believe a certain someone
is looking at supporting hierarchical projects, so we will likely need
to support more advanced logic here later anyway.  For example:  the
ability to specify whether an entry should be shared with instances in
child projects.  This will likely take the form of a sort of selector
language, so I anticipate we could offer a filter on security groups
as well if this is useful.  We might well also allow selection by
instance tags.  The approach allows this, though I would like to keep
it as simple as possible at first (share with other instances in
project or don't share)

Justin


On Tue, Jan 28, 2014 at 10:39 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 On Jan 28, 2014, at 12:17 PM, Justin Santa Barbara jus...@fathomdb.com 
 wrote:

 Thanks John - combining with the existing effort seems like the right
 thing to do (I've reached out to Claxton to coordinate).  Great to see
 that the larger issues around quotas / write-once have already been
 agreed.

 So I propose that sharing will work in the same way, but some values
 are visible across all instances in the project.  I do not think it
 would be appropriate for all entries to be shared this way.  A few
 options:

 1) A separate endpoint for shared values
 2) Keys are shared iff  e.g. they start with a prefix, like 'peers_XXX'
 3) Keys are set the same way, but a 'shared' parameter can be passed,
 either as a query parameter or in the JSON.

 I like option #3 the best, but feedback is welcome.

 I think I will have to store the value using a system_metadata entry
 per shared key.  I think this avoids issues with concurrent writes,
 and also makes it easier to have more advanced sharing policies (e.g.
 when we have hierarchical projects)

 Thank you to everyone for helping me get to what IMHO is a much better
 solution than the one I started with!

 Justin

 I am -1 on the post data. I think we should avoid using the metadata service
 as a cheap queue for communicating across vms and this moves strongly in
 that direction.

 I am +1 on providing a list of ip addresses in the current security group(s)
 via metadata. I like limiting by security group instead of project because
 this could prevent the 1000 instance case where people have large shared
 tenants and it also provides a single tenant a way to have multiple 
 autodiscoverd
 services. Also the security group info is something that neutron has access
 to so the neutron proxy should be able to generate the necessary info if
 neutron is in use.

 Just as an interesting side note, we put this vm list in way back in the NASA
 days as an easy way to get mpi clusters running. In this case we grouped the
 instances by the key_name used to launch the instance instead of security 
 group.
 I don't think it occurred to us to use security groups at the time.  Note we
 also provided the number of cores, but this was for convienience because the
 mpi implementation didn't support discovering number of cores. Code below.

 Vish

 $ git show 2cf40bb3
 commit 2cf40bb3b21d33f4025f80d175a4c2ec7a2f8414
 Author: Vishvananda Ishaya vishvana...@yahoo.com
 Date:   Thu Jun 24 04:11:54 2010 +0100

 Adding mpi data

 diff --git a/nova/endpoint/cloud.py b/nova/endpoint/cloud.py
 index 8046d42..74da0ee 100644
 --- a/nova/endpoint/cloud.py
 +++ b/nova/endpoint/cloud.py
 @@ -95,8 +95,21 @@ class CloudController(object):
  def get_instance_by_ip(self, ip):
  return self.instdir.by_ip(ip)

 +def _get_mpi_data(self, project_id):
 +result = {}
 +for node_name, node in self.instances.iteritems():
 +for instance in node.values():
 +if instance['project_id'] == project_id:
 +line = '%s slots=%d' % (instance['private_dns_name'], 
 instance.get('vcpus', 0))
 +if instance['key_name'] in result:
 +result[instance['key_name']].append(line)
 +else:
 +result[instance['key_name']] = [line]
 +return result
 +
  def get_metadata(self, ip):
  i = self.get_instance_by_ip(ip)
 +mpi = 

[openstack-dev] [neutron] [ml2] The impending plethora of ML2 MechanismDrivers

2014-01-29 Thread Kyle Mestery
Folks:

As you can see from our meeting agenda for today [1], we are tracking
a large number of new ML2 MechanismDrivers at the moment. We plan
to discuss these again this week in the ML2 meeting [2]
at 1600 UTC in #openstack-meeting-alt. Also, it would be great if each
MechanismDriver had a representative at these weekly meetings. We
are currently discussing some changes to port binding in ML2, so this
may affect your MechanismDriver.

Thanks, and see you in the weekly ML2 meeting in a few hours!
Kyle

[1] https://wiki.openstack.org/wiki/Meetings/ML2
[2] https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Trevor McKay
So, assuming we go forward with this, the followup question is whether
or not to move main_class and java_opts for Java actions into
edp.java.main_class and edp.java.java_opts configs.

I think yes.
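
To make that concrete, a Java action's job_configs could then look roughly like the sketch below; the exact key names are still being decided, so treat them as assumptions:

    # Hypothetical Java action using "edp."-prefixed entries in 'configs'
    # instead of separate top-level fields:
    job_configs = {
        'configs': {
            'edp.java.main_class': 'org.apache.hadoop.examples.WordCount',
            'edp.java.java_opts': '-Xmx512m',
            # ordinary hadoop/oozie settings continue to live here unchanged
            'mapred.reduce.tasks': '1',
        },
        'params': {},
        'args': ['swift://container/input', 'swift://container/output'],
    }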

Best,

Trevor

On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
 On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
  Thank you for bringing this up, Trevor.
  
  EDP gets more diverse and it's time to change its model.
  I totally agree with your proposal, but one minor comment.
  Instead of a "savanna." prefix in job_configs, wouldn't it be better to make it
  "edp."? I think "savanna." is too broad a word for this.
 
 +1, brilliant. EDP is perfect.  I was worried about the scope of
 savanna. too.
 
  And one more bureaucratic thing... I see you already started implementing
  it [1], and it is named and tracked as a new EDP workflow [2]. I think a new
  blueprint should be created for this feature to track all code changes as
  well as docs updates. By docs I mean the public Savanna docs about EDP, the
  REST API docs, and samples.
 
 Absolutely, I can make it new blueprint.  Thanks.
 
  [1] https://review.openstack.org/#/c/69712
  [2] 
  https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
  
  Regards,
  Alexander Ignatov
  
  
  
  On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:
  
   Hello all,
   
   In our first pass at EDP, the model for job settings was very consistent
   across all of our job types. The execution-time settings fit into this
   (superset) structure:
   
   job_configs = {'configs': {}, # config settings for oozie and hadoop
                  'params': {},  # substitution values for Pig/Hive
                  'args': []}    # script args (Pig and Java actions)
   
   But we have some things that don't fit (and probably more in the
   future):
   
   1) Java jobs have 'main_class' and 'java_opts' settings
 Currently these are handled as additional fields added to the
   structure above.  These were the first to diverge.
   
   2) Streaming MapReduce (anticipated) requires mapper and reducer
   settings (different than the mapred..class settings for
   non-streaming MapReduce)
   
   Problems caused by adding fields
   
   The job_configs structure above is stored in the database. Each time we
   add a field to the structure above at the level of configs, params, and
   args, we force a change to the database tables, a migration script and a
   change to the JSON validation for the REST api.
   
   We also cause a change for python-savannaclient and potentially other
   clients.
   
   This kind of change seems bad.
   
   Proposal: Borrow a page from Oozie and add savanna. configs
   -
   I would like to fit divergent job settings into the structure we already
   have.  One way to do this is to leverage the 'configs' dictionary.  This
   dictionary primarily contains settings for hadoop, but there are a
   number of oozie.xxx settings that are passed to oozie as configs or
   set by oozie for the benefit of running apps.
   
   What if we allow savanna. settings to be added to configs?  If we do
   that, any and all special configuration settings for specific job types
   or subtypes can be handled with no database changes and no api changes.
   
   Downside
   
   Currently, all 'configs' are rendered in the generated oozie workflow.
   The savanna. settings would be stripped out and processed by Savanna,
   thereby changing that behavior a bit (maybe not a big deal)
   
   We would also be mixing savanna. configs with config_hints for jobs,
   so users would potentially see savanna. settings mixed with oozie
   and hadoop settings.  Again, maybe not a big deal, but it might blur the
   lines a little bit.  Personally, I'm okay with this.
   
   Slightly different
   --
   We could also add a 'savanna-configs': {} element to job_configs to
   keep the configuration spaces separate.
   
   But, now we would have 'savanna-configs' (or another name), 'configs',
   'params', and 'args'.  Really? Just how many different types of values
   can we come up with? :)
   
   I lean away from this approach.
   
   Related: breaking up the superset
   -
   
   It is also the case that not every job type has every value type.
   
               Configs   Params    Args
   Hive        Y         Y         N
   Pig         Y         Y         Y
   MapReduce   Y         N         N
   Java        Y         N         Y
   
   So do we make that explicit in the docs and enforce it in the api with
   errors?
   
   Thoughts? I'm sure there are some :)
   
   Best,
   
   Trevor
   
   
   
   
   
   
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  

Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Jon Maron
I imagine ‘neutron’ would follow suit as well..

On Jan 29, 2014, at 9:23 AM, Trevor McKay tmc...@redhat.com wrote:

 So, assuming we go forward with this, the followup question is whether
 or not to move main_class and java_opts for Java actions into
 edp.java.main_class and edp.java.java_opts configs.
 
 I think yes.
 
 Best,
 
 Trevor
 
 On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
 On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
 Thank you for bringing this up, Trevor.
 
 EDP gets more diverse and it's time to change its model.
 I totally agree with your proposal, but one minor comment.
 Instead of savanna. prefix in job_configs wouldn't it be better to make it
 as edp.? I think savanna. is too more wide word for this.
 
 +1, brilliant. EDP is perfect.  I was worried about the scope of
 savanna. too.
 
 And one more bureaucratic thing... I see you already started implementing 
 it [1], 
 and it is named and goes as new EDP workflow [2]. I think new bluprint 
 should be 
 created for this feature to track all code changes as well as docs updates. 
 Docs I mean public Savanna docs about EDP, rest api docs and samples.
 
 Absolutely, I can make it new blueprint.  Thanks.
 
 [1] https://review.openstack.org/#/c/69712
 [2] 
 https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
 
 Regards,
 Alexander Ignatov
 
 
 
 On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:
 
 Hello all,
 
 In our first pass at EDP, the model for job settings was very consistent
 across all of our job types. The execution-time settings fit into this
 (superset) structure:
 
 job_configs = {'configs': {}, # config settings for oozie and hadoop
   'params': {},  # substitution values for Pig/Hive
   'args': []}# script args (Pig and Java actions)
 
 But we have some things that don't fit (and probably more in the
 future):
 
 1) Java jobs have 'main_class' and 'java_opts' settings
  Currently these are handled as additional fields added to the
 structure above.  These were the first to diverge.
 
 2) Streaming MapReduce (anticipated) requires mapper and reducer
 settings (different than the mapred..class settings for
 non-streaming MapReduce)
 
 Problems caused by adding fields
 
 The job_configs structure above is stored in the database. Each time we
 add a field to the structure above at the level of configs, params, and
 args, we force a change to the database tables, a migration script and a
 change to the JSON validation for the REST api.
 
 We also cause a change for python-savannaclient and potentially other
 clients.
 
 This kind of change seems bad.
 
 Proposal: Borrow a page from Oozie and add savanna. configs
 -
 I would like to fit divergent job settings into the structure we already
 have.  One way to do this is to leverage the 'configs' dictionary.  This
 dictionary primarily contains settings for hadoop, but there are a
 number of oozie.xxx settings that are passed to oozie as configs or
 set by oozie for the benefit of running apps.
 
 What if we allow savanna. settings to be added to configs?  If we do
 that, any and all special configuration settings for specific job types
 or subtypes can be handled with no database changes and no api changes.
 
 Downside
 
 Currently, all 'configs' are rendered in the generated oozie workflow.
 The savanna. settings would be stripped out and processed by Savanna,
 thereby changing that behavior a bit (maybe not a big deal)
 
 We would also be mixing savanna. configs with config_hints for jobs,
 so users would potentially see savanna. settings mixed with oozie
 and hadoop settings.  Again, maybe not a big deal, but it might blur the
 lines a little bit.  Personally, I'm okay with this.
 
 Slightly different
 --
 We could also add a 'savanna-configs': {} element to job_configs to
 keep the configuration spaces separate.
 
 But, now we would have 'savanna-configs' (or another name), 'configs',
 'params', and 'args'.  Really? Just how many different types of values
 can we come up with? :)
 
 I lean away from this approach.
 
 Related: breaking up the superset
 -
 
 It is also the case that not every job type has every value type.
 
            Configs   Params    Args
 Hive       Y         Y         N
 Pig        Y         Y         Y
 MapReduce  Y         N         N
 Java       Y         N         Y
 
 So do we make that explicit in the docs and enforce it in the api with
 errors?
 
 Thoughts? I'm sure there are some :)
 
 Best,
 
 Trevor
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys are saying that implementing 
binding:profile in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.

We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.

Another thing is that we need to define the binding:profile dictionary.
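
To make the discussion concrete, a strawman of what a port-create request might carry for an SR-IOV port is sketched below; every key name here is a placeholder for discussion, not a settled definition, and whether vnic_type sits at the top level of the binding or inside the profile is exactly the open question above:

    # Strawman port attributes for an SR-IOV port (illustrative names only).
    port = {
        'port': {
            'network_id': 'NET-UUID',
            'binding:vnic_type': 'direct',        # or e.g. 'macvtap' / 'normal'
            'binding:profile': {
                'pci_vendor_info': '8086:10ed',   # vendor_id:product_id
                'pci_slot': '0000:08:10.1',       # PCI address on the host
                'physical_network': 'physnet1',
            },
        },
    }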

Thanks,
Robert



On 1/29/14 4:02 AM, Irena Berezovsky 
ire...@mellanox.commailto:ire...@mellanox.com wrote:

Will attend

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova V2 Quota API

2014-01-29 Thread Vinod Kumar Boppanna
Hi,

In the Documentation, it was mentioned that there are two API's to see the 
quotas of a tenant.

1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant

2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to 
show quotas for a specified tenant and a user

I guess the first API can be used by a member in a tenant to get the quotas of 
that tenant. The second one can be run by admin to get the quotas of any tenant 
or any user.

But through normal user when i am running any of the below (after 
authentication)

$ nova --debug quota-show --tenant <tenant_id>    (tenant id of a project in 
which this user is a member)
It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id}

or even when i am calling directly the API

$  curl -i -HX-Auth-Token:$TOKEN -H Content-type: application/json 
http://localhost:8774/v2/<tenant_id>/os-quota-sets/
It says the Resource not found.

So, is the first API available?

Regards,
Vinod Kumar Boppanna
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread Telles Nobrega
Hi,

I'm also working with multitenancy and I would like to join this working
group.

Telles Nóbrega


On Wed, Jan 29, 2014 at 9:14 AM, Ulrich Schwickerath 
ulrich.schwicker...@cern.ch wrote:

 Hi,

 I'm working with Vinod. We'd like to join as well. Same issue on our side:
 16:00 UTC is better for us.

 Ulrich and Vinod


 On 29.01.2014 10:56, Florent Flament wrote:

 Hi Vishvananda,

 I would be interested in such a working group.
 Can you please confirm the meeting hour for this Friday ?
 I've seen 1600 UTC in your email and 2100 UTC in the wiki (
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_
 Multitenancy_Meeting ). As I'm in Europe I'd prefer 1600 UTC.

 Florent Flament

 - Original Message -
 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, January 28, 2014 7:35:15 PM
 Subject: [openstack-dev] Hierarchicical Multitenancy Discussion

 Hi Everyone,

 I apologize for the obtuse title, but there isn't a better succinct term
 to describe what is needed. OpenStack has no support for multiple owners of
 objects. This means that a variety of private cloud use cases are simply
 not supported. Specifically, objects in the system can only be managed on
 the tenant level or globally.

 The key use case here is to delegate administration rights for a group of
 tenants to a specific user/role. There is something in Keystone called a
 domain which supports part of this functionality, but without support
 from all of the projects, this concept is pretty useless.

 In IRC today I had a brief discussion about how we could address this. I
 have put some details and a straw man up here:

 https://wiki.openstack.org/wiki/HierarchicalMultitenancy

 I would like to discuss this strawman and organize a group of people to
 get actual work done by having an irc meeting this Friday at 1600UTC. I
 know this time is probably a bit tough for Europe, so if we decide we need
 a regular meeting to discuss progress then we can vote on a better time for
 this meeting.

 https://wiki.openstack.org/wiki/Meetings#Hierarchical_
 Multitenancy_Meeting

 Please note that this is going to be an active team that produces code.
 We will *NOT* spend a lot of time debating approaches, and instead focus on
 making something that works and learning as we go. The output of this team
 will be a MultiTenant devstack install that actually works, so that we can
 ensure the features we are adding to each project work together.

 Vish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
--
Telles Mota Vidal Nobrega
Bsc in Computer Science at UFCG
Developer at PulsarOpenStack Project - HP/LSD-UFCG
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer]bp:send-data-to-ceilometer

2014-01-29 Thread Gordon Chung
 Meter Names:
 fanspeed, fanspeed.min, fanspeed.max, fanspeed.status
 voltage, voltage.min, voltage.max, voltage.status
 temperature, temperature.min, temperature.max, temperature.status
 
 'FAN 1': {
 'current_value': '4652',
 'min_value': '4200',
 'max_value': '4693',
 'status': 'ok'
 }
 'FAN 2': {
 'current_value': '4322',
 'min_value': '4210',
 'max_value': '4593',
 'status': 'ok'
 },
 'voltage': {
 'Vcore': {
 'current_value': '0.81',
 'min_value': '0.80',
 'max_value': '0.85',
 'status': 'ok'
 },
 '3.3VCC': {
 'current_value': '3.36',
 'min_value': '3.20',
 'max_value': '3.56',
 'status': 'ok'
 },
 ...
 }
 }

Are FAN 1, FAN 2, Vcore, etc. variable names or values that would 
consistently show up? If the former, would it make sense to have the 
meters be similar to fanspeed:trait where trait is FAN1, FAN2, etc.? 
If the meter is just fanspeed, what would the volume be? FAN 1's 
current_value?
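
For comparison, the two naming schemes could be sketched roughly as below; the resource-metadata layout is a guess for discussion, not the proposed implementation:

    # Option A: one meter per sensor, the sensor name carried in the meter name,
    # e.g. fanspeed:FAN 1 -> volume 4652, fanspeed:FAN 2 -> volume 4322.
    # Option B: a single 'fanspeed' meter with one sample per sensor, the
    # sensor name pushed into resource metadata.
    samples = [
        {'meter': 'fanspeed', 'volume': 4652,
         'resource_metadata': {'sensor': 'FAN 1', 'status': 'ok'}},
        {'meter': 'fanspeed', 'volume': 4322,
         'resource_metadata': {'sensor': 'FAN 2', 'status': 'ok'}},
    ]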

cheers,

gordon chung
openstack, ibm software standards
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova V2 Quota API

2014-01-29 Thread Anne Gentle
Hi, can you point out where you're seeing documentation for the first API
without the tenant_id?

At http://api.openstack.org/api-ref-compute-ext.html#ext-os-quota-sets only
the tenant_id is documented.

This is documented identically at
http://docs.openstack.org/api/openstack-compute/2/content/ext-os-quota-sets.html

Let us know where you're seeing the misleading documentation so we can log
a bug and fix it.
Anne


On Wed, Jan 29, 2014 at 8:48 AM, Vinod Kumar Boppanna 
vinod.kumar.boppa...@cern.ch wrote:

  Hi,

 In the Documentation, it was mentioned that there are two API's to see the
 quotas of a tenant.

 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant

 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin
 to show quotas for a specified tenant and a user

 I guess the first API can be used by a member in a tenant to get the
 quotas of that tenant. The second one can be run by admin to get the quotas
 of any tenant or any user.

 But through normal user when i am running any of the below (after
 authentication)

 $ nova --debug quota-show --tenant tenant_id(tenant id of a project
 in which this user is member)
 It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id}

 or even when i am calling directly the API

 $  curl -i -HX-Auth-Token:$TOKEN -H Content-type: application/json
 http://localhost:8774/v2/<tenant_id>/os-quota-sets/
 It says the Resource not found.

 So, Is the first API is available?

 Regards,
 Vinod Kumar Boppanna

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova V2 Quota API

2014-01-29 Thread Yingjun Li

On Jan 29, 2014, at 22:48, Vinod Kumar Boppanna vinod.kumar.boppa...@cern.ch 
wrote:

 Hi,
 
 In the Documentation, it was mentioned that there are two API's to see the 
 quotas of a tenant.
 
 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
  
 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to 
 show quotas for a specified tenant and a user
 
 I guess the first API can be used by a member in a tenant to get the quotas 
 of that tenant. The second one can be run by admin to get the quotas of any 
 tenant or any user.
 
 But through normal user when i am running any of the below (after 
 authentication)
 
 $ nova --debug quota-show --tenant tenant_id(tenant id of a project in 
 which this user is member)
 It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id} 
 
 or even when i am calling directly the API 
 
 $  curl -i -HX-Auth-Token:$TOKEN -H Content-type: application/json 
 http://localhost:8774/v2/tenant_id/os-quota-sets/

I think the documentation is missing the tenant_id after os-quota-sets/.
It should be like: curl -i -H "X-Auth-Token: $TOKEN" -H "Content-type: 
application/json" http://localhost:8774/v2/<tenant_id>/os-quota-sets/<tenant_id>

 It says the Resource not found.
 
 So, Is the first API is available?
 
 Regards,
 Vinod Kumar Boppanna
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana Release V3 Extensions and new features to quota

2014-01-29 Thread Vishvananda Ishaya

On Jan 29, 2014, at 3:55 AM, Vinod Kumar Boppanna 
vinod.kumar.boppa...@cern.ch wrote:

 Dear Vishvananda,
 
 Sorry for the very late reply. I failed to follow up on your reply (I had 
 missed it somehow). 
 
 Actually, I am confused after seeing your mail. In the last two weeks, I was 
 doing some testing (creating use cases) on Keystone and Nova.
 
 Part 1:  Delegating rights 
 
 I had made the following observations using Keystone V3
 
 1. RBAC rules were not working in Keystone V2 (they were only working in V3).
 2. In V3, I could create a role (like 'listRole') and create a user in a 
 tenant with this role.
 3. I changed the RBAC rules in Keystone's policy.json file to allow a user 
 with the 'listRole', in addition to admin, to run the list_domains, 
 list_projects and list_users operations (earlier these operations could only 
 be run by the admin, or we can say super-user). A sketch of the kind of 
 policy entries involved is shown after this list.
 4. These settings were successful and working perfectly fine.
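
 A sketch of the policy entries involved in item 3, shown here as a Python dict mirroring policy.json; the exact rule strings depend on the Keystone release, so they are only illustrative:

     # Allow either the admin role or the custom 'listRole' to run the
     # list operations (rule names follow the Keystone V3 policy file;
     # exact strings may differ per release).
     policy_rules = {
         "identity:list_domains": "role:admin or role:listRole",
         "identity:list_projects": "role:admin or role:listRole",
         "identity:list_users": "role:admin or role:listRole",
     }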
 
 My point here is that by playing with RBAC in the Keystone V3 APIs, I could 
 delegate rights to users. 
 
 So, I thought the same could be achieved in any other service (like Nova). 
 For example, I thought that in Nova I could also create a role and change the 
 policy.json file to allow it to do the necessary operations like list, 
 update, etc.
 
 I could not do this check, because I wasn't able to run Nova with the V3 API 
 successfully and also could not find the Nova V3 APIs.
 
 But one thing I guess is missing here (even in Keystone): if we allow a 
 normal user with a role to do certain operations, then he/she can do those 
 operations in another domain or another project to which he does not belong.
 So, I guess this can be checked in the code. Let's use RBAC rules to check 
 whether a person can run that query or not. Once RBAC says it is allowed, we 
 can check whether an admin/super-user or a normal user is running that query.
 If the user is admin, he can request anything. If the user is a normal 
 user, then we can check whether he is asking only for his own domain or 
 project. If so, allow it; otherwise raise an error.

This idea is great in principle, but “asking only for his domain or his project” 
doesn’t make any sense in this case. In nova, objects are explicitly owned by a 
project. There is no way to determine if an object is part of a domain, so 
roles in that sense are non-functional. This is true across projects and is 
something that needs to be addressed.

 
 Part 2: Quotas
 
 I would also like to discuss with you about quotas. 
 
 As you know, the current quota system is de-centralized and the driver 
 available in nova is DbQuotaDriver, which allows to set quotas for a tenant 
 and users in the tenant. 
 I managed to point the quota driver to a new driver called DomainQuotaDriver 
 (from Tiago Martins and team at HP) in the nova code. I built a test case in 
 which I checked that a tenant quota cannot be greater than the quota of the 
 domain in which the tenant is registered. Likewise, the sum of all tenant 
 quotas cannot exceed their domain quota. What is missing here are the APIs to 
 operate on the quotas for domains. I thought of creating these APIs in V2 (as 
 I could not find V3 APIs in nova). So, a new level called domain will be 
 added to the existing quota APIs. For example, the current API 
 v2/{tenant_id}/os-quota-sets allows one to see the quotas for a tenant. 
 Probably this can be changed to v2/{domain_id}/{tenant_id}/os-quota-sets 
 to see the quotas for a tenant in a domain. 
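
 For illustration, a minimal sketch of the kind of check described above; the function and argument names are made up, and a real DomainQuotaDriver would plug into nova's quota driver interface:

     # Hypothetical helper: refuse a tenant quota that would exceed the domain
     # quota, either on its own or combined with the other tenants' quotas.
     def validate_tenant_quota(domain_quota, other_tenant_quotas, new_value):
         """domain_quota: limit for one resource at the domain level.
         other_tenant_quotas: {tenant_id: limit} for the domain's other tenants.
         new_value: proposed limit for the tenant being updated."""
         if new_value > domain_quota:
             raise ValueError("tenant quota exceeds domain quota")
         if sum(other_tenant_quotas.values()) + new_value > domain_quota:
             raise ValueError("sum of tenant quotas exceeds domain quota")
         return True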

Again this makes sense in principle. We do have the domain in the request 
context from keystone. Unfortunately, once again there is no mapping of domain 
to object so there is no way to count the existing objects to determine how 
much has already been used.

If you can make the Hierarchical Ownership meeting tomorrow we will discuss 
addressing these and other issues so that we can at the very least have a 
prototype solution.

Vish
 
 I am currently trying to understand the nova-api code to see how the API 
 mapping is done (through routes) and how an API call actually leads to a 
 Python function being called. Once I complete this, I will start thinking 
 about these APIs. Ideally, implementing the domain quota extension in the V3 
 APIs would have been good, but as I said, I could not find any documentation 
 about the Nova V3 APIs.
 
 
 I feel that once we have Part 1 and Part 2, quota delegation is not a big 
 task, because with RBAC rules we can allow a user with, let's say, a tenant 
 admin role to set the quotas for all the users in that tenant. 
 
 
 Please post your comments on this. Here at CERN we want to contribute to 
 quota management (we earlier thought of centralized quotas, but are currently 
 going with de-centralized quotas, with the OpenStack services keeping the 
 quota data). I will wait for your comments to guide us or tell us how we can 
 contribute.
 
 Thanks  Regards,
 Vinod Kumar Boppanna
 
 
 
 

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-29 Thread Robert Li (baoli)
Hi Yongli,

Thank you for addressing my comments, and for adding the encryption card
use case. One thing that I want to point out is that in this use case, you
may not use the pci-flavor in the --nic option because it's not a neutron
feature.

I have a few more questions:
1. pci-flavor-attrs is configured through configuration files and will be
available on both the controller node and the compute nodes. Can the cloud
admin decide to add a new attribute in a running cloud? If that's
possible, how is that done?
2. A PCI flavor will be defined using the attributes in pci-flavor-attrs. A
flavor is defined with a matching expression in the form of attr1 = val11
[| val12 ...], [attr2 = val21 [| val22 ...]], .... This expression is used
to match one or more PCI stats groups until a free PCI device is located.
In this case, both attr1 and attr2 can have multiple values, and both
attributes need to be satisfied. Please confirm this understanding is
correct (a sketch of this matching is included after question 3 below).
3. I'd like to see an example that involves multiple attributes. Let's say
pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
know how PCI stats groups are formed on compute nodes based on that, and
how many PCI stats groups there would be. What are the reasonable guidelines
for defining the PCI flavors?
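
To make question 2 concrete, the matching could be thought of roughly as sketched below; stats groups (question 3) would then be buckets of free devices keyed by the same attribute set. All names and values here are illustrative assumptions:

    # Sketch: a flavor spec maps each attribute to its set of acceptable
    # values; every listed attribute must be satisfied by the device.
    def device_matches_flavor(device, flavor_spec):
        # device, e.g.      {'vendor_id': 'v1', 'device_id': '0xa', 'net-group': 'phy1'}
        # flavor_spec, e.g. {'vendor_id': set(['v1', 'v2']), 'device_id': set(['0xa'])}
        return all(device.get(attr) in allowed
                   for attr, allowed in flavor_spec.items())

    # PCI stats groups on a compute node would then be counts of free devices
    # grouped by the tuple of configured pci-flavor-attrs, e.g.
    # ('gpu', 'net-group', 'device_id', 'product_id') -> number of free devices.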


thanks,
Robert



On 1/28/14 10:16 PM, Robert Li (baoli) ba...@cisco.com wrote:

Hi,

I added a few comments in this wiki that Yongli came up with:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support

Please check it out and look for Robert in the wiki.

Thanks,
Robert

On 1/21/14 9:55 AM, Robert Li (baoli) ba...@cisco.com wrote:

Yunhong, 

Just try to understand your use case:
-- a VM can only work with cards from vendor V1
-- a VM can work with cards from both vendor V1 and V2

  So stats in the two flavors will overlap in the PCI flavor
solution.
I'm just trying to say that this is something that needs to be properly
addressed.


Just for the sake of discussion, another solution to meeting the above
requirement is able to say in the nova flavor's extra-spec

   encrypt_card = card from vendor V1 OR encrypt_card = card from
vendor V2


In other words, this can be solved in the nova flavor, rather than
introducing a new flavor.

Thanks,
Robert
   

On 1/17/14 7:03 PM, yunhong jiang yunhong.ji...@linux.intel.com
wrote:

On Fri, 2014-01-17 at 22:30 +, Robert Li (baoli) wrote:
 Yunhong,
 
 I'm hoping that these comments can be directly addressed:
   a practical deployment scenario that requires arbitrary
 attributes.

I'm just strongly against to support only one attributes (your PCI
group) for scheduling and management, that's really TOO limited.

A simple scenario is, I have 3 encryption card:
 Card 1 (vendor_id is V1, device_id =0xa)
 card 2(vendor_id is V1, device_id=0xb)
 card 3(vendor_id is V2, device_id=0xb)

 I have two images. One image only supports Card 1 and another image
supports Card 1/3 (or any other combination of the 3 card types). I don't
think only one attribute will meet such a requirement.

As to arbitrary attributes or limited list of attributes, my opinion is,
as there are so many type of PCI devices and so many potential of PCI
devices usage, support arbitrary attributes will make our effort more
flexible, if we can push the implementation into the tree.

   detailed design on the following (that also take into account
 the
 introduction of predefined attributes):
 * PCI stats report since the scheduler is stats based

I don't think there are much difference with current implementation.

 * the scheduler in support of PCI flavors with arbitrary
 attributes and potential overlapping.

As Ian said, we need make sure the pci_stats and the PCI flavor have the
same set of attributes, so I don't think there are much difference with
current implementation.

   networking requirements to support multiple provider
 nets/physical
 nets

Can't the extra info resolve this issue? Can you elaborate the issue?

Thanks
--jyh
 
 I guess that the above will become clear as the discussion goes on.
 And we
 also need to define the deliveries
  
 Thanks,
 Robert 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Sergey Lukjanov
Trevor,

it sounds reasonable to move main_class and java_opts to edp.java.

Jon,

Do you mean neutron-related info for namespaces support? If yes, then
neutron isn't a user-side config.

Thanks.


On Wed, Jan 29, 2014 at 6:37 PM, Jon Maron jma...@hortonworks.com wrote:

 I imagine 'neutron' would follow suit as well..

 On Jan 29, 2014, at 9:23 AM, Trevor McKay tmc...@redhat.com wrote:

  So, assuming we go forward with this, the followup question is whether
  or not to move main_class and java_opts for Java actions into
  edp.java.main_class and edp.java.java_opts configs.
 
  I think yes.
 
  Best,
 
  Trevor
 
  On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
  On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
  Thank you for bringing this up, Trevor.
 
  EDP gets more diverse and it's time to change its model.
  I totally agree with your proposal, but one minor comment.
  Instead of savanna. prefix in job_configs wouldn't it be better to
 make it
  as edp.? I think savanna. is too more wide word for this.
 
  +1, brilliant. EDP is perfect.  I was worried about the scope of
  savanna. too.
 
  And one more bureaucratic thing... I see you already started
 implementing it [1],
  and it is named and goes as new EDP workflow [2]. I think new bluprint
 should be
  created for this feature to track all code changes as well as docs
 updates.
  Docs I mean public Savanna docs about EDP, rest api docs and samples.
 
  Absolutely, I can make it new blueprint.  Thanks.
 
  [1] https://review.openstack.org/#/c/69712
  [2]
 https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
 
  Regards,
  Alexander Ignatov
 
 
 
  On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:
 
  Hello all,
 
  In our first pass at EDP, the model for job settings was very
 consistent
  across all of our job types. The execution-time settings fit into this
  (superset) structure:
 
  job_configs = {'configs': {}, # config settings for oozie and hadoop
'params': {},  # substitution values for Pig/Hive
'args': []}# script args (Pig and Java actions)
 
  But we have some things that don't fit (and probably more in the
  future):
 
  1) Java jobs have 'main_class' and 'java_opts' settings
   Currently these are handled as additional fields added to the
  structure above.  These were the first to diverge.
 
  2) Streaming MapReduce (anticipated) requires mapper and reducer
  settings (different than the mapred..class settings for
  non-streaming MapReduce)
 
  Problems caused by adding fields
  
  The job_configs structure above is stored in the database. Each time
 we
  add a field to the structure above at the level of configs, params,
 and
  args, we force a change to the database tables, a migration script
 and a
  change to the JSON validation for the REST api.
 
  We also cause a change for python-savannaclient and potentially other
  clients.
 
  This kind of change seems bad.
 
  Proposal: Borrow a page from Oozie and add savanna. configs
  -
  I would like to fit divergent job settings into the structure we
 already
  have.  One way to do this is to leverage the 'configs' dictionary.
  This
  dictionary primarily contains settings for hadoop, but there are a
  number of oozie.xxx settings that are passed to oozie as configs or
  set by oozie for the benefit of running apps.
 
  What if we allow savanna. settings to be added to configs?  If we do
  that, any and all special configuration settings for specific job
 types
  or subtypes can be handled with no database changes and no api
 changes.
 
  Downside
  
  Currently, all 'configs' are rendered in the generated oozie workflow.
  The savanna. settings would be stripped out and processed by
 Savanna,
  thereby changing that behavior a bit (maybe not a big deal)
 
  We would also be mixing savanna. configs with config_hints for jobs,
  so users would potentially see savanna. settings mixed with
 oozie
  and hadoop settings.  Again, maybe not a big deal, but it might blur
 the
  lines a little bit.  Personally, I'm okay with this.
 
  Slightly different
  --
  We could also add a 'savanna-configs': {} element to job_configs to
  keep the configuration spaces separate.
 
  But, now we would have 'savanna-configs' (or another name), 'configs',
  'params', and 'args'.  Really? Just how many different types of values
  can we come up with? :)
 
  I lean away from this approach.
 
  Related: breaking up the superset
  -
 
  It is also the case that not every job type has every value type.
 
              Configs   Params    Args
   Hive       Y         Y         N
   Pig        Y         Y         Y
   MapReduce  Y         N         N
   Java       Y         N         Y
 
  So do we make that explicit in the docs and enforce it in the api 

[openstack-dev] [nova][neutron][ml2] Proposal to support VIF security, PCI-passthru/SR-IOV, and other binding-specific data

2014-01-29 Thread Robert Kukura
The neutron patch [1] and nova patch [2], proposed to resolve the
"get_firewall_required should use VIF parameter from neutron" bug [3],
replace the binding:capabilities attribute in the neutron portbindings
extension with a new binding:vif_security attribute that is a dictionary
with several keys defined to control VIF security. When using the ML2
plugin, this binding:vif_security attribute flows from the bound
MechanismDriver to nova's GenericVIFDriver.

Separately, work on PCI-passthru/SR-IOV for ML2 also requires
binding-specific information to flow from the bound MechanismDriver to
nova's GenericVIFDriver. See [4] for links to various documents and BPs
on this.

A while back, in reviewing [1], I suggested a general mechanism to allow
ML2 MechanismDrivers to supply arbitrary port attributes in order to
meet both the above requirements. That approach was incorporated into
[1] and has been cleaned up and generalized a bit in [5].

I'm now becoming convinced that proliferating new port attributes for
various data passed from the neutron plugin (the bound MechanismDriver
in the case of ML2) to nova's GenericVIFDriver is not such a great idea.
One issue is that adding attributes keeps changing the API, but this
isn't really a user-facing API. Another is that all ports should have
the same set of attributes, so the plugin still has to be able to supply
those attributes when a bound MechanismDriver does not supply them. See [5].

Instead, I'm proposing here that the binding:vif_security attribute
proposed in [1] and [2] be renamed binding:vif_details, and used to
transport whatever data needs to flow from the neutron plugin (i.e.
ML2's bound MechanismDriver) to the nova GenericVIFDriver. This same
dictionary attribute would be able to carry the VIF security key/value
pairs defined in [1], those needed for [4], as well as any needed for
future GenericVIFDriver features. The set of key/value pairs in
binding:vif_details that apply would depend on the value of
binding:vif_type.
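
As an illustration only (the actual key names would be defined in the BP), the attributes of two bound ports might then look something like:

    # Hypothetical examples; which keys appear in binding:vif_details
    # depends on the value of binding:vif_type.
    ovs_port = {
        'binding:vif_type': 'ovs',
        'binding:vif_details': {
            'port_filter': True,           # VIF security data as in [1]/[2]
            'ovs_hybrid_plug': True,
        },
    }
    sriov_port = {
        'binding:vif_type': 'hw_veb',      # made-up SR-IOV vif_type
        'binding:vif_details': {
            'pci_slot': '0000:08:10.1',    # data needed for PCI-passthru [4]
            'vlan': '100',
        },
    }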

If this proposal is agreed to, I can quickly write a neutron BP covering
this and provide a generic implementation for ML2. Then [1] and [2]
could be updated to use binding:vif_details for the VIF security data
and eliminate the existing binding:capabilities attribute.

If we take this proposed approach of using binding:vif_details, the
internal ML2 handling of binding:vif_type and binding:vif_details could
either take the approach used for binding:vif_type and
binding:capabilities in the current code, where the values are stored in
the port binding DB table, or take the approach in [5], where they are
obtained from the bound MechanismDriver when needed. Comments on
these options are welcome.

Please provide feedback on this proposal and the various options in this
email thread and/or at today's ML2 sub-team meeting.

Thanks,

-Bob

[1] https://review.openstack.org/#/c/21946/
[2] https://review.openstack.org/#/c/44596/
[3] https://bugs.launchpad.net/nova/+bug/1112912
[4] https://wiki.openstack.org/wiki/Meetings/Passthrough
[5] https://review.openstack.org/#/c/69783/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Irena Berezovsky
Hi Robert,
I think that I can go with Bob's suggestion, but I think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack 
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys are saying that implementing 
binding:profile in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.
[IrenaB] Basically you  are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store relevant attributes. Making it work for ML2 means not adding 
this support to PortBindingMixin.

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
[IrenaB] As long as existing binding capable Mech Drivers will take vnic_type 
into its consideration, I guess doing it via binding:profile will introduce 
less changes all over (CLI, API). But I am not sure this reason is strong 
enough to choose this direction
We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on 
direction with vnic_type

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regards to PCI SRIOV related attributes, right?

Thanks,
Robert



On 1/29/14 4:02 AM, Irena Berezovsky 
ire...@mellanox.commailto:ire...@mellanox.com wrote:

Will attend

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Nova] [oslo] [Ceilometer] about notifications : huge and may be non secure

2014-01-29 Thread Swann Croiset
Hi stackers,

I would like to share my wonder here about Notifications.

I'm working [1] on Heat notifications and I noticed that:
1/ Heat uses its context to store 'password'
2/ Heat and Nova store 'auth_token' in their contexts too. I didn't check other
projects, except for Neutron, which doesn't store auth_token.

This information is consequently sent through their notifications.

I guess we consider the broker to be secured, and the network communications
with services too, BUT shouldn't we delete this data anyway? IIRC it is never
used (at least by Ceilometer), and removing it would throw away the security
question entirely.

My other concern is the size (in kB) of notifications: 70% of it is the
auth_token (with PKI)!
We can reduce the volume drastically and easily by deleting this data from
notifications.
I know that RabbitMQ (or others) is very robust and can handle this volume,
but when I see this kind of improvement, I'm tempted to do it.

I see an easy way to fix that in oslo-incubator [2]:
delete these keys from the context if present, driven by a config option, with
password and auth_token removed by default.
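
A minimal sketch of that idea; the option name and its placement are assumptions, and the real change would of course go through oslo-incubator review:

    # Hypothetical scrubbing of sensitive context keys before a notification
    # is emitted, driven by a config option.
    from oslo.config import cfg

    scrub_opt = cfg.ListOpt('notification_scrubbed_context_keys',
                            default=['password', 'auth_token'],
                            help='Context keys removed from notifications')
    cfg.CONF.register_opt(scrub_opt)

    def scrub_context(ctxt_dict):
        """Return a copy of the serialized context without sensitive keys."""
        clean = dict(ctxt_dict)
        for key in cfg.CONF.notification_scrubbed_context_keys:
            clean.pop(key, None)
        return clean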

thoughts?

[1]
https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
[2]
https://github.com/openstack/oslo-incubator/blob/master/openstack/common/notifier/rpc_notifier.py
and others
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova V2 Quota API

2014-01-29 Thread Yingjun Li
I reported a bug here: https://bugs.launchpad.net/openstack-manuals/+bug/1274153

On Jan 29, 2014, at 23:33, Anne Gentle a...@openstack.org wrote:

 Hi can you point out where you're seeing documentation for the first without 
 tenant_id? 
 
 At http://api.openstack.org/api-ref-compute-ext.html#ext-os-quota-sets only 
 the tenant_id is documented. 
 
 This is documented identically at 
 http://docs.openstack.org/api/openstack-compute/2/content/ext-os-quota-sets.html
 
 Let us know where you're seeing the misleading documentation so we can log a 
 bug and fix it.
 Anne
 
 
 On Wed, Jan 29, 2014 at 8:48 AM, Vinod Kumar Boppanna 
 vinod.kumar.boppa...@cern.ch wrote:
 Hi,
 
 In the Documentation, it was mentioned that there are two API's to see the 
 quotas of a tenant.
 
 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant
  
 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin to 
 show quotas for a specified tenant and a user
 
 I guess the first API can be used by a member in a tenant to get the quotas 
 of that tenant. The second one can be run by admin to get the quotas of any 
 tenant or any user.
 
 But through normal user when i am running any of the below (after 
 authentication)
 
 $ nova --debug quota-show --tenant tenant_id(tenant id of a project in 
 which this user is member)
 It is calling the second API i.e  v2/{tenant_id}/os-quota-sets/{tenant_id} 
 
 or even when i am calling directly the API 
 
 $  curl -i -HX-Auth-Token:$TOKEN -H Content-type: application/json 
 http://localhost:8774/v2/tenant_id/os-quota-sets/
 It says the Resource not found.
 
 So, Is the first API is available?
 
 Regards,
 Vinod Kumar Boppanna
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread Vishvananda Ishaya
I apologize for the confusion. The Wiki time of 2100 UTC is the correct time 
(Noon Pacific time). We can move the next meeting to a different day/time that 
is more convenient for Europe.

Vish


On Jan 29, 2014, at 1:56 AM, Florent Flament 
florent.flament-...@cloudwatt.com wrote:

 Hi Vishvananda,
 
 I would be interested in such a working group.
 Can you please confirm the meeting hour for this Friday ?
 I've seen 1600 UTC in your email and 2100 UTC in the wiki ( 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting ). 
 As I'm in Europe I'd prefer 1600 UTC.
 
 Florent Flament
 
 - Original Message -
 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, January 28, 2014 7:35:15 PM
 Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
 
 Hi Everyone,
 
 I apologize for the obtuse title, but there isn't a better succinct term to 
 describe what is needed. OpenStack has no support for multiple owners of 
 objects. This means that a variety of private cloud use cases are simply not 
 supported. Specifically, objects in the system can only be managed on the 
 tenant level or globally.
 
 The key use case here is to delegate administration rights for a group of 
 tenants to a specific user/role. There is something in Keystone called a 
 “domain” which supports part of this functionality, but without support from 
 all of the projects, this concept is pretty useless.
 
 In IRC today I had a brief discussion about how we could address this. I have 
 put some details and a straw man up here:
 
 https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
 I would like to discuss this strawman and organize a group of people to get 
 actual work done by having an irc meeting this Friday at 1600UTC. I know this 
 time is probably a bit tough for Europe, so if we decide we need a regular 
 meeting to discuss progress then we can vote on a better time for this 
 meeting.
 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
 Please note that this is going to be an active team that produces code. We 
 will *NOT* spend a lot of time debating approaches, and instead focus on 
 making something that works and learning as we go. The output of this team 
 will be a MultiTenant devstack install that actually works, so that we can 
 ensure the features we are adding to each project work together.
 
 Vish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-29 Thread Vishvananda Ishaya

On Jan 29, 2014, at 5:26 AM, Justin Santa Barbara jus...@fathomdb.com wrote:

 Certainly my original inclination (and code!) was to agree with you Vish, but:
 
 1) It looks like we're going to have writable metadata anyway, for
 communication from the instance to the API.
 2) I believe the restrictions make it impractical to abuse it as a
 message-bus: size-limits, quotas and write-once make it very poorly
 suited for anything queue like.
 3) Anything that isn't opt-in will likely have security implications
 which means that it won't get deployed.  This must be deployed to be
 useful.

Fair enough. I agree that there are significant enough security implications
to skip the simple version.

Vish

 
 In short: I agree that it's not the absolute ideal solution (for me,
 that would be no opt-in), but it feels like the best solution given
 that we must have opt-in, or else e.g. HP won't deploy it.  It uses a
 (soon to be) existing mechanism, and is readily extensible without
 breaking APIs.
 
 On your idea of scoping by security group, I believe a certain someone
 is looking at supporting hierarchical projects, so we will likely need
 to support more advanced logic here later anyway.  For example:  the
 ability to specify whether an entry should be shared with instances in
 child projects.  This will likely take the form of a sort of selector
 language, so I anticipate we could offer a filter on security groups
 as well if this is useful.  We might well also allow selection by
 instance tags.  The approach allows this, though I would like to keep
 it as simple as possible at first (share with other instances in
 project or don't share)
 
 Justin
 
 
 On Tue, Jan 28, 2014 at 10:39 PM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 
 On Jan 28, 2014, at 12:17 PM, Justin Santa Barbara jus...@fathomdb.com 
 wrote:
 
 Thanks John - combining with the existing effort seems like the right
 thing to do (I've reached out to Claxton to coordinate).  Great to see
 that the larger issues around quotas / write-once have already been
 agreed.
 
 So I propose that sharing will work in the same way, but some values
 are visible across all instances in the project.  I do not think it
 would be appropriate for all entries to be shared this way.  A few
 options:
 
 1) A separate endpoint for shared values
 2) Keys are shared iff  e.g. they start with a prefix, like 'peers_XXX'
 3) Keys are set the same way, but a 'shared' parameter can be passed,
 either as a query parameter or in the JSON.
 
 I like option #3 the best, but feedback is welcome.
 
 I think I will have to store the value using a system_metadata entry
 per shared key.  I think this avoids issues with concurrent writes,
 and also makes it easier to have more advanced sharing policies (e.g.
 when we have hierarchical projects)
 
 Thank you to everyone for helping me get to what IMHO is a much better
 solution than the one I started with!
 
 Justin
 
 I am -1 on the post data. I think we should avoid using the metadata service
 as a cheap queue for communicating across vms and this moves strongly in
 that direction.
 
 I am +1 on providing a list of ip addresses in the current security group(s)
 via metadata. I like limiting by security group instead of project because
 this could prevent the 1000 instance case where people have large shared
 tenants and it also provides a single tenant a way to have multiple 
  autodiscovered
 services. Also the security group info is something that neutron has access
 to so the neutron proxy should be able to generate the necessary info if
 neutron is in use.
 
 Just as an interesting side note, we put this vm list in way back in the NASA
 days as an easy way to get mpi clusters running. In this case we grouped the
 instances by the key_name used to launch the instance instead of security 
 group.
 I don't think it occurred to us to use security groups at the time.  Note we
  also provided the number of cores, but this was for convenience because the
 mpi implementation didn't support discovering number of cores. Code below.
 
 Vish
 
 $ git show 2cf40bb3
 commit 2cf40bb3b21d33f4025f80d175a4c2ec7a2f8414
 Author: Vishvananda Ishaya vishvana...@yahoo.com
 Date:   Thu Jun 24 04:11:54 2010 +0100
 
Adding mpi data
 
 diff --git a/nova/endpoint/cloud.py b/nova/endpoint/cloud.py
 index 8046d42..74da0ee 100644
 --- a/nova/endpoint/cloud.py
 +++ b/nova/endpoint/cloud.py
 @@ -95,8 +95,21 @@ class CloudController(object):
 def get_instance_by_ip(self, ip):
 return self.instdir.by_ip(ip)
 
 +def _get_mpi_data(self, project_id):
 +result = {}
 +for node_name, node in self.instances.iteritems():
 +for instance in node.values():
 +if instance['project_id'] == project_id:
 +line = '%s slots=%d' % (instance['private_dns_name'], 
 instance.get('vcpus', 0))
 +if instance['key_name'] in result:
 +

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Irena,

I'm now even more confused. I must have missed something. See inline….

Thanks,
Robert

On 1/29/14 10:19 AM, Irena Berezovsky 
ire...@mellanox.commailto:ire...@mellanox.com wrote:

Hi Robert,
I think that I can go with Bob’s suggestion, but think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.commailto:rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys are saying that implementing 
binding:profile in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.
[IrenaB] Basically you  are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store relevant attributes. Making it work for ML2 means not adding 
this support to PortBindingMixin.

[ROBERT] does that mean binding:profile for PCI can't be used by non-ml2 plugin?

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
[IrenaB] As long as existing binding capable Mech Drivers will take vnic_type 
into its consideration, I guess doing it via binding:profile will introduce 
less changes all over (CLI, API). But I am not sure this reason is strong 
enough to choose this direction.
We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on 
direction with vnic_type

[ROBERT] Can you let me know where in the code binding:profile is supported? In 
portbindings_db.py, the PortBindingPort model doesn't have a column for 
binding:profile. So I guess that I must have missed it.
Regarding BPs for the CLI/API, we are planning to add vnic-type and profileid 
in the CLI, also the new keys in binding:profile. Are you saying no changes are 
needed (say display them, interpret the added cli arguments, etc), therefore no 
new BPs are needed for them?

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regards to PCI SRIOV related attributes, right?

[ROBERT] yes.
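
For the sake of discussion, a hedged sketch of what such a dictionary might
contain for an SR-IOV port; the key names below are placeholders, since
defining the actual key set is exactly the open item:

# Hypothetical binding:profile content for an SR-IOV port (keys are
# placeholders, not an agreed definition):
binding_profile = {
    'pci_slot': '0000:03:10.1',       # PCI address of the VF passed to the guest
    'pci_vendor_info': '8086:10ed',   # vendor:product of the device
    'physical_network': 'physnet1',   # provider network the PF is attached to
}
# If vnic_type ends up inside the profile rather than as binding:vnic_type:
binding_profile['vnic_type'] = 'direct'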


Thanks,
Robert



On 1/29/14 4:02 AM, Irena Berezovsky ire...@mellanox.com wrote:

Will attend

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 12:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Folks,

Can we have one more meeting tomorrow? I'd like to discuss the blueprints we 
are going to have and what each BP will be covering.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread Vishvananda Ishaya
For those of you in Europe, I would appreciate your attendance at 2100 UTC if 
you can make it. I know this is a bad time for you, so I will also jump in 
#openstack-meeting-alt on Friday at 1600 UTC. We can have an impromptu 
discussion there so I can incorporate your knowledge and feedback into the 2100 
meeting.

Thanks!

Vish

On Jan 29, 2014, at 7:59 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 I apologize for the confusion. The Wiki time of 2100 UTC is the correct time 
 (Noon Pacific time). We can move the next meeting to a different day/time 
 that is more convenient for Europe.
 
 Vish
 
 
 On Jan 29, 2014, at 1:56 AM, Florent Flament 
 florent.flament-...@cloudwatt.com wrote:
 
 Hi Vishvananda,
 
 I would be interested in such a working group.
 Can you please confirm the meeting hour for this Friday ?
 I've seen 1600 UTC in your email and 2100 UTC in the wiki ( 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting 
 ). As I'm in Europe I'd prefer 1600 UTC.
 
 Florent Flament
 
 - Original Message -
 From: Vishvananda Ishaya vishvana...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, January 28, 2014 7:35:15 PM
 Subject: [openstack-dev] Hierarchicical Multitenancy Discussion
 
 Hi Everyone,
 
 I apologize for the obtuse title, but there isn't a better succinct term to 
 describe what is needed. OpenStack has no support for multiple owners of 
 objects. This means that a variety of private cloud use cases are simply not 
 supported. Specifically, objects in the system can only be managed on the 
 tenant level or globally.
 
 The key use case here is to delegate administration rights for a group of 
 tenants to a specific user/role. There is something in Keystone called a 
 “domain” which supports part of this functionality, but without support from 
 all of the projects, this concept is pretty useless.
 
 In IRC today I had a brief discussion about how we could address this. I 
 have put some details and a straw man up here:
 
 https://wiki.openstack.org/wiki/HierarchicalMultitenancy
 
 I would like to discuss this strawman and organize a group of people to get 
 actual work done by having an irc meeting this Friday at 1600UTC. I know 
 this time is probably a bit tough for Europe, so if we decide we need a 
 regular meeting to discuss progress then we can vote on a better time for 
 this meeting.
 
 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting
 
 Please note that this is going to be an active team that produces code. We 
 will *NOT* spend a lot of time debating approaches, and instead focus on 
 making something that works and learning as we go. The output of this team 
 will be a MultiTenant devstack install that actually works, so that we can 
 ensure the features we are adding to each project work together.
 
 Vish
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread demontie

Hi,

I'm working with multitenancy and I also want to join this working group, 
but I'm not sure whether I can attend the meeting this Friday.


Demontiê Santos

On 2014-01-29 12:59, Vishvananda Ishaya wrote:

I apologize for the confusion. The Wiki time of 2100 UTC is the
correct time (Noon Pacific time). We can move the next meeting to a
different day/time that is more convenient for Europe.

Vish


On Jan 29, 2014, at 1:56 AM, Florent Flament
florent.flament-...@cloudwatt.com wrote:

Hi Vishvananda,

I would be interested in such a working group.
Can you please confirm the meeting hour for this Friday ?
I've seen 1600 UTC in your email and 2100 UTC in the wiki ( 
https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting 
). As I'm in Europe I'd prefer 1600 UTC.


Florent Flament

- Original Message -
From: Vishvananda Ishaya vishvana...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org

Sent: Tuesday, January 28, 2014 7:35:15 PM
Subject: [openstack-dev] Hierarchicical Multitenancy Discussion

Hi Everyone,

I apologize for the obtuse title, but there isn't a better succinct 
term to describe what is needed. OpenStack has no support for multiple 
owners of objects. This means that a variety of private cloud use cases 
are simply not supported. Specifically, objects in the system can only 
be managed on the tenant level or globally.


The key use case here is to delegate administration rights for a group 
of tenants to a specific user/role. There is something in Keystone 
called a “domain” which supports part of this functionality, but 
without support from all of the projects, this concept is pretty 
useless.


In IRC today I had a brief discussion about how we could address this. 
I have put some details and a straw man up here:


https://wiki.openstack.org/wiki/HierarchicalMultitenancy

I would like to discuss this strawman and organize a group of people to 
get actual work done by having an irc meeting this Friday at 1600UTC. I 
know this time is probably a bit tough for Europe, so if we decide we 
need a regular meeting to discuss progress then we can vote on a better 
time for this meeting.


https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting

Please note that this is going to be an active team that produces code. 
We will *NOT* spend a lot of time debating approaches, and instead 
focus on making something that works and learning as we go. The output 
of this team will be a MultiTenant devstack install that actually 
works, so that we can ensure the features we are adding to each project 
work together.


Vish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova V2 Quota API

2014-01-29 Thread Anne Gentle
Thanks, it's confirmed and the doc team can work on it. Appreciate you
asking!

Anne


On Wed, Jan 29, 2014 at 9:44 AM, Yingjun Li liyingjun1...@gmail.com wrote:

 I reported a bug here:
 https://bugs.launchpad.net/openstack-manuals/+bug/1274153

 On Jan 29, 2014, at 23:33, Anne Gentle a...@openstack.org wrote:

 Hi, can you point out where you're seeing documentation for the first one
 without tenant_id?

 At http://api.openstack.org/api-ref-compute-ext.html#ext-os-quota-sets only 
 the tenant_id is documented.

 This is documented identically at
 http://docs.openstack.org/api/openstack-compute/2/content/ext-os-quota-sets.html

 Let us know where you're seeing the misleading documentation so we can log
 a bug and fix it.
 Anne


 On Wed, Jan 29, 2014 at 8:48 AM, Vinod Kumar Boppanna 
 vinod.kumar.boppa...@cern.ch wrote:

  Hi,

 In the documentation, it was mentioned that there are two APIs to see
 the quotas of a tenant.

 1. v2/{tenant_id}/os-quota-sets - Shows quotas for a tenant

 2. v2/{tenant_id}/os-quota-sets/{tenant_id}/{user_id} - Enables an admin
 to show quotas for a specified tenant and a user

 I guess the first API can be used by a member in a tenant to get the
 quotas of that tenant. The second one can be run by an admin to get the quotas
 of any tenant or any user.

 But as a normal user, when I am running any of the below (after
 authentication):

 $ nova --debug quota-show --tenant tenant_id   (tenant ID of a
 project in which this user is a member)
 It is calling the second API, i.e.
 v2/{tenant_id}/os-quota-sets/{tenant_id}

 Or even when I am calling the API directly:

 $ curl -i -H "X-Auth-Token: $TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/{tenant_id}/os-quota-sets/
 (with {tenant_id} being the actual tenant ID, e.g. 2665b63d29a1493990ab1c5412fc838d)
 It says the Resource not found.

 So, is the first API available?

 Regards,
 Vinod Kumar Boppanna

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Ceilometer]bp:send-data-to-ceilometer

2014-01-29 Thread Devananda van der Veen
On Wed, Jan 29, 2014 at 7:22 AM, Gordon Chung chu...@ca.ibm.com wrote:

  Meter Names:
  fanspeed, fanspeed.min, fanspeed.max, fanspeed.status
  voltage, voltage.min, voltage.max, voltage.status
  temperature, temperature.min, temperature.max, temperature.status
 
  'FAN 1': {
  'current_value': '4652',
  'min_value': '4200',
  'max_value': '4693',
  'status': 'ok'
  }
  'FAN 2': {
  'current_value': '4322',
  'min_value': '4210',
  'max_value': '4593',
  'status': 'ok'
  },
  'voltage': {
  'Vcore': {
  'current_value': '0.81',
  'min_value': '0.80',
  'max_value': '0.85',
  'status': 'ok'
  },
  '3.3VCC': {
  'current_value': '3.36',
  'min_value': '3.20',
  'max_value': '3.56',
  'status': 'ok'
  },
  ...
  }
  }


 are FAN 1, FAN 2, Vcore, etc. variable names, or values that would
 consistently show up? If the former, would it make sense to have the meters
 be similar to fanspeed:trait where trait is FAN1, FAN2, etc.? If the
 meter is just fanspeed, what would the volume be? FAN 1's current_value?


Different hardware will expose a different number of each of these things. In
Haomeng's first proposal, all hardware would expose a fanspeed and a
voltage category, but with a variable number of meters in each category.
In the second proposal, it looks like there are no categories and hardware
exposes a variable number of meters whose names adhere to some consistent
structure (e.g., FAN ? and V???).

It looks to me like the question is whether or not to use categories to
group similar meters.

-Devananda
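
To make the two options concrete, here is a small illustrative sketch (not
from the blueprint) of how the nested sensor data shown earlier could be
flattened into meter names, with and without a category level:

# Illustrative only: two possible ways to derive meter names from the
# nested sensor data in Haomeng's example.
sensors = {
    'fanspeed': {'FAN 1': {'current_value': '4652'},
                 'FAN 2': {'current_value': '4322'}},
    'voltage': {'Vcore': {'current_value': '0.81'},
                '3.3VCC': {'current_value': '3.36'}},
}

# Option 1: keep a category level, one meter per category+sensor.
with_categories = {
    '%s.%s' % (category, name): float(data['current_value'])
    for category, readings in sensors.items()
    for name, data in readings.items()
}
# e.g. {'fanspeed.FAN 1': 4652.0, 'voltage.Vcore': 0.81, ...}

# Option 2: no category level; the sensor name alone identifies the meter.
without_categories = {
    name: float(data['current_value'])
    for readings in sensors.values()
    for name, data in readings.items()
}
# e.g. {'FAN 1': 4652.0, 'Vcore': 0.81, ...}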
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [policy] More conflict resolution

2014-01-29 Thread Tim Hinrichs
Hi Manuel,

Responses inline.  Thanks for the feedback!

- Original Message -
| From: Manuel Stein (Manuel) manuel.st...@alcatel-lucent.com
| To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
| Sent: Friday, January 24, 2014 2:03:43 AM
| Subject: Re: [openstack-dev] More conflict resolution
| 
| Tim,
| 
| w.r.t. different tenants I might be missing something - why should
| policies remain stored per-user? In general, when the user creates
| something, wouldn't the user's policies (more like
| preferences/template) be applied to and saved for the tenant/created
| elements they're active in? IMHO you can't solve the policy
| anomalies when you don't know yet whether they'll ever be applied to
| the same entity or never actually conflict.

Each policy has a src and destination group to which it applies.  Groups 
include apps, VMs, logical ports, or whatever.  Group membership can change.  
Because groups are dynamic it may be that at the time the policy is written 
there are no conflicts but that as elements are added to the groups, conflicts 
arise.  And since each policy only applies to some packets, there can be 
conflicts for some communications and not others.

| 
| FYI: Ehab Al-Shaer (UNCC) is well known in IEEE regarding policies
| and has formalized policy anomalies and investigated their
| detection, also in distributed systems.

Thanks for the pointer.  I quickly perused his publication list and will return 
to it when I have more time to browse.

| 
| The scenarios in 1) could be solved with priorities and different
| corrective actions.
| I'd say an admin rule has a higher priority than the non-admin one,
| in which case both should be informed about the precedence taken.
| The cases user-vs-user and admin-vs-admin shouldn't allow to apply
| conflicting rules on the same entity. Two admins share the
| responsibility within a tenant/project and rules should be visible
| to one another. Same for the user group. I wouldn't know how to deal
| with hidden user-specific rules that somehow interfere with and
| shadow my already applied policies.

Priorities seem like the right solution for admin-tenant conflicts.  Are there 
any other roles that should have priorities?

I think that disallowing conflicting policies is a bit difficult, esp in the 
tenant-tenant case b/c the two tenants may not know each other.  It might be 
disconcerting for a tenant to have her policy rejected b/c someone she didn't 
know wrote a policy she's not allowed to see that may potentially conflict with 
her policy at some point in the future.  An alternative to disallowing 
conflicting policies from is to combine the policies of the same priority into 
a single policy (conceptually) and then apply the conflict-resolution scheme 
for that single policy.   Thoughts?

| 
| as of 2) at creation or runtime
| Either way you'd want a policy anomaly detected at creation, i.e.
| when a user's rule is applied. Either the new rule's priority is
| lower and hence shadowed by the higher priority or the new rule's
| priority is higher and supersedes actions of another. In either case
| you'd want the anomaly detected and corrective action taken at the
| time it is applied (Supersede and email the non-admin, report the
| user which rules shadow/generalize which, etc, etc). The
| conflicts/status (overruled/active/etc) should be part of the
| applied rules set.

What if the rules only conflict sometimes (e.g. for some packets), e.g. one 
policy says to ALLOW packets with dest port 80 and another rule says to DROP 
packets with src port 80.  For most packets, there's no conflict, but if the 
dest-port is 80 and the src-port is 80, then there's a conflict.  So (a) it's 
not as simple as marking a rule as overruled/active/etc., (b) computationally 
it may be non-trivial to identify rules that sometimes conflict, and (c) the 
users writing the policies may know that the packets that cause conflicts will 
never appear in the network.
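
To make the partial-conflict point concrete, a minimal sketch (not from the
policy proposal itself) of two rules that only clash for packets where both
the source and destination port are 80:

# Illustrative only: rules that conflict just for a subset of packets.
ALLOW_DST_80 = {'action': 'ALLOW', 'match': {'dst_port': 80}}
DROP_SRC_80 = {'action': 'DROP', 'match': {'src_port': 80}}

def matches(rule, packet):
    # A rule applies when every field in its match is equal in the packet.
    return all(packet.get(k) == v for k, v in rule['match'].items())

def conflicting(rule_a, rule_b, packet):
    # Both rules apply to the packet but prescribe different actions.
    return (matches(rule_a, packet) and matches(rule_b, packet)
            and rule_a['action'] != rule_b['action'])

print(conflicting(ALLOW_DST_80, DROP_SRC_80,
                  {'src_port': 52311, 'dst_port': 80}))  # False: no conflict
print(conflicting(ALLOW_DST_80, DROP_SRC_80,
                  {'src_port': 80, 'dst_port': 80}))     # True: conflict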

| 
| as of 3) role changes
| my gut feeling was to say that rules keep their priorities, because
| they've been made by an admin/non-admin at that time. The
| suddenly-an-admin user could remove the shadowing rules if it bugs
| the user-rules.

Role changes are complicated for 2 reasons: (i) we need to have a way of 
finding out about them and (ii) once we have the information we need to decide 
what to do with it.  Any thoughts on how to accomplish (i)?

As for (ii), I worry about leaving rule priorities as they were before the role 
change because the whole purpose of a role change is to change someone's 
rights.  If someone was an admin and were demoted for questionable behavior, 
leaving their policies in the admin priority seems like a mistake.  At the same 
time, re-prioritizing the rules for role changes could cause drastic behavioral 
changes in the network.  If a regular user's policies are all suddenly promoted 
to admin priority, no one will be able 

[openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Russell Bryant
Greetings,

A while back I mentioned that we would revisit the potential deprecation
of nova-network in Icehouse after the icehouse-2 milestone.  The time
has come.  :-)

First, let me recap my high level view of the blockers to deprecating
nova-network in favor of Neutron:

  - Feature parity
- The biggest gap here has been nova-network's multi-host mode.
  Neutron needs some sort of HA for l3 agents, as well as the
  ability to run in a mode that enables a single tenant's traffic
  to be actively handled by multiple nodes.

  - Testing / Quality parity
- Neutron needs to reach testing and quality parity in CI.  This
  includes running the full tempest suite, for example.  For all
  tests run against nova with nova-network that are applicable, they
  need to be run against Neutron, as well.  All of these jobs should
  have comparable or better reliability than the ones with
  nova-network.

  - Production-ready open source components
- nova-network provides basic, but usable in production networking
  based purely on open source components.  Neutron must have
  production-ready options based purely on open source components,
  as well, that provides comparable or better performance and
  reliability.

First, I would like to say thank you to those in the Neutron team that
have worked hard to make progress in various areas.  While there has
been good progress, we're not quite there on achieving these items.  As
a result, nova-network will *not* be marked deprecated in Icehouse.  We
will revisit this question again in a future release.  I'll leave it to
the Neutron team to comment further on the likelihood of meeting these
goals in the Juno development cycle.

Regarding nova-network, I would like to make some changes.  We froze
development on nova-network in advance of its deprecation.
Unfortunately, this process has taken longer than anyone thought or
hoped.  This has had some negative consequences on the nova-network code
(such as [1]).

Effective immediately, I would like to unfreeze nova-network
development.  What this really means:

  - We will no longer skip nova-network when making general
architectural improvements to the rest of the code.  An example
of playing catch-up in nova-network is [2].

  - We will accept new features, evaluated on a case by case basis,
just like any other Nova feature.  However, we are explicitly
*not* interested in features that widen the parity gaps between
nova-network and Neutron.

  - While we will accept incremental features to nova-network, we
are *not* interested in increasing the scope of nova-network
to include support of any SDN controller.  We leave that as
something exclusive to Neutron.

I firmly believe that Neutron is the future of networking for OpenStack.
 We just need to loosen up nova-network to move it along to ease some
pressure and solve some problems as we continue down this transition.

Thanks,

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024052.html
[2] https://blueprints.launchpad.net/nova/+spec/nova-network-objects

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Matt Riedemann



On 1/29/2014 10:47 AM, Russell Bryant wrote:

Greetings,

A while back I mentioned that we would revisit the potential deprecation
of nova-network in Icehouse after the icehouse-2 milestone.  The time
has come.  :-)

First, let me recap my high level view of the blockers to deprecating
nova-network in favor of Neutron:

   - Feature parity
 - The biggest gap here has been nova-network's multi-host mode.
   Neutron needs some sort of HA for l3 agents, as well as the
   ability to run in a mode that enables a single tenant's traffic
   to be actively handled by multiple nodes.

   - Testing / Quality parity
 - Neutron needs to reach testing and quality parity in CI.  This
   includes running the full tempest suite, for example.  For all
   tests run against nova with nova-network that are applicable, they
   need to be run against Neutron, as well.  All of these jobs should
   have comparable or better reliability than the ones with
   nova-network.

   - Production-ready open source components
 - nova-network provides basic, but usable in production networking
   based purely on open source components.  Neutron must have
   production-ready options based purely on open source components,
   as well, that provides comparable or better performance and
   reliability.

First, I would like to say thank you to those in the Neutron team that
have worked hard to make progress in various areas.  While there has
been good progress, we're not quite there on achieving these items.  As
a result, nova-network will *not* be marked deprecated in Icehouse.  We
will revisit this question again in a future release.  I'll leave it to
the Neutron team to comment further on the likelihood of meeting these
goals in the Juno development cycle.

Regarding nova-network, I would like to make some changes.  We froze
development on nova-network in advance of its deprecation.
Unfortunately, this process has taken longer than anyone thought or
hoped.  This has had some negative consequences on the nova-network code
(such as [1]).

Effective immediately, I would like to unfreeze nova-network
development.  What this really means:

   - We will no longer skip nova-network when making general
 architectural improvements to the rest of the code.  An example
 of playing catch-up in nova-network is [2].

   - We will accept new features, evaluated on a case by case basis,
 just like any other Nova feature.  However, we are explicitly
 *not* interested in features that widen the parity gaps between
 nova-network and Neutron.

   - While we will accept incremental features to nova-network, we
 are *not* interested in increasing the scope of nova-network
 to include support of any SDN controller.  We leave that as
 something exclusive to Neutron.

I firmly believe that Neutron is the future of networking for OpenStack.
  We just need to loosen up nova-network to move it along to ease some
pressure and solve some problems as we continue down this transition.

Thanks,

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024052.html
[2] https://blueprints.launchpad.net/nova/+spec/nova-network-objects



Timely thread.  I was just going through nova/neutron-related blueprints 
and patches yesterday for Icehouse and noted these as something I think 
we definitely need as pre-reqs before going all-in with neutron:


https://blueprints.launchpad.net/neutron/+spec/instance-nw-info-api
https://bugs.launchpad.net/nova/+bug/1255594
https://bugs.launchpad.net/nova/+bug/1258620

There are patches up for the two bugs, but they need some work.

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Augmenting openstack_dashboard settings and possible horizon bug

2014-01-29 Thread Timur Sufiev
Thanks to Ana Krivokapic's comment in
https://bugs.launchpad.net/horizon/+bug/1271151 I've found a typo in my
code, so it's not Horizon's bug but mine. So, the original approach I used
for augmenting openstack_dashboard settings (which is described in the
initial letter of this thread) works fine, and it's still more flexible
than a new Horizon's mechanism for adding separate dashboards settings, so
we'll stick to our approach. Just to keep you informed :).

Also, the thing that bothered me most when I thought about how Murano could
use this new Horizon config facility is changing the DATABASES parameter
(the muranodashboard app needs it to be set) - and more generally, resolving
conflicts when different dashboards change some common parameters (such as
DATABASES or SESSION_BACKEND) that affect all Django applications. The current
algorithm for determining which dashboard sets the actual value of a
parameter such as DATABASES seems unreliable to me, in the sense that it can
break other dashboards that failed to set this parameter to the value they
needed.
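
For reference, a minimal sketch of what a per-dashboard settings snippet under
the new pluggable mechanism might look like; the file name and setting names
here are assumptions based on the review mentioned later in this thread, not a
tested configuration:

# openstack_dashboard/enabled/_50_murano.py -- hypothetical example of a
# per-dashboard settings fragment (names are assumptions, not verified):
DASHBOARD = 'muranodashboard'               # slug of the dashboard to enable
ADD_INSTALLED_APPS = ['muranodashboard']    # extra Django apps to install
DISABLED = False

Cross-cutting settings such as DATABASES would still have to be reconciled
somewhere central, which is exactly the conflict described above.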


On Thu, Jan 16, 2014 at 1:57 PM, Timur Sufiev tsuf...@mirantis.com wrote:

 Radomir,

 it looks interesting indeed. I think Murano could use it in case several
 additional parameters were added. I will submit a patch with my ideas a bit
 later.

 One thing that seemed tricky to me in your patchset is determining which
 dashboard will actually be the default one, but I have yet no clue on how
 it could be made simpler using pluggable architecture.


 On Wed, Jan 15, 2014 at 6:57 PM, Radomir Dopieralski 
 openst...@sheep.art.pl wrote:

 On 15/01/14 15:30, Timur Sufiev wrote:
  Recently I've decided to fix situation with Murano's dashboard and move
  all Murano-specific django settings into a separate file (previously
  they were appended to
  /usr/share/openstack-dashboard/openstack_dashboard/settings.py). But, as
  I knew, /etc/openstack_dashboard/local_settings.py is for customization
  by admins and is distro-specific also - so I couldn't use it for
  Murano's dashboard customization.

 [snip]

  2. What is the sensible approach for customizing settings for some
  Horizon's dashboard in that case?

 We recently added a way for dashboards to have (some) of their
 configuration provided in separate files, maybe that would be
 helpful for Murano?

 The patch is https://review.openstack.org/#/c/56367/

 We can add more settings that can be changed, we just have to know what
 is needed.

 --
 Radomir Dopieralski


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Timur Sufiev




-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Dan Smith
 Effective immediately, I would like to unfreeze nova-network
 development.

I fully support this plan, while also agreeing that Neutron is the
future of networking for OpenStack. As we have seen with recent
performance-related gate failures, we cannot continue to ignore
nova-network while the rest of the system moves forward.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-29 Thread Ben Nemec
 

Okay, I think you've convinced me. Specific comments below. 

-Ben 

On 2014-01-29 07:05, Doug Hellmann wrote: 

 On Tue, Jan 28, 2014 at 8:47 PM, Ben Nemec openst...@nemebean.com wrote:
 
 On 2014-01-27 11:42, Doug Hellmann wrote: 
 
 We have a blueprint open for separating translated log messages into 
 different domains so the translation team can prioritize them differently 
 (focusing on errors and warnings before debug messages, for example) [1]. 
 Some concerns were raised related to the review [2], and I would like to 
 address those in this thread and see if we can reach consensus about how to 
 proceed. 
 The implementation in [2] provides a set of new marker functions similar to 
 _(), one for each log level (we have _LE, _LW, _LI, _LD, etc.). These would be 
 used in conjunction with _(), and reserved for log messages. Exceptions, API 
 messages, and other user-facing messages all would still be marked for 
 translation with _() and would (I assume) receive the highest priority work 
 from the translation team. 
 When the string extraction CI job is updated, we will have one main catalog 
 for each app or library, and additional catalogs for the log levels. Those 
 show up in transifex separately, but will be named in a way that they are 
 obviously related. Each translation team will be able to decide, based on the 
 requirements of their users, how to set priorities for translating the 
 different catalogs. 
 Existing strings being sent to the log and marked with _() will be removed 
 from the main catalog and moved to the appropriate log-level-specific catalog 
 when their marker function is changed. My understanding is that transifex is 
 smart enough to recognize the same string from more than one source, and to 
 suggest previous translations when it sees the same text. This should make it 
 easier for the translation teams to catch up by reusing the translations 
 they have already done, in the new catalogs. 
 One concern that was raised was the need to mark all of the log messages by 
 hand. I investigated using extraction patterns like LOG.debug( and 
 LOG.info(, but because of the way the translation actually works internally 
 we cannot do that. There are a few related reasons. 
 In other applications, the function _() translates a string at the point 
 where it is invoked, and returns a new string object. OpenStack has a 
 requirement that messages be translated multiple times, whether in the API or 
 the LOG (there is already support for logging in more than one language, to 
 different log files). This requirement means we delay the translation 
 operation until right before the string is output, at which time we know the 
 target language. We could update the log functions to create Message objects 
 dynamically, except... 
 Each app or library that uses the translation code will need its own domain 
 for the message catalogs. We get around that right now by not translating 
 many messages from the libraries, but that's obviously not what we want long 
 term (we at least want exceptions translated). If we had a special version of 
 a logger in oslo.log that knew how to create Message objects for the format 
 strings used in logging (the first argument to LOG.debug for example), it 
 would also have to know what translation domain to use so the proper catalog 
 could be loaded. The wrapper functions defined in the patch [2] include this 
 information, and can be updated to be application or library specific when 
 oslo.log eventually becomes its own library. 
 Further, as part of moving the logging code from oslo-incubator to oslo.log, 
 and making our logging something we can use from other OpenStack libraries, 
 we are trying to change the implementation of the logging code so it is no 
 longer necessary to create loggers with our special wrapper function. That 
 would mean that oslo.log will be a library for *configuring* logging, but the 
 actual log calls can be handled with Python's standard library, eliminating a 
 dependency between new libraries and oslo.log. (This is a longer, and 
 separate, discussion, but I mention it here as backround. We don't want to 
 change the API of the logger in oslo.log because we don't want to be using it 
 directly in the first place.) 
 Another concern raised was the use of a prefix _L for these functions, since 
 it ties the priority definitions to logs. I chose that prefix as an 
 explicit indication that these *are* just for logs. I am not associating any 
 actual priority with them. The translators want us to move the log messages 
 out of the main catalog. Having them all in separate catalogs is a refinement 
 that gives them what they want -- some translators don't care about log 
 messages at all, some only care about errors, etc. We decided that the 
 translators should set priorities, and we would make that possible by 
 separating the catalogs into logical groups. Everything marked with _() will 
 still go into the main catalog, 
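
As a concrete illustration of the marker functions being discussed, a minimal
sketch; the import path and helper names are assumptions based on this thread
(the helpers live in oslo-incubator's gettextutils in the proposed patch), not
verified code:

import logging

# Assumed import location for the i18n helpers and the proposed _L* markers.
from nova.openstack.common.gettextutils import _, _LI, _LE

LOG = logging.getLogger(__name__)

def attach_volume(volume_id, available):
    # Log-only strings use the per-level markers and are extracted into the
    # level-specific catalogs (which translators may deprioritize).
    LOG.info(_LI("Attaching volume %s"), volume_id)
    if not available:
        LOG.error(_LE("Could not attach volume %s"), volume_id)
        # User-facing text keeps plain _() and stays in the main catalog.
        raise RuntimeError(_("Volume %s could not be attached") % volume_id)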

Re: [openstack-dev] [savanna] How to handle diverging EDP job configuration settings

2014-01-29 Thread Andrew Lazarev
I like the idea of the edp. prefix.

Andrew.
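
As a concrete illustration, a hedged sketch of what a Java action's
job_configs could look like if the edp. prefix is adopted (key names follow
the edp.java.* suggestion in this thread; the values are made up):

# Hypothetical job_configs for a Java action under the edp. prefix proposal.
# The edp.* entries would be stripped out and handled by Savanna itself
# rather than being rendered into the Oozie workflow.
job_configs = {
    'configs': {
        'edp.java.main_class': 'org.apache.hadoop.examples.WordCount',
        'edp.java.java_opts': '-Xmx512m',
        'mapred.reduce.tasks': '1',   # ordinary Hadoop config, passed through
    },
    'params': {},                     # unused for Java actions
    'args': ['swift://container/input', 'swift://container/output'],
}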


On Wed, Jan 29, 2014 at 6:23 AM, Trevor McKay tmc...@redhat.com wrote:

 So, assuming we go forward with this, the followup question is whether
 or not to move main_class and java_opts for Java actions into
 edp.java.main_class and edp.java.java_opts configs.

 I think yes.

 Best,

 Trevor

 On Wed, 2014-01-29 at 09:15 -0500, Trevor McKay wrote:
  On Wed, 2014-01-29 at 14:35 +0400, Alexander Ignatov wrote:
   Thank you for bringing this up, Trevor.
  
   EDP gets more diverse and it's time to change its model.
   I totally agree with your proposal, but one minor comment.
   Instead of savanna. prefix in job_configs wouldn't it be better to
 make it
   as edp.? I think savanna. is too more wide word for this.
 
  +1, brilliant. EDP is perfect.  I was worried about the scope of
  savanna. too.
 
   And one more bureaucratic thing... I see you already started
 implementing it [1],
   and it is named and goes as new EDP workflow [2]. I think new bluprint
 should be
   created for this feature to track all code changes as well as docs
 updates.
   Docs I mean public Savanna docs about EDP, rest api docs and samples.
 
  Absolutely, I can make it new blueprint.  Thanks.
 
   [1] https://review.openstack.org/#/c/69712
   [2]
 https://blueprints.launchpad.net/openstack/?searchtext=edp-oozie-streaming-mapreduce
  
   Regards,
   Alexander Ignatov
  
  
  
   On 28 Jan 2014, at 20:47, Trevor McKay tmc...@redhat.com wrote:
  
Hello all,
   
In our first pass at EDP, the model for job settings was very
 consistent
across all of our job types. The execution-time settings fit into
 this
(superset) structure:
   
job_configs = {'configs': {}, # config settings for oozie and hadoop
 'params': {},  # substitution values for Pig/Hive
 'args': []}# script args (Pig and Java actions)
   
But we have some things that don't fit (and probably more in the
future):
   
1) Java jobs have 'main_class' and 'java_opts' settings
  Currently these are handled as additional fields added to the
structure above.  These were the first to diverge.
   
2) Streaming MapReduce (anticipated) requires mapper and reducer
settings (different than the mapred..class settings for
non-streaming MapReduce)
   
Problems caused by adding fields

The job_configs structure above is stored in the database. Each time
 we
add a field to the structure above at the level of configs, params,
 and
args, we force a change to the database tables, a migration script
 and a
change to the JSON validation for the REST api.
   
We also cause a change for python-savannaclient and potentially other
clients.
   
This kind of change seems bad.
   
Proposal: Borrow a page from Oozie and add savanna. configs
-
I would like to fit divergent job settings into the structure we
 already
have.  One way to do this is to leverage the 'configs' dictionary.
  This
dictionary primarily contains settings for hadoop, but there are a
number of oozie.xxx settings that are passed to oozie as configs or
set by oozie for the benefit of running apps.
   
What if we allow savanna. settings to be added to configs?  If we
 do
that, any and all special configuration settings for specific job
 types
or subtypes can be handled with no database changes and no api
 changes.
   
Downside

Currently, all 'configs' are rendered in the generated oozie
 workflow.
The savanna. settings would be stripped out and processed by
 Savanna,
thereby changing that behavior a bit (maybe not a big deal)
   
We would also be mixing savanna. configs with config_hints for
 jobs,
so users would potentially see savanna. settings mixed with
 oozie
and hadoop settings.  Again, maybe not a big deal, but it might blur
 the
lines a little bit.  Personally, I'm okay with this.
   
Slightly different
--
We could also add a 'savanna-configs': {} element to job_configs to
keep the configuration spaces separate.
   
But, now we would have 'savanna-configs' (or another name),
 'configs',
'params', and 'args'.  Really? Just how many different types of
 values
can we come up with? :)
   
I lean away from this approach.
   
Related: breaking up the superset
-
   
It is also the case that not every job type has every value type.
   
             Configs   Params   Args
 Hive            Y        Y       N
 Pig             Y        Y       Y
 MapReduce       Y        N       N
 Java            Y        N       Y
   
So do we make that explicit in the docs and enforce it in the api
 with
errors?
   
Thoughts? I'm sure there are some :)
   
Best,
   
Trevor
   
   
   
   
  

[openstack-dev] OpenStack python clients libraries release process

2014-01-29 Thread Tiago Mello
Hey there,

Could someone clarify how the release process for Python clients like
novaclient and glanceclient works? How can we add more features, and how
are the target releases set, etc.?

Any documentation, or any comment is appreciated.

Thanks!

Tiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-29 Thread Doug Hellmann
On Wed, Jan 29, 2014 at 11:52 AM, Ben Nemec openst...@nemebean.com wrote:

  Okay, I think you've convinced me.  Specific comments below.

 -Ben

 On 2014-01-29 07:05, Doug Hellmann wrote:


 On Tue, Jan 28, 2014 at 8:47 PM, Ben Nemec openst...@nemebean.com wrote:

   On 2014-01-27 11:42, Doug Hellmann wrote:

  We have a blueprint open for separating translated log messages into
 different domains so the translation team can prioritize them differently
 (focusing on errors and warnings before debug messages, for example) [1].
 Some concerns were raised related to the review [2], and I would like to
 address those in this thread and see if we can reach consensus about how to
 proceed.
 The implementation in [2] provides a set of new marker functions similar
 to _(), one for each log level (we have _LE, _LW, _LI, _LD, etc.). These
 would be used in conjunction with _(), and reserved for log messages.
 Exceptions, API messages, and other user-facing messages all would still be
 marked for translation with _() and would (I assume) receive the highest
 priority work from the translation team.
 When the string extraction CI job is updated, we will have one main
 catalog for each app or library, and additional catalogs for the log
 levels. Those show up in transifex separately, but will be named in a way
 that they are obviously related. Each translation team will be able to
 decide, based on the requirements of their users, how to set priorities for
 translating the different catalogs.
 Existing strings being sent to the log and marked with _() will be
 removed from the main catalog and moved to the appropriate
 log-level-specific catalog when their marker function is changed. My
 understanding is that transifex is smart enough to recognize the same
 string from more than one source, and to suggest previous translations when
 it sees the same text. This should make it easier for the translation teams
 to catch up by reusing the translations they have already done, in the
 new catalogs.
 One concern that was raised was the need to mark all of the log messages
 by hand. I investigated using extraction patterns like LOG.debug( and
 LOG.info(, but because of the way the translation actually works
 internally we cannot do that. There are a few related reasons.
 In other applications, the function _() translates a string at the point
 where it is invoked, and returns a new string object. OpenStack has a
 requirement that messages be translated multiple times, whether in the API
 or the LOG (there is already support for logging in more than one language,
 to different log files). This requirement means we delay the translation
 operation until right before the string is output, at which time we know
 the target language. We could update the log functions to create Message
 objects dynamically, except...
 Each app or library that uses the translation code will need its own
 domain for the message catalogs. We get around that right now by not
 translating many messages from the libraries, but that's obviously not what
 we want long term (we at least want exceptions translated). If we had a
 special version of a logger in oslo.log that knew how to create Message
 objects for the format strings used in logging (the first argument to
 LOG.debug for example), it would also have to know what translation domain
 to use so the proper catalog could be loaded. The wrapper functions defined
 in the patch [2] include this information, and can be updated to be
 application or library specific when oslo.log eventually becomes its own
 library.
 Further, as part of moving the logging code from oslo-incubator to
 oslo.log, and making our logging something we can use from other OpenStack
 libraries, we are trying to change the implementation of the logging code
 so it is no longer necessary to create loggers with our special wrapper
 function. That would mean that oslo.log will be a library for *configuring*
 logging, but the actual log calls can be handled with Python's standard
 library, eliminating a dependency between new libraries and oslo.log. (This
 is a longer, and separate, discussion, but I mention it here as backround.
 We don't want to change the API of the logger in oslo.log because we don't
 want to be using it directly in the first place.)
 Another concern raised was the use of a prefix _L for these functions,
 since it ties the priority definitions to logs. I chose that prefix as an
 explicit indication that these *are* just for logs. I am not associating any
 actual priority with them. The translators want us to move the log messages
 out of the main catalog. Having them all in separate catalogs is a
 refinement that gives them what they want -- some translators don't care
 about log messages at all, some only care about errors, etc. We decided
 that the translators should set priorities, and we would make that possible
 by separating the catalogs into logical groups. Everything marked with _()
 will still go into the 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Irena Berezovsky
Hi Robert,
Please see inline, I'll try to post my understanding.


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 6:03 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya Dasu (sadasu); OpenStack 
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Irena,

I'm now even more confused. I must have missed something. See inline

Thanks,
Robert

On 1/29/14 10:19 AM, Irena Berezovsky ire...@mellanox.com wrote:

Hi Robert,
I think that I can go with Bob's suggestion, but think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys are saying that implementing 
binding:profile in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that non-ML2 plugin can use it as well.
[IrenaB] Basically you  are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store relevant attributes. Making it work for ML2 means not adding 
this support to PortBindingMixin.

[ROBERT] does that mean binding:profile for PCI can't be used by non-ml2 plugin?
[IrenaB] binding:profile can be used by any plugin that supports the binding 
extension. To persist the binding:profile (in the DB), a plugin should add a DB 
table for this. The PortBindingMixin does not persist the binding:profile for 
now.

Another issue that came up during the meeting is about whether or not vnic-type 
should be part of the top level binding or part of binding:profile. In other 
words, should it be defined as binding:vnic-type or binding:profile:vnic-type.
[IrenaB] As long as existing binding capable Mech Drivers will take vnic_type 
into its consideration, I guess doing it via binding:profile will introduce 
less changes all over (CLI, API). But I am not sure this reason is strong 
enough to choose this direction.
We also need one or two BPs to cover the change in the neutron 
port-create/port-show CLI/API.
[IrenaB] binding:profile is already supported, so it probably depends on 
direction with vnic_type

[ROBERT] Can you let me know where in the code binding:profile is supported? In 
portbindings_db.py, the PortBindingPort model doesn't have a column for 
binding:profile. So I guess that I must have missed it.
[IrenaB] For existing examples of supporting binding:profile in existing 
plugins, you can look at these two:
https://github.com/openstack/neutron/blob/master/neutron/plugins/mlnx/mlnx_plugin.py - line 266

https://github.com/openstack/neutron/blob/master/neutron/plugins/nec/nec_plugin.py - line 424

Regarding BPs for the CLI/API, we are planning to add vnic-type and profileid 
in the CLI, also the new keys in binding:profile. Are you saying no changes are 
needed (say display them, interpret the added cli arguments, etc), therefore no 
new BPs are needed for them?
[IrenaB] I think so. It should work by setting it on neutron port-create:
--binding:profile type=dict vnic_type=direct

Another thing is that we need to define the binding:profile dictionary.
[IrenaB] With regards to PCI SRIOV related attributes, right?

[ROBERT] 

Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Daniel P. Berrange
On Wed, Jan 29, 2014 at 11:47:07AM -0500, Russell Bryant wrote:
 Greetings,
 
 A while back I mentioned that we would revisit the potential deprecation
 of nova-network in Icehouse after the icehouse-2 milestone.  The time
 has come.  :-)
 
 First, let me recap my high level view of the blockers to deprecating
 nova-network in favor of Neutron:
 
   - Feature parity
 - The biggest gap here has been nova-network's multi-host mode.
   Neutron needs some sort of HA for l3 agents, as well as the
   ability to run in a mode that enables a single tenant's traffic
   to be actively handled by multiple nodes.
 
   - Testing / Quality parity
 - Neutron needs to reach testing and quality parity in CI.  This
   includes running the full tempest suite, for example.  For all
   tests run against nova with nova-network that are applicable, they
   need to be run against Neutron, as well.  All of these jobs should
   have comparable or better reliability than the ones with
   nova-network.
 
   - Production-ready open source components
 - nova-network provides basic, but usable in production networking
   based purely on open source components.  Neutron must have
   production-ready options based purely on open source components,
   as well, that provides comparable or better performance and
   reliability.

What, no mention of providing an automated upgrade path ? Given how
we go to great lengths to enable continuous deployment with automated
upgrade paths, I'd really expect to see something to deal with migrating
people from nova-network to neutron with existing tenants unaffected.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Russell Bryant
On 01/29/2014 12:27 PM, Daniel P. Berrange wrote:
 On Wed, Jan 29, 2014 at 11:47:07AM -0500, Russell Bryant wrote:
 Greetings,

 A while back I mentioned that we would revisit the potential deprecation
 of nova-network in Icehouse after the icehouse-2 milestone.  The time
 has come.  :-)

 First, let me recap my high level view of the blockers to deprecating
 nova-network in favor of Neutron:

   - Feature parity
 - The biggest gap here has been nova-network's multi-host mode.
   Neutron needs some sort of HA for l3 agents, as well as the
   ability to run in a mode that enables a single tenant's traffic
   to be actively handled by multiple nodes.

   - Testing / Quality parity
 - Neutron needs to reach testing and quality parity in CI.  This
   includes running the full tempest suite, for example.  For all
   tests run against nova with nova-network that are applicable, they
   need to be run against Neutron, as well.  All of these jobs should
   have comparable or better reliability than the ones with
   nova-network.

   - Production-ready open source components
 - nova-network provides basic, but usable in production networking
   based purely on open source components.  Neutron must have
   production-ready options based purely on open source components,
   as well, that provides comparable or better performance and
   reliability.
 
 What, no mention of providing an automated upgrade path ? Given how
 we go to great lengths to enable continuous deployment with automated
 upgrade paths, I'd really expect to see something to deal with migrating
 people from nova-network to neutron with existing tenants unaffected.

That's a good point.  This is actually a very sticky situation.  We have
an upgrade path already, which is why I didn't mention it.  It's not
really great though, so it's worth further discussion.  The path is roughly:

1) Deploy a parallel nova install that uses Neutron, but shares all
other services with the existing Nova that uses nova-network.
(Keystone, glance, cinder, etc)

2) Spawn new instances in the new Nova.

3) For any instances that you want to migrate over to Neutron, snapshot
them to glance, and then re-spawn them in the new Nova.

This is the only plan that I've heard that we *know* should work for all
deployment variations.  I've seen very little effort go into
investigating or documenting any more advanced upgrade paths.

The other upgrade piece is some sort of data migration.  There are some
bits of data, such as security group definitions, that we should be able
to automatically export from nova and import into neutron.  I don't
think anyone has worked on that, either.
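
For what it's worth, a rough sketch of what such an export/import could look
like, assuming pre-authenticated python-novaclient and python-neutronclient
objects; this is purely illustrative, not an existing or tested tool:

def migrate_security_groups(nova, neutron):
    # nova: authenticated python-novaclient Client
    # neutron: authenticated python-neutronclient Client
    # Illustrative sketch only; rule field mapping and error handling would
    # need real work (duplicates, default groups, group-to-group rules).
    for group in nova.security_groups.list():
        created = neutron.create_security_group(
            {'security_group': {'name': group.name,
                                'description': group.description}})
        group_id = created['security_group']['id']
        for rule in group.rules:
            neutron.create_security_group_rule(
                {'security_group_rule': {
                    'security_group_id': group_id,
                    'direction': 'ingress',  # nova-network rules are ingress only
                    'protocol': rule.get('ip_protocol'),
                    'port_range_min': rule.get('from_port'),
                    'port_range_max': rule.get('to_port'),
                    'remote_ip_prefix': rule.get('ip_range', {}).get('cidr'),
                }})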

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Daniel P. Berrange
On Wed, Jan 29, 2014 at 12:39:23PM -0500, Russell Bryant wrote:
 On 01/29/2014 12:27 PM, Daniel P. Berrange wrote:
  On Wed, Jan 29, 2014 at 11:47:07AM -0500, Russell Bryant wrote:
  Greetings,
 
  A while back I mentioned that we would revisit the potential deprecation
  of nova-network in Icehouse after the icehouse-2 milestone.  The time
  has come.  :-)
 
  First, let me recap my high level view of the blockers to deprecating
  nova-network in favor of Neutron:
 
- Feature parity
  - The biggest gap here has been nova-network's multi-host mode.
Neutron needs some sort of HA for l3 agents, as well as the
ability to run in a mode that enables a single tenant's traffic
to be actively handled by multiple nodes.
 
- Testing / Quality parity
  - Neutron needs to reach testing and quality parity in CI.  This
includes running the full tempest suite, for example.  For all
tests run against nova with nova-network that are applicable, they
need to be run against Neutron, as well.  All of these jobs should
have comparable or better reliability than the ones with
nova-network.
 
- Production-ready open source components
  - nova-network provides basic, but usable in production networking
based purely on open source components.  Neutron must have
production-ready options based purely on open source components,
as well, that provides comparable or better performance and
reliability.
  
  What, no mention of providing an automated upgrade path ? Given how
  we go to great lengths to enable continuous deployment with automated
  upgrade paths, I'd really expect to see something to deal with migrating
  people from nova-network to neutron with existing tenants unaffected.
 
 That's a good point.  This is actually a very sticky situation.  We have
 a upgrade path already, which is why I didn't mention it.  It's not
 really great though, so it's worth further discussion.  The path is roughly:
 
 1) Deploy a parallel nova install that uses Neutron, but shares all
 other services with the existing Nova that uses nova-network.
 (Keystone, glance, cinder, etc)
 
 2) Spawn new instances in the new Nova.
 
 3) For any instances that you want to migrate over to Neutron, snapshot
 them to glance, and then re-spawn them in the new Nova.
 
 This is the only plan that I've heard that we *know* should work for all
 deployment variations.  I've seen very little effort go into
 investigating or documenting any more advanced upgrade paths.
 
 The other upgrade piece is some sort of data migration.  There are some
 bits of data, such as security group definitions, that we should be able
 to automatically export from nova and import into neutron.  I don't
 think anyone has worked on that, either.

I was thinking of an upgrade path more akin to what users got when we
removed the nova volume driver, in favour of cinder.

  https://wiki.openstack.org/wiki/MigrateToCinder

ie no guest visible downtime / interuption of service, nor running of
multiple Nova instances in parallel.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack python clients libraries release process

2014-01-29 Thread Thierry Carrez
Tiago Mello wrote:
 Could someone clarify how python clients like novaclient, glanceclient
 release process works? How can we add more features and how the target
 releases are set... etc...

Libraries are released as-needed, and new features are continuously
pushed to them. They use semver versioning, which reflects library API
compatibility.

You can propose blueprints for new features in libraries, and/or propose
a code change to the corresponding code repository.

Anything special you had in mind ?


-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] icehouse-1 test disk images setup

2014-01-29 Thread James Slagle
On Tue, Jan 28, 2014 at 9:05 PM, Robert Collins
robe...@robertcollins.net wrote:
 So, thoughts...

 I do see this as useful, but I don't see an all-in-one overcloud as
 useful for developers of tuskar (or pretty much anything). It's just
 not realistic enough.

True.

I do, however, see the all-in-one as useful for testing that your deployment
infrastructure is working: PXE is set up right, iSCSI is going to
work, etc. Networking, on some level, is working. No need to start two
VMs to see it fail twice.


 I'm pro having downloadable images, long as we have rights to do that
 for whatever OS we're based on. Ideally we'd have images for all the
 OSes we support (except those with restrictions like RHEL and SLES).

 Your instructions at the moment need to be refactored to support
 devtest_testenv and other recent improvements :)

Indeed, my goal would be to work what I have into devtest, not the
other way around.

 BTW the MTU note you have will break folks actual environments unless
 they have jumbo frames on everything- I *really wouldn't do that* -
 instead work on bug https://bugs.launchpad.net/neutron/+bug/1270646

Good point, I wasn't actually sure if what I was seeing was that bug
or not.  I'll look into it.

Thanks, I appreciate the feedback.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Gary Kotton


On 1/29/14 7:39 PM, Russell Bryant rbry...@redhat.com wrote:

On 01/29/2014 12:27 PM, Daniel P. Berrange wrote:
 On Wed, Jan 29, 2014 at 11:47:07AM -0500, Russell Bryant wrote:
 Greetings,

 A while back I mentioned that we would revisit the potential
deprecation
 of nova-network in Icehouse after the icehouse-2 milestone.  The time
 has come.  :-)

 First, let me recap my high level view of the blockers to deprecating
 nova-network in favor of Neutron:

   - Feature parity
 - The biggest gap here has been nova-network's multi-host mode.
   Neutron needs some sort of HA for l3 agents, as well as the
   ability to run in a mode that enables a single tenant's traffic
   to be actively handled by multiple nodes.

   - Testing / Quality parity
 - Neutron needs to reach testing and quality parity in CI.  This
   includes running the full tempest suite, for example.  For all
   tests run against nova with nova-network that are applicable,
they
   need to be run against Neutron, as well.  All of these jobs
should
   have comparable or better reliability than the ones with
   nova-network.

   - Production-ready open source components
 - nova-network provides basic, but usable in production networking
   based purely on open source components.  Neutron must have
   production-ready options based purely on open source components,
   as well, that provides comparable or better performance and
   reliability.
 
 What, no mention of providing an automated upgrade path ? Given how
 we go to great lengths to enable continuous deployment with automated
 upgrade paths, I'd really expect to see something to deal with migrating
 people from nova-network to neutron with existing tenants unaffected.

I was thinking for the upgrade process that we could leverage the port
attach/detach BP done by Dan Smith a while ago. This has libvirt support
and there are patches pending approval for Xen and Vmware. Not sure about
the other drivers.

If the guest can deal with the fact that the nova port is being removed
and a new logical port is added then we may have the chance of no down
time. If this works then we may need to add support for nova-network port
detach and we may have a seamless upgrade path.


That's a good point.  This is actually a very sticky situation.  We have
an upgrade path already, which is why I didn't mention it.  It's not
really great though, so it's worth further discussion.  The path is
roughly:

1) Deploy a parallel nova install that uses Neutron, but shares all
other services with the existing Nova that uses nova-network.
(Keystone, glance, cinder, etc)

2) Spawn new instances in the new Nova.

3) For any instances that you want to migrate over to Neutron, snapshot
them to glance, and then re-spawn them in the new Nova.

This is the only plan that I've heard that we *know* should work for all
deployment variations.  I've seen very little effort go into
investigating or documenting any more advanced upgrade paths.

The other upgrade piece is some sort of data migration.  There are some
bits of data, such as security group definitions, that we should be able
to automatically export from nova and import into neutron.  I don't
think anyone has worked on that, either.

-- 
Russell Bryant



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Russell Bryant
On 01/29/2014 12:45 PM, Daniel P. Berrange wrote:
 I was thinking of an upgrade path more akin to what users got when we
 removed the nova volume driver, in favour of cinder.
 
   https://wiki.openstack.org/wiki/MigrateToCinder
 
 ie no guest visible downtime / interuption of service, nor running of
 multiple Nova instances in parallel.

Yeah, I'd love to see something like that.  I would really like to see
more effort in this area.  I honestly haven't been thinking about it
much in a while personally, because the rest of the "make it work" gaps
are still a work in progress.

There's a bit of a bigger set of questions here, too ...

Should nova-network *ever* go away?  Or will there always just be a
choice between the basic/legacy nova-network option, and the new fancy
SDN-enabling Neutron option?  Is the Neutron team's time better spent on
OpenDaylight integration than the existing open source plugins?

Depending on the answers to those questions, the non-visible no-downtime
migration path may be a less important issue.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Oslo Context and SecurityContext

2014-01-29 Thread Georgy Okrokvertskhov
Hi Angus,

Let me share my view on this. I think we need to distinguish implementation
and semantics. Context means that you provide information to a method, but
the method will not keep or store this information. The method does not own
the context but can modify it. A context does not explicitly define what
information will be used by the method. A context is usually used when you
keep some state and this state is shared between methods.

Parameters, in contrast, are part of the method definition and strictly
define that the method requires them.

So semantically there is a difference between context and parameters, while
the implementation can be the same.

Lets take this example:
https://review.openstack.org/#/c/69308/5/solum/objects/plan.py

There is a class Plan which defines a model for a specific entity. The method
definition def create(self, context): shows us that there are no required
parameters, but the method's result might be affected by the context, and the
context itself might be affected by the method. It does not say what the
behavior will be or what the resulting plan will be, but even with an empty
context it will return something meaningful. It is also reasonable to expect
mostly the same result for different contexts, such as a RequestContext in an
API call and an ExecutionContext in worker code when a worker executes this
plan.
Now I am reading the test
https://review.openstack.org/#/c/69308/5/solum/tests/objects/test_plan.py
(test case test_check_data). What I can figure out from it is that Plan
actually stores all values from the context inside the plan object as its
attributes and just adds an additional attribute, id.
There is a question: is a plan just a copy of the context with an id? Why do
we need it? What are the functions of a plan and what does it consist of?

If a plan needs parameters and a context, it's really just a container for
parameters; let's use **kwargs or something more meaningful which clearly
defines how to use Plan and what its methods are.
We want to define a data model for a Plan entity. Let's clearly express what
data is mandatory for a plan object, e.g. Plan.create(project_id, user_id,
raw_data, context).
Let's keep the data model clear and well defined instead of blurring it with
meaningless contexts.
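A minimal sketch of the kind of signature being argued for here (illustrative
only, not Solum's actual model code):

    class Plan(object):
        @classmethod
        def create(cls, project_id, user_id, raw_content, context=None):
            # the data the plan is made of is explicit and mandatory ...
            plan = cls()
            plan.project_id = project_id
            plan.user_id = user_id
            plan.raw_content = raw_content
            # ... while the context only carries ambient request information
            # (who is asking, auth, etc.) and is never copied into the model
            plan.save(context)
            return plan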




On Tue, Jan 28, 2014 at 3:26 PM, Angus Salkeld
angus.salk...@rackspace.comwrote:

 On 28/01/14 07:13 -0800, Georgy Okrokvertskhov wrote:

 Hi,

 From my experience a context is usually bigger than just storage for user
 credentials and the specifics of a request. A context usually defines an area
 within which the called method should act. Probably the class name
 RequestContext is a bit confusing. The actual goal of the context should be
 defined by the service design. If you have a lot of independent components
 you will probably need to pass a lot of parameters to specify the specifics
 of the work, so it is just more convenient to have a dictionary-like object
 which carries all the necessary contextual information. This context can be
 used to pass information between different components of the service.


 I think we should be using the nova style objects for passing data
 between solum services (they can be serialized for rpc). But you hit
 on a point - this context needs to be called something else, it is
 not a RequestContext (we need the RequestContext regardless).
 I'd also suggest we don't build it until we know we
 need it (I am just suspicious as the other openstack services I
 have worked on don't have such a thing). Normally we just pass
 arguments to methods.

 How about we keep things simple and don't get
 into designing a Boeing; we can always add these things later if
 they are really needed. I get the feeling we are being distracted from
 our core problem of getting this service functional by nice-to-haves.

 -Angus





 On Mon, Jan 27, 2014 at 4:27 PM, Angus Salkeld
 angus.salk...@rackspace.comwrote:

  On 27/01/14 22:53 +, Adrian Otto wrote:

  On Jan 27, 2014, at 2:39 PM, Paul Montgomery 
 paul.montgom...@rackspace.com
 wrote:

  Solum community,


 I created several different approaches for community consideration
 regarding Solum context, logging and data confidentiality.  Two of
 these
 approaches are documented here:

 https://wiki.openstack.org/wiki/Solum/Logging

 A) Plain Oslo Log/Config/Context is in the Example of Oslo Log and
 Oslo
 Context section.

 B) A hybrid Oslo Log/Config/Context but SecurityContext inherits the
 RequestContext class and adds some confidentiality functions is in the
 Example of Oslo Log and Oslo Context Combined with SecurityContext
 section.

 None of this code is production ready or tested by any means.  Please
 just
 examine the general architecture before I polish too much.

 I hope that this is enough information for us to agree on a path A or
 B.
 I honestly am not tied to either path very tightly but it is time that
 we
 reach a final decision on this topic IMO.

 Thoughts?


 I have a strong preference for using the SecurityContext approach. The
 main reason for my 

Re: [openstack-dev] OpenStack python clients libraries release process

2014-01-29 Thread Tiago Mello
Hi Thierry,

On Wed, Jan 29, 2014 at 3:46 PM, Thierry Carrez thie...@openstack.orgwrote:

 Tiago Mello wrote:
  Could someone clarify how python clients like novaclient, glanceclient
  release process works? How can we add more features and how the target
  releases are set... etc...

 Libraries are released as-needed, and new features are continuously
 pushed to them. They use semver versioning, which reflects library API
 compatibility.

 You can propose blueprints for new features in libraries, and/or propose
 a code change to the corresponding code repository.

 Anything special you had in mind ?


Thanks for the answer! We are working on
https://blueprints.launchpad.net/python-glanceclient/+spec/cross-service-request-id
and we were wondering what the timing is for getting a new version of the
client released and bumping the version in nova's requirements.txt...

Tiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Dan Smith
 I was thinking for the upgrade process that we could leverage the port
 attach/detach BP done by Dan Smith a while ago. This has libvirt support
 and there are patches pending approval for Xen and Vmware. Not sure about
 the other drivers.
 
 If the guest can deal with the fact that the nova port is being removed
 and a new logical port is added then we may have the chance of no down
 time. If this works then we may need to add support for nova-network port
 detach and we may have a seamless upgrade path.

That's a good thought for sure. However, it would be much better if we
could avoid literally detaching the VIF from the guest and instead just
swap where the other end is plugged in. With virtual infrastructure,
that should be pretty easy to do, unless you're switching actual L2
networks. If you're doing the latter, however, you might as well reboot
the guest, I think.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova style cleanups with associated hacking check addition

2014-01-29 Thread Andreas Jaeger
On 01/29/2014 07:22 PM, Joe Gordon wrote:
 On Tue, Jan 28, 2014 at 4:45 AM, John Garbutt j...@johngarbutt.com wrote:
 On 27 January 2014 10:10, Daniel P. Berrange berra...@redhat.com wrote:
 On Fri, Jan 24, 2014 at 11:42:54AM -0500, Joe Gordon wrote:
 On Fri, Jan 24, 2014 at 7:24 AM, Daniel P. Berrange 
 berra...@redhat.comwrote:

 Periodically I've seen people submit big coding style cleanups to Nova
 code. These are typically all good ideas / beneficial, however, I have
 rarely (perhaps even never?) seen the changes accompanied by new hacking
 check rules.

 The problem with not having a hacking check added *in the same commit*
 as the cleanup is two-fold

  - No guarantee that the cleanup has actually fixed all violations
in the codebase. Have to trust the thoroughness of the submitter
or do a manual code analysis yourself as reviewer. Both suffer
from human error.

  - Future patches will almost certainly re-introduce the same style
problems again and again and again and again and again and again
and again and again and again I could go on :-)

 I don't mean to pick on one particular person, since it isn't their
 fault that reviewers have rarely/never encouraged people to write
 hacking rules, but to show one example The following recent change
 updates all the nova config parameter declarations cfg.XXXOpt(...) to
 ensure that the help text was consistently styled:

   https://review.openstack.org/#/c/67647/

 One of the things it did was to ensure that the help text always started
 with a capital letter. Some of the other things it did were more subtle
 and hard to automate a check for, but an 'initial capital letter' rule
 is really straightforward.

 By updating nova/hacking/checks.py to add a new rule for this, it was
 found that there were another 9 files which had incorrect capitalization
 of their config parameter help. So the hacking rule addition clearly
 demonstrates its value here.

 This sounds like a rule that we should add to
 https://github.com/openstack-dev/hacking.git.

 Yep, it could well be added there. I figure rules added to Nova can
 be upstreamed to the shared module periodically.

 +1

 I worry about diverging, but I guess thats always going to happen here.

 I will concede that documentation about /how/ to write hacking checks
 is not entirely awesome. My current best advice is to look at how some
 of the existing hacking checks are done - find one that is checking
 something that is similar to what you need and adapt it. There are a
 handful of Nova specific rules in nova/hacking/checks.py, and quite a
 few examples in the shared repo
 https://github.com/openstack-dev/hacking.git
 see the file hacking/core.py. There's some very minimal documentation
 about variables your hacking check method can receive as input
 parameters
 https://github.com/jcrocholl/pep8/blob/master/docs/developer.rst


 In summary, if you are doing a global coding style cleanup in Nova for
 something which isn't already validated by pep8 checks, then I strongly
 encourage additions to nova/hacking/checks.py to validate the cleanup
 correctness. Obviously with some style cleanups, it will be too complex
 to write logic rules to reliably validate code, so this isn't a code
 review point that must be applied 100% of the time. Reasonable personal
 judgement should apply. I will try comment on any style cleanups I see
 where I think it is pratical to write a hacking check.


 I would take this even further, I don't think we should accept any style
 cleanup patches that can be enforced with a hacking rule and aren't.

 IMHO that would mostly just serve to discourage people from submitting
 style cleanup patches because it is too much stick, not enough carrot.
 Realistically for some types of style cleanup, the effort involved in
 writing a style checker that does not have unacceptable false positives
 will be too high to justify. So I think a pragmatic approach to enforcement
 is more suitable.

 +1

 I would love to enforce it 100% of the time, but sometimes its hard to
 write the rules, but still a useful cleanup. Lets see how it goes I
 guess.
 
  I am wary of adding any new style rules that have to be manually
  enforced by human reviewers; we already have a lot of other items to
 cover in a review.

Based on the feedback I got on IRC, I wrote such a style guide a
few days ago for the config help strings:

https://review.openstack.org/#/c/69381

Even if you do not notice these during a review, it's easy for somebody
else to clean up - and then it's important to have a style guide that
explains how things should be written. I'd like to have consistency
across OpenStack on how these help strings are written.
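For reference, the kind of hacking check discussed upthread for the "help
text starts with a capital letter" rule could look roughly like this, in the
style of nova/hacking/checks.py (the check number, regex and factory hookup
are illustrative, not necessarily what was merged):

    import re

    cfg_help_re = re.compile(r"cfg\.[A-Za-z]+Opt\(.*help=['\"](?P<first>\S)")

    def capital_cfg_help(logical_line):
        """N3xx - config option help text should start with a capital."""
        match = cfg_help_re.search(logical_line)
        if match and not match.group('first').isupper():
            yield (0, "N3xx: config option help text should start with a "
                      "capital letter")

    def factory(register):
        # picked up via the local-check-factory hook in tox.ini's flake8 section
        register(capital_cfg_help)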

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 

Re: [openstack-dev] [trove] scheduled tasks redux

2014-01-29 Thread Greg Hill

On Jan 23, 2014, at 3:41 PM, Michael Basnight mbasni...@gmail.com wrote:

 
 Will we be doing more complex things than every day at some time? ie, does 
 the user base see value in configuring backups every 12th day of every other 
 month? I think this is easy to write the schedule code, but i fear that it 
 will be hard to build a smarter scheduler that would only allow X tasks in a 
 given hour for a window. If we limit to daily at X time, it seems easier to 
 estimate how a given window for backup will look for now and into the future 
 given a constant user base :P Plz note, I think its viable to schedule more 
 than 1 per day, in cron * 0,12 or * */12.

 
 Will we be using this as a single task service as well? So if we assume the 
 first paragraph is true, that tasks are scheduled daily, single task services 
 would be scheduled once, and could use the same crontab fields. But at this 
 point, we only really care about the minute, hour, and _frequency_, which is 
 daily or once. Feel free to add 12 scheduled tasks for every 2 hours if you 
 want to back it up that often, or a single task as * 0/2. From the backend, i 
 see that as 12 tasks created, one for each 2 hours.

I hadn't really considered anything but repeated use, so that's a good point.  
I'll have to think on that more.  I do think that the frequency won't only be 
daily or once.  It's not uncommon to have weekly or monthly maintenance 
tasks, which, as I understood it, was something we wanted to cover with this 
spec.  I'll do some research to see if there is a suitable standard format 
besides cron that works well for both repeated and scheduled singular tasks.
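If cron syntax does end up being the format, evaluating both daily and weekly
schedules is cheap with an existing library such as croniter; a quick sketch
(purely illustrative, not part of the spec):

    from datetime import datetime
    from croniter import croniter

    now = datetime(2014, 1, 29, 12, 0)
    daily = croniter('0 2 * * *', now)    # every day at 02:00
    weekly = croniter('0 2 * * 0', now)   # every Sunday at 02:00

    print(daily.get_next(datetime))   # -> 2014-01-30 02:00:00
    print(weekly.get_next(datetime))  # -> 2014-02-02 02:00:00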

 But this doesnt take into mind windows, when you say you want a cron style 
 2pm backup, thats really just during some available window. Would it make 
 more sense for an operator to configure a time window, and then let users 
 choose a slot within a time window (and say there are a finite number of 
 slots in a time window). The slotting would be done behind the scenes and a 
 user would only be able to select a window, and if the slots are all taken, 
 it wont be shown in the get available time windows. the available time 
 windows could be smart, in that, your avail time window _could be_ based on 
 the location of the hardware your vm is sitting on (or some other rule…). 
 Think network saturation if everyone on host A is doing a backup to swift.

I don't think having windows will solve as much as we hope it will, and it's a 
tricky problem to get right, as the number of tasks that can run per window is 
highly variable.  I'll have to gather my thoughts on this more and post another 
message when I've got something more to say than "my gut says this doesn't feel 
right".

Greg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Joe Gordon
On Wed, Jan 29, 2014 at 2:12 PM, Joe Gordon joe.gord...@gmail.com wrote:
 Projects that have set the testtools line in test-requirements.txt to:

  testtools>=0.9.32,<0.9.35


Will not be able to pass their unit tests.

Note: due to https://launchpad.net/bugs/1274251 the auto-sync of global
requirements isn't working currently.


 On Wed, Jan 29, 2014 at 7:23 AM, Davanum Srinivas dava...@gmail.com wrote:
 Robert,

 Here's a merge request for subunit
 https://code.launchpad.net/~subunit/subunit/trunk/+merge/203723

 -- dims

 On Wed, Jan 29, 2014 at 6:51 AM, Sean Dague s...@dague.net wrote:
 On 01/29/2014 06:24 AM, Sylvain Bauza wrote:
 Le 29/01/2014 12:07, Ivan Melnikov a écrit :
 I also filed a bug for taskflow, feel free to add your projects there if
 it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050



 Climate is also impacted, we can at least declare a recheck with this
 bug number.
 -Sylvain

 Right, but until a testtools fix is released, it won't pass. So please
 no rechecks until we have a new testtools from Robert that fixes things.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Nova] [oslo] [Ceilometer] about notifications : huge and may be non secure

2014-01-29 Thread Sandy Walsh


On 01/29/2014 11:50 AM, Swann Croiset wrote:
 Hi stackers,
 
 I would like to share my thoughts here about notifications.
 
 I'm working [1] on Heat notifications and I noticed that:
 1/ Heat uses its context to store 'password'
 2/ Heat and Nova store 'auth_token' in the context too. I didn't check for
 other projects, except for neutron, which doesn't store auth_token.
 
 This information is consequently sent through their notifications.
 
 I guess we consider the broker to be secured, and network communications
 with the services too, BUT
 shouldn't we delete this data anyway, since IIRC it is never used
 (at least by ceilometer)? That would also
 put the security question to rest.
 
 My other concern is the size (in KB) of notifications: 70% is the auth_token
 (with PKI)!
 We can reduce the volume drastically and easily by deleting this data
 from notifications.
 I know that RabbitMQ (or others) is very robust and can handle this
 volume, but when I see this kind of improvement, I'm tempted to do it.
 
 I see an easy way to fix that in oslo-incubator [2]:
 delete the keys from the context if present, config driven, with password and
 auth_token by default.
 
 thoughts?

Yeah, there was a bunch of work in nova to eliminate these sorts of
fields from the notification payload. They should certainly be
eliminated from other services as well. Ideally, as you mention, at the
oslo layer.

We assume the notifications can be large, but they shouldn't be that large.

The CADF work that IBM is doing to provide versioning and schemas to
notifications will go a long way here. They have provisions for marking
fields as private. I think this is the right way to go, but we may have
to do some hack fixes in the short term.
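The short-term fix Swann sketches is indeed small; something along these lines
at the point where the notifier serializes the request context, with the key
list driven by a config option (names are illustrative):

    scrub_keys = ['auth_token', 'password']  # would come from a CONF option

    def scrub_context(context_dict):
        # drop sensitive / bulky keys before the context goes on the wire
        return dict((k, v) for k, v in context_dict.items()
                    if k not in scrub_keys)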

-S



 [1]
 https://blueprints.launchpad.net/ceilometer/+spec/handle-heat-notifications
 [2]
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/notifier/rpc_notifier.py
  
 and others
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-01-29 Thread Dolph Mathews
CC'd Adam Young

Several of us were very much in favor of this around the Folsom release,
but we settled on domains as a solution to the most immediate use case
(isolation between flat collections of tenants, without impacting the rest
of openstack). I don't think it has been discussed much in the keystone
community since, but it's still a concept that I'm very much interested in,
as it's much more powerful than domains when it comes to issues like
granular delegation.


On Tue, Jan 28, 2014 at 12:35 PM, Vishvananda Ishaya
vishvana...@gmail.comwrote:

 Hi Everyone,

 I apologize for the obtuse title, but there isn't a better succinct term
 to describe what is needed. OpenStack has no support for multiple owners of
 objects. This means that a variety of private cloud use cases are simply
 not supported. Specifically, objects in the system can only be managed on
 the tenant level or globally.

 The key use case here is to delegate administration rights for a group of
 tenants to a specific user/role. There is something in Keystone called a
 “domain” which supports part of this functionality, but without support
 from all of the projects, this concept is pretty useless.

 In IRC today I had a brief discussion about how we could address this. I
 have put some details and a straw man up here:

 https://wiki.openstack.org/wiki/HierarchicalMultitenancy

 I would like to discuss this strawman and organize a group of people to
 get actual work done by having an irc meeting this Friday at 1600UTC. I
 know this time is probably a bit tough for Europe, so if we decide we need
 a regular meeting to discuss progress then we can vote on a better time for
 this meeting.

 https://wiki.openstack.org/wiki/Meetings#Hierarchical_Multitenancy_Meeting

 Please note that this is going to be an active team that produces code. We
 will *NOT* spend a lot of time debating approaches, and instead focus on
 making something that works and learning as we go. The output of this team
 will be a MultiTenant devstack install that actually works, so that we can
 ensure the features we are adding to each project work together.

 Vish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Ben Nemec

On 2014-01-29 13:12, Joe Gordon wrote:

Projects that have set the testtools line in test-requirements.txt to:

  testtools>=0.9.32,<0.9.35



Was this supposed to be Projects that have _not_ set...?



Will not be able to pass there unit tests.

On Wed, Jan 29, 2014 at 7:23 AM, Davanum Srinivas dava...@gmail.com 
wrote:

Robert,

Here's a merge request for subunit
https://code.launchpad.net/~subunit/subunit/trunk/+merge/203723

-- dims

On Wed, Jan 29, 2014 at 6:51 AM, Sean Dague s...@dague.net wrote:

On 01/29/2014 06:24 AM, Sylvain Bauza wrote:

Le 29/01/2014 12:07, Ivan Melnikov a écrit :
I also filed a bug for taskflow, feel free to add your projects 
there if
it's affected, too: 
https://bugs.launchpad.net/taskflow/+bug/1274050





Climate is also impacted, we can at least declare a recheck with 
this

bug number.
-Sylvain


Right, but until a testtools fix is released, it won't pass. So 
please
no rechecks until we have a new testtools from Robert that fixes 
things.


-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Joe Gordon
On Wed, Jan 29, 2014 at 11:30 AM, Ben Nemec openst...@nemebean.com wrote:
 On 2014-01-29 13:12, Joe Gordon wrote:

 Projects that have set the testtools line in test-requirements.txt to:

    testtools>=0.9.32,<0.9.35


 Was this supposed to be Projects that have _not_ set...?

eep, yes. Although a new version of subunit just came out that should
fix the problem. So the temporary pinning shouldn't be necessary.
-infra is working on getting the new subunit into our mirror.




 Will not be able to pass there unit tests.

 On Wed, Jan 29, 2014 at 7:23 AM, Davanum Srinivas dava...@gmail.com
 wrote:

 Robert,

 Here's a merge request for subunit
 https://code.launchpad.net/~subunit/subunit/trunk/+merge/203723

 -- dims

 On Wed, Jan 29, 2014 at 6:51 AM, Sean Dague s...@dague.net wrote:

 On 01/29/2014 06:24 AM, Sylvain Bauza wrote:

 Le 29/01/2014 12:07, Ivan Melnikov a écrit :

 I also filed a bug for taskflow, feel free to add your projects there
 if
 it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050



 Climate is also impacted, we can at least declare a recheck with this
 bug number.
 -Sylvain


 Right, but until a testtools fix is released, it won't pass. So please
 no rechecks until we have a new testtools from Robert that fixes things.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Lots of gating failures because of testtools

2014-01-29 Thread Robert Collins
Awesome - thanks dims; sadly I didn't see this (you proposed to merge
trunk *into* your fix branch, not the other way around) until I read
this list thread - I got up and saw the IRC pings first, so fixed it asap.

Anyhow - subunit 0.0.18 fixes this, and will work with older
testtools, so it should be a simple pin of >= 0.0.18 and away we go.

I've raised the issue of backwards compat on the testtools-dev list,
proposing that we make it a 1.0.0 release and become super strict on
backwards compat to guard against such things in future.

-Rob

On 30 January 2014 01:23, Davanum Srinivas dava...@gmail.com wrote:
 Robert,

 Here's a merge request for subunit
 https://code.launchpad.net/~subunit/subunit/trunk/+merge/203723

 -- dims

 On Wed, Jan 29, 2014 at 6:51 AM, Sean Dague s...@dague.net wrote:
 On 01/29/2014 06:24 AM, Sylvain Bauza wrote:
 Le 29/01/2014 12:07, Ivan Melnikov a écrit :
 I also filed a bug for taskflow, feel free to add your projects there if
 it's affected, too: https://bugs.launchpad.net/taskflow/+bug/1274050



 Climate is also impacted, we can at least declare a recheck with this
 bug number.
 -Sylvain

 Right, but until a testtools fix is released, it won't pass. So please
 no rechecks until we have a new testtools from Robert that fixes things.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Tim Bell

I'm not seeing a path to migrate 1,000s of production VMs from nova-network to 
Neutron.

Can someone describe how this can be done without downtime for the VMs ?

Can we build an approach for the cases below in a single OpenStack production 
cloud:

1. Existing VMs to carry on running without downtime (and no new features)
2. Existing VMs to choose a window for reconfiguration for Neutron (to get the 
new function)
3. New VMs to take advantage of Neutron features such as LBaaS
- 
Tim

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: 29 January 2014 19:04
 To: Daniel P. Berrange
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and 
 beyond
 
 On 01/29/2014 12:45 PM, Daniel P. Berrange wrote:
  I was thinking of an upgrade path more akin to what users got when we
  removed the nova volume driver, in favour of cinder.
 
https://wiki.openstack.org/wiki/MigrateToCinder
 
  ie no guest visible downtime / interuption of service, nor running of
  multiple Nova instances in parallel.
 
 Yeah, I'd love to see something like that.  I would really like to see more 
 effort in this area.  I honestly haven't been thinking about it
 much in a while personally, because the rest of the make it work gaps have 
 still been a work in progress.
 
 There's a bit of a bigger set of questions here, too ...
 
 Should nova-network *ever* go away?  Or will there always just be a choice 
 between the basic/legacy nova-network option, and the
 new fancy SDN-enabling Neutron option?  Is the Neutron team's time better 
 spent on OpenDaylight integration than the existing
 open source plugins?
 
 Depending on the answers to those questions, the non-visible no-downtime 
 migration path may be a less important issue.
 
 --
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

2014-01-29 Thread Sumit Naiksatam
I believe the current recommendation is also to not vote -1 automatically, see:
https://review.openstack.org/#/c/63478

On Wed, Jan 29, 2014 at 4:41 AM, Rossella Sblendido
rosse...@midokura.com wrote:
 Hi Trinath,

 you can find more info about third party testing here [1]
 Every new driver or plugin is required to provide a testing system that will
 test new patches and post
 a +1/-1 to Gerrit.
 There were meetings organized by Kyle to talk about how to set up the system
 [2]
 It will probably help you if you read the logs of the meeting.

 cheers,

 Rossella

 [1]
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Testing_Requirements
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/021882.html


 On Wed, Jan 29, 2014 at 7:50 AM, trinath.soman...@freescale.com
 trinath.soman...@freescale.com wrote:

 Hi Akihiro-

 What kind of third party testing is required?

 I have written the driver, unit test case and checked the driver with
 tempest testing.

 Do I need to check with any other third party testing?

 Kindly help me in this regard.

 --
 Trinath Somanchi - B39208
 trinath.soman...@freescale.com | extn: 4048

 -Original Message-
 From: Akihiro Motoki [mailto:mot...@da.jp.nec.com]
 Sent: Friday, January 24, 2014 6:41 PM
 To: openstack-dev@lists.openstack.org
 Cc: kmest...@cisco.com
 Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron (ML2)

 Hi Trinath,

 Jenkins is not directly related to proposing a new code.
 The process to contribute the code is described in the links Andreas
 pointed. There is no difference even if you are writing a new ML2 mech
 driver.

 In addition to the above, Neutron now requires a third party testing for
 all new/existing plugins and drivers [1].
 Are you talking about third party testing for your ML2 mechanism driver
 when you say Jenkins?

 Both two things can be done in parallel, but you need to make your third
 party testing ready before merging your code into the master repository.

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019219.html

 Thanks,
 Akihiro

 (2014/01/24 21:42), trinath.soman...@freescale.com wrote:
  Hi Andreas -
 
  Thanks you for the reply.. It helped me understand the ground work
  required.
 
  But then, I'm writing a new Mechanism driver (FSL SDN Mechanism
  driver) for ML2.
 
  For submitting new file sets, can I go with GIT or require Jenkins for
  the adding the new code for review.
 
  Kindly help me in this regard.
 
  --
  Trinath Somanchi - B39208
  trinath.soman...@freescale.com | extn: 4048
 
  -Original Message-
  From: Andreas Jaeger [mailto:a...@suse.com]
  Sent: Friday, January 24, 2014 4:54 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: Kyle Mestery (kmestery)
  Subject: Re: [openstack-dev] [Neutron]Contributing code to Neutron
  (ML2)
 
  On 01/24/2014 12:10 PM, trinath.soman...@freescale.com wrote:
  Hi-
 
 
 
  Need support for ways to contribute code to Neutron regarding the ML2
  Mechanism drivers.
 
 
 
  I have installed Jenkins and created account in github and launchpad.
 
 
 
  Kindly guide me on
 
 
 
  [1] How to configure Jenkins to submit the code for review?
 
  [2] What is the process involved in pushing the code base to the main
  stream for icehouse release?
 
 
 
  Kindly please help me understand the same..
 
  Please read this wiki page completely, it explains the workflow we use.
 
  https://wiki.openstack.org/wiki/GerritWorkflow
 
  Please also read the general intro at
  https://wiki.openstack.org/wiki/HowToContribute
 
  Btw. for submitting patches, you do not need a local Jenkins running,
 
  Welcome to OpenStack, Kyle!
 
  Andreas
  --
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
 SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
  GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG
  Nürnberg)
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272
  A126
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___

[openstack-dev] Barbican Incubation Review

2014-01-29 Thread Jarret Raim

All,

Barbican, the key management service for OpenStack, requested incubation
before the holidays. After the initial review, there were several issues
brought up by various individuals that needed to be resolved
pre-incubation. At this point, we have completed the work on those tasks.
I'd like to request a final review before a vote on our incubation at the
next TC meeting, which should be on 2/4.

The list of tasks and their status is documented as part of our incubation
request, which is on the openstack wiki:
https://wiki.openstack.org/wiki/Barbican/Incubation


The only outstanding PR on the list is our devstack integration. I'd love
it if we could get some eyes on that patch. Things seem to be working for
us in our testing, but it'd be great to get some feedback from -infra to
make sure we aren't going to cause any headaches for the gate. The review
is here: 
https://review.openstack.org/#/c/69962


During our initial request, there was a conversation about our being a
mostly Rackspace driven effort. While it was decided that diversifying the
team isn't a requirement for incubation, it is for integration and we've
made some headway on that effort. At this point, we have external
contributors from eVault, HP and RedHat that have submitted code and / or
blueprints for the system. There are other folks that have expressed
interest in contributing, so I'm hopeful that our team will continue to
diversify over the course of our incubation period.

Our general page is here:
https://wiki.openstack.org/wiki/Barbican

Our GitHub documentation:
https://github.com/cloudkeep/barbican
https://github.com/cloudkeep/barbican/wiki

We are currently working on moving this documentation to the OpenStack
standard docbook format. We have a ways to go on this front, but the
staging area for that work can be found here:
http://docs.cloudkeep.io/barbican-devguide/content/preface.html


The team hangs out in the #openstack-barbican channel on freenode. If you
want to talk, stop on by.


Thanks,

Jarret Raim


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Barbican] nova-cert information

2014-01-29 Thread Jarret Raim
We are currently hammering out the blueprints for asymmetric key support
(both for escrow and generation). Happy to talk more about your use case if
you are interested. We can do it over email or you can hop into
#openstack-barbican on Freenode and talk to the team there.


Thanks,
Jarret

From:  Vishvananda Ishaya vishvana...@gmail.com
Reply-To:  OpenStack List openstack-dev@lists.openstack.org
Date:  Wednesday, January 29, 2014 at 12:12 AM
To:  OpenStack List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Nova] [Barbican] nova-cert information

I would not want to use nova-cert for something like this. It was a minimum
viable option for supporting the ec2 upload bundle use case which requires
certs to work. We also managed to get it working for our hacky vpn solution
long ago, but it is definitely not a best practices case. Perhaps certs
could be added to barbican (if it doesn't support them already?)

https://github.com/stackforge/barbican

Vish

On Jan 23, 2014, at 5:00 PM, Miller, Mark M (EB SW Cloud - RD - Corvallis)
mark.m.mil...@hp.com wrote:

 Hello,
  
 I am trying to locate information about what services the nova-cert service
 provides and whether or not it can be used to distribute certificates in a
 cloud. After several hours of web surfing I have found very little
 information. I am writing in hopes that someone can point me to a tutorial
 that describes what this service can and cannot do.
  
 Thank you in advance,
  
 Mark
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] New developer is coming

2014-01-29 Thread Jarret Raim
Thanks Adam. 

The guys working on this are hdegikli and reaperhulk. The easiest way to
find them is in #openstack-barbican. I'll cc reaperhulk on this so you have
his email. I don't think I have hdegikli's email.


Thanks,
Jarret


From:  Александра Безбородова bezborodov...@gmail.com
Reply-To:  OpenStack List openstack-dev@lists.openstack.org
Date:  Tuesday, January 28, 2014 at 9:41 PM
To:  OpenStack List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Barbican] New developer is coming

Thx, Adam!


2014-01-29 Adam Young ayo...@redhat.com
 On 01/28/2014 09:55 PM, Александра Безбородова wrote:
 Hi all, 
 I want to participate in Barbican project. I'm interested in this bp
 https://blueprints.launchpad.net/barbican/+spec/support-rsa-key-store-generat
 ion
 Who can answer some questions about it?
 
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 Get on Freenode and ask in #openstack-barbican
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] team meeting Friday 31 Jan 1400 UTC

2014-01-29 Thread Victor Sergeyev
Hello All.

I also have a proposal to discuss the current graduation status of the oslo.db
code.
This code is going to move into a separate library (soon, I hope),
so it would be nice to look at its state/issues/and-so-on in order to speed
up the graduation process and avoid any confusion in the future.
See blueprint [1] for more details

[1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib

Thanks,
Victor.


On Tue, Jan 28, 2014 at 6:50 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Tue, Jan 28, 2014 at 3:55 AM, Flavio Percoco fla...@redhat.com wrote:

 On 27/01/14 14:57 -0500, Doug Hellmann wrote:

 The Oslo team has a few items we need to discuss, so I'm calling a
 meeting for
 this Friday, 31 Jan. Our normal slot is 1400 UTC Friday in
 #openstack-meeting.
 The agenda [1] includes 2 items (so far):

 1. log translations (see the other thread started today)
 2. parallelizing our tests


 We should also discuss the process to pull out packages from oslo. I
 mean, check if anything has changed in terms of stability since the
 summit and what our plans are for moving forward with this.


 I added an item to discuss managing graduating code that is still in the
 incubator (the rpc discussion this week brought it to the front of my
 mind). Is that the same thing you mean?

 Doug




 Cheers,
 flaper


  If you have anything else you would like to discuss, please add it to the
 agenda.

 See you Friday!
 Doug


 [1] https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_
 Next_Meeting


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Vishvananda Ishaya

On Jan 29, 2014, at 10:45 AM, Dan Smith d...@danplanet.com wrote:

 I was thinking for the upgrade process that we could leverage the port
 attach/detach BP done by Dan Smith a while ago. This has libvirt support
 and there are patches pending approval for Xen and Vmware. Not sure about
 the other drivers.
 
 If the guest can deal with the fact that the nova port is being removed
 and a new logical port is added then we may have the chance of no down
 time. If this works then we may need to add support for nova-network port
 detach and we may have a seamless upgrade path.
 
 That's a good thought for sure. However, it would be much better if we
 could avoid literally detaching the VIF from the guest and instead just
 swap where the other end is plugged into. With virtual infrastructure,
 that should be pretty easy to do, unless you're switching actual L2
 networks. If you're doing the latter, however, you might as well reboot
 the guest I think.
 
 —Dan

I did a very brief investigation into this about six months ago moving
nova-network over to l3-agent using ovs. I think it should be possible, but
I got a little bit stuck on a technical point and then got involved with
other things. I see the process as going something like this:

* Migrate network data from nova into neutron
* Turn off nova-network on the node
* Run the neutron l3 agent and trigger it to create the required bridges etc.
* Use brctl/ovs-vsctl to move the vnic from the nova bridge to the 
appropriate ovs bridge

Because the ovs bridge and the nova bridge are plugged in to the same physical
device, traffic flows appropriately.
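The bridge re-plumbing step is essentially two commands; a rough sketch
(interface and bridge names are examples, and a real migration would also
have to set the OVS port's external-ids/iface-id so the agents recognise it):

    import subprocess

    def move_vif(vif='vnet0', linux_bridge='br100', ovs_bridge='br-int'):
        # detach the vnic from the nova-network linux bridge ...
        subprocess.check_call(['brctl', 'delif', linux_bridge, vif])
        # ... and plug it into the OVS integration bridge used by neutron
        subprocess.check_call(['ovs-vsctl', 'add-port', ovs_bridge, vif])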

There is some hand waving above about how to trigger the l3 agent to create the
ports and security groups properly, but I think conceptually it could work.

I tried to do the above via devstack but it isn't quite trivial to get devstack
to install and run neutron without deleting everything and starting over. Even
though this doesn't seem particularly hard, I just ran out of time to focus on it.

Vish
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Kashyap Chamarthy
On 01/29/2014 11:34 PM, Russell Bryant wrote:

[. . .]

 There's a bit of a bigger set of questions here, too ...
 
 Should nova-network *ever* go away?  Or will there always just be a
 choice between the basic/legacy nova-network option, and the new fancy
 SDN-enabling Neutron option?  Is the Neutron team's time better spent on
 OpenDaylight integration than the existing open source plugins?

I'll keep in mind that you called these a 'bigger set of questions'; that
said -- I hope your question above will not be misinterpreted as implying that
open source plugins can be treated as second-class citizens.


If I'm saying something silly, don't hesitate to call me out.

 
 Depending on the answers to those questions, the non-visible no-downtime
 migration path may be a less important issue.
 


-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Russell Bryant
On 01/29/2014 04:20 PM, Kashyap Chamarthy wrote:
 On 01/29/2014 11:34 PM, Russell Bryant wrote:
 
 [. . .]
 
 There's a bit of a bigger set of questions here, too ...

 Should nova-network *ever* go away?  Or will there always just be a
 choice between the basic/legacy nova-network option, and the new fancy
 SDN-enabling Neutron option?  Is the Neutron team's time better spent on
 OpenDaylight integration than the existing open source plugins?
 
 I'll keep in mind you called these as 'bigger set of questions', that
 said -- I hope your above question will not be misinterpreted as in open
 source plugins can be treated as second-class citizens.
 
 
 If I'm saying something silly, don't hesitate to call me out.

Note that the alternative I mentioned is another open source thing.

Neutron must have a production-viable, completely open source option to
ever be considered a replacement for anything, IMO, as noted in my first
message in this thread.


 Depending on the answers to those questions, the non-visible no-downtime
 migration path may be a less important issue.

 
 


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday January 30th at 17:00UTC

2014-01-29 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, January 30th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST


-Matt Treinish 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-29 Thread Ian Wells
My proposals:

On 29 January 2014 16:43, Robert Li (baoli) ba...@cisco.com wrote:

 1. pci-flavor-attrs is configured through configuration files and will be
 available on both the controller node and the compute nodes. Can the cloud
 admin decide to add a new attribute in a running cloud? If that's
 possible, how is that done?


When nova-compute starts up, it requests the VIF attributes that the
schedulers need.  (You could have multiple schedulers; they could be in
disagreement; it picks the last answer.)  It returns pci_stats by the
selected combination of VIF attributes.

When nova-scheduler starts up, it sends an unsolicited cast of the
attributes.  nova-compute updates the attributes, clears its pci_stats and
recreates them.

If nova-scheduler receives pci_stats with incorrect attributes it discards
them.

(There is a row from nova-compute summarising devices for each unique
combination of vif_stats, including 'None' where no attribute is set.)
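
To make that concrete, here is a rough sketch of the grouping idea; the names
and structures below are my own assumptions, not code from any patch:

    # Sketch: nova-compute groups its PCI devices into one pci_stats row per
    # unique combination of the attribute values requested by the scheduler.
    from collections import defaultdict

    def build_pci_stats(devices, scheduler_attrs):
        # devices: list of dicts describing the host's PCI devices
        # scheduler_attrs: attribute names received from nova-scheduler
        counts = defaultdict(int)
        for dev in devices:
            # Devices missing an attribute fall into a 'None' bucket.
            key = tuple(dev.get(attr) for attr in scheduler_attrs)
            counts[key] += 1
        return [{'attrs': dict(zip(scheduler_attrs, key)), 'count': n}
                for key, n in counts.items()]

If the scheduler later casts a different attribute set, the compute node just
rebuilds these rows from scratch, which is the "clears its pci_stats and
recreates them" step above.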

I'm assuming here that the pci_flavor_attrs are read on startup of
nova-scheduler and could be re-read and different when nova-scheduler is
reset.  There's a relatively straightforward move from here to an API for
setting it if this turns out to be useful, but firstly I think it would be
an uncommon occurrence and secondly it's not something we should implement
now.

2. PCI flavor will be defined using the attributes in pci-flavor-attrs. A
 flavor is defined with a matching expression in the form of attr1 = val11
 [| val12 ...], [attr2 = val21 [| val22 ...]], ... And this expression is used
 to match one or more PCI stats groups until a free PCI device is located.
 In this case, both attr1 and attr2 can have multiple values, and both
 attributes need to be satisfied. Please confirm this understanding is
 correct


This looks right to me as we've discussed it, but I think we'll be wanting
something that allows a top-level OR of AND groups.  In the above example, I
can't say an Intel NIC and a Mellanox NIC are equally OK, because I can't
express (Intel AND product ID 1) OR (Mellanox AND product ID 2).  I'll leave
Yunhong to decide how the details should look, though.
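
To pin down the semantics as I read them (purely a sketch, nothing below is
from an actual patch), matching could look roughly like this:

    # Sketch: a flavor is a list of groups; each group maps an attribute to
    # its set of allowed values.  Values within an attribute are ORed,
    # attributes within a group are ANDed, and the groups are ORed.

    def group_matches(group, attrs):
        # group: e.g. {'vendor_id': {'8086'}, 'product_id': {'10ed'}}
        # attrs: the attribute values of one pci_stats row
        return all(attrs.get(name) in allowed
                   for name, allowed in group.items())

    def find_free_devices(flavor_groups, pci_stats):
        # Return the first pci_stats row with free devices that satisfies
        # any one of the flavor's groups.
        for row in pci_stats:
            if row['count'] > 0 and any(group_matches(g, row['attrs'])
                                        for g in flavor_groups):
                return row
        return None

The exact syntax is Yunhong's call, of course; this is only meant to pin down
the AND/OR semantics being discussed.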

3. I'd like to see an example that involves multiple attributes. let's say
 pci-flavor-attrs = {gpu, net-group, device_id, product_id}. I'd like to
 know how PCI stats groups are formed on compute nodes based on that, and
 how many of PCI stats groups are there? What's the reasonable guidelines
 in defining the PCI flavors.


I need to write up the document for this, and it's overdue.  Leave it with
me.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Kukura
On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:

 Another issue that came up during the meeting is about whether or not
 vnic-type should be part of the top level binding or part of
 binding:profile. In other words, should it be defined as
 binding:vnic-type or binding:profile:vnic-type.   

I'd phrase that choice as top-level attribute vs. key/value pair
within the binding:profile attribute. If we go with a new top-level
attribute, it may or may not end up being part of the portbindings
extension.

Although I've been advocating making vnic_type a key within
binding:profile (minimizing effort), it just occurred to me that
policy.json contains:

create_port:binding:profile: rule:admin_only,
get_port:binding:profile: rule:admin_only,
update_port:binding:profile: rule:admin_only,

This means that only administrative users (including nova's integration
with neutron) can read or write the binding:profile attribute by default.

But my (limited) understanding of the PCI-passthru use cases is that
normal users need to specify vnic_type because this is what determines
the NIC type that their VMs see for the port. If that is correct, then I
think this tips the balance towards vnic_type being a new top-level
attribute to which normal users have read/write access. Comments?

If I'm mistaken on the above, please ignore the rest of this email...

If vnic_type is a new top-level attribute accessible to normal users,
then I'm not sure it belongs in the portbindings extension. First,
everything else in that extension is only visible to administrative
users. Second, from the normal user's point of view, vnic_type has to do
with the type of NIC they want within their VM, not with how the port is
bound outside their VM to some underlying network segment and networking
mechanism they aren't even aware of. So we need a new extension for
vnic_type, which has the advantage of not requiring any change to
existing plugins that don't support that extension.
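
For illustration only, the attribute map of such an extension could look
something like the sketch below; the attribute name, the allowed values and
the defaults are assumptions of mine, not taken from any existing blueprint:

    # Hypothetical extension sketch -- names and values are illustrative only.
    VNIC_TYPE = 'binding:vnic_type'
    VNIC_TYPES = ['normal', 'direct', 'macvtap']

    EXTENDED_ATTRIBUTES_2_0 = {
        'ports': {
            VNIC_TYPE: {
                'allow_post': True,          # settable on port create
                'allow_put': True,           # and on port update
                'default': 'normal',
                'validate': {'type:values': VNIC_TYPES},
                'is_visible': True,          # readable by normal users
            },
        },
    }

Access control for normal users would then be tuned in policy.json rather than
inherited from the admin_only default that binding:profile has today.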

If vnic_type is a new top-level attribute in a new API extension, it
deserves its own neutron BP covering defining the extension and
implementing it in ML2. This is probably an update of Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
Implementations for other plugins could follow via separate BPs as they
choose to implement the extension.

If anything else we've been planning to put in binding:profile needs
normal user access, it could be defined in this new extension instead.
For now, I'm assuming other input data for PCI-passthru (such as the
slot info from nova) is only accessible to administrators and will go in
binding:profile. I'll submit a separate BP for generically implementing
the binding:profile attribute in ML2, as we've discussed.

This leaves us with potentially 3 separate generic neutron/Ml2 BPs
providing the infrastructure for PCI-passthru:

1) Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
2) My BP to implement binding:profile in ML2
3) Definition/implementation of binding:vif_details based on Nachi's
binding:vif_security patch, for which I could submit a BP.

-Bob


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-01-29 Thread Matt Riedemann



On Monday, January 27, 2014 7:17:27 AM, Alessandro Pilotti wrote:

On 25 Jan 2014, at 16:51 , Matt Riedemann mrie...@linux.vnet.ibm.com wrote:




On 1/24/2014 3:41 PM, Peter Pouliot wrote:

Hello OpenStack Community,

I am excited at this opportunity to make the community aware that the
Hyper-V CI infrastructure

is now up and running.  Let’s first start with some housekeeping
details.  Our Tempest logs are

publically available here: http://64.119.130.115. You will see them show
up in any

Nova Gerrit commit from this moment on.
snip


So now some questions. :)

I saw this failed on one of my nova patches [1].  It says the build succeeded 
but that the tests failed.  I talked with Alessandro about this yesterday and 
he said that's working as designed, something with how the scoring works with 
zuul?


I spoke with clarkb on infra, since we were also very puzzled by this 
behaviour. I’ve been told that when the job is non voting, it’s always reported 
as succeeded, which makes sense, although sligltly misleading.
The message in the Gerrit comment is clearly stating: Test run failed in ..m 
..s (non-voting)”, so this should be fair enough. It’d be great to have a way to get 
rid of the “Build succeded” message above.


The problem I'm having is figuring out why it failed.  I looked at the compute 
logs but didn't find any errors.  Can someone help me figure out what went 
wrong here?



The reason for the failure of this job can be found here:

http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz

Please search for (1054, Unknown column 'instances.locked_by' in 'field 
list')

In this case the job failed when “nova service-list” got called to verify 
whether the compute nodes have been properly added to the devstack instance in the 
overcloud.

During the weekend we also added a console.log to help simplify 
debugging, especially in the rare cases in which the job fails before getting 
to run tempest:

http://64.119.130.115/69047/1/console.log.gz


Let me know if this helps in tracking down your issue!

Alessandro



[1] https://review.openstack.org/#/c/69047/1

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Alex, thanks, I figured it out. Yes, the console log is helpful, and 
the failure was a real bug in my patch: it changed how the 180 migration 
was doing something, which later broke another migration running against 
your MySQL backend - so nice catch.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] Undoing a change in the alembic migrations

2014-01-29 Thread Trevor McKay
Hi Sergey,

  In https://review.openstack.org/#/c/69982/1 we are moving the
'main_class' and 'java_opts' fields for a job execution into the
job_configs['configs'] dictionary.  This means that 'main_class' and
'java_opts' don't need to be in the database anymore.

  These fields were just added in the initial version of the migration
scripts.  The README says that migrations work from icehouse. Since
this is the initial script, does that mean we can just remove references
to those fields from the db models and the script, or do we need a new
migration script (002) to erase them?
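
For reference, if a new script is the answer, I'd expect it to be something
trivial along these lines (the table name and column types are my guesses):

    # Sketch of a possible 002 migration, e.g.
    # .../versions/002_drop_main_class_java_opts.py
    revision = '002'
    down_revision = '001'

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # The values now live in job_configs['configs'], so drop the columns.
        op.drop_column('job_executions', 'main_class')
        op.drop_column('job_executions', 'java_opts')

    def downgrade():
        # Re-create the columns; previously stored values are not restored.
        op.add_column('job_executions', sa.Column('main_class', sa.Text()))
        op.add_column('job_executions', sa.Column('java_opts', sa.Text()))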

Thanks,

Trevor


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican Incubation Review

2014-01-29 Thread Justin Santa Barbara
Given the issues we continue to face with achieving stable APIs, I
hope there will be some form of formal API review before we approve
any new OpenStack APIs.  When we release an API, it should mean that
we're committing to support that API _forever_.

Glancing at the specification, I noticed some API issues that will be
hard to fix:
* the API for asymmetric keys (i.e. keys with a public and private
part) has not yet been fleshed out
* there does not appear to be support for key rotation
* I don't see metadata or tags or some other way for API consumers to
attach extra information they might need
* cypher_type is spelled in the less common way

The first two are deal-breakers IMHO for a 1.0.  #3 is a straight
extension, so could be added later, but I think it an important safety
valve in case anything else got missed.  #4 will probably cause the
most argument :-)

Everyone is looking forward to the better security that Barbican will
bring, so I think it all the more important that we avoid a rapid v2.0
and the pain that brings to everyone.  I would hope that the PTLs of
all projects that are going to offer encryption review the proposed
API to make sure that it meets their project's future requirements.

I'm presuming that this is our last opportunity for API review - if
this isn't the right occasion to bring this up, ignore me!

Justin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Undoing a change in the alembic migrations

2014-01-29 Thread Andrew Lazarev
+1 on a new migration script, just to keep the migration chain consecutive.

Andrew.


On Wed, Jan 29, 2014 at 2:17 PM, Trevor McKay tmc...@redhat.com wrote:

 Hi Sergey,

   In https://review.openstack.org/#/c/69982/1 we are moving the
 'main_class' and 'java_opts' fields for a job execution into the
 job_configs['configs'] dictionary.  This means that 'main_class' and
 'java_opts' don't need to be in the database anymore.

   These fields were just added in the initial version of the migration
 scripts.  The README says that migrations work from icehouse. Since
 this is the initial script, does that mean we can just remove references
 to those fields from the db models and the script, or do we need a new
 migration script (002) to erase them?

 Thanks,

 Trevor


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] nova-network in Icehouse and beyond

2014-01-29 Thread Kyle Mestery

On Jan 29, 2014, at 12:04 PM, Russell Bryant rbry...@redhat.com wrote:

 On 01/29/2014 12:45 PM, Daniel P. Berrange wrote:
 I was thinking of an upgrade path more akin to what users got when we
 removed the nova volume driver, in favour of cinder.
 
  https://wiki.openstack.org/wiki/MigrateToCinder
 
 i.e. no guest-visible downtime / interruption of service, nor running of
 multiple Nova instances in parallel.
 
 Yeah, I'd love to see something like that.  I would really like to see
 more effort in this area.  I honestly haven't been thinking about it
 much in a while personally, because the rest of the make it work gaps
 have still been a work in progress.
 
 There's a bit of a bigger set of questions here, too ...
 
 Should nova-network *ever* go away?  Or will there always just be a
 choice between the basic/legacy nova-network option, and the new fancy
 SDN-enabling Neutron option?  Is the Neutron team's time better spent on
 OpenDaylight integration than the existing open source plugins?
 
This point about OpenDaylight vs. existing open source plugins is something
which some of us have talked about for a while now. I’ve spent a lot of time
with the OpenDaylight team over the last 2 months, and I believe once we
get that ML2 MechanismDriver upstreamed (waiting on third party testing and
reviews [1]), perhaps we can at least remove some pressure agent-wise. The
current OpenDaylight driver doesn’t use a compute agent. And future iterations
will hopefully remove the need for an L3 agent as well, maybe even DHCP.
Since a lot of the gate issues seem to revolve around those things, my hope
is that this approach can simplify some code and lead to more stability. But we’ll
see, we’re very early here at the moment.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/69775/1

 Depending on the answers to those questions, the non-visible no-downtime
 migration path may be a less important issue.
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Irena,

With your reply, and after taking a close look at the code, I think that I 
understand it now.

Regarding the cli change:

  neutron port-create --binding:profile type=dict vnic_type=direct

following the neutron net-create --provider:physical_network as an example, 
--binding:* can be treated as unknown arguments, and they are opaquely 
transmitted to the neutron plugin for processing. I have always wondered why 
net-create help doesn't display the --provider:* arguments, and sometimes I have 
to google the syntax. After taking a look at the code, I think I kind of know 
what's going on there. I'd like to know why it's done that way, but I 
think that it will work for --binding:* in the neutron port-create commands.

Now, regarding binding:profile for SR-IOV, from your google doc it will have 
the following properties:
   - pci_slot: in the format of vendor_id:product_id:domain:bus:slot.fn.
   - pci_flavor: will be a PCI flavor name when the API is available and 
it's desirable for neutron to use it. For now, it will be a physical network 
name.
   - profileid: for 802.1qbh/802.1br
   - vnic-type: it's still debatable whether or not this property belongs 
here. I kind of second you on making it binding:vnic-type.
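
As a concrete illustration (all values below are made up), the resulting
dictionary might look like:

    # Hypothetical example values only:
    binding_profile = {
        'pci_slot': '8086:10ed:0000:08:00.2',  # vendor:product:domain:bus:slot.fn
        'pci_flavor': 'physnet1',              # a physical network name for now
        'profileid': 'port-profile-1',         # only needed for 802.1qbh/802.1br
    }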

They all seem to be neither plugin- nor MD-specific. Of course, an MD that supports 
802.1br would enforce profileid. But in terms of persisting them, I don't feel 
like they should be done in the plugin. On the other hand, the examples you 
gave me do show that these plugins are responsible for storing plugin-specific 
binding:profile in the DB. And in the case of --provider:* for neutron network, 
it's the individual plugins that persist it, and duplicate the code. Therefore, 
we may not have options other than following the existing examples.


thanks,
Robert



On 1/29/14 12:17 PM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert,
Please see inline, I’ll try to post my understanding.


From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 6:03 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi Irena,

I'm now even more confused. I must have missed something. See inline….

Thanks,
Robert

On 1/29/14 10:19 AM, Irena Berezovsky 
ire...@mellanox.com wrote:

Hi Robert,
I think that I can go with Bob’s suggestion, but think it makes sense to cover 
the vnic_type and PCI-passthru via two separate patches. Adding vnic_type will 
probably impose changes to existing Mech. Drivers while PCI-passthru is about 
introducing some pieces for new SRIOV supporting Mech. Drivers.

More comments inline

BR,
IRena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 29, 2014 4:47 PM
To: Irena Berezovsky; rkuk...@redhat.com; Sandhya 
Dasu (sadasu); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

Hi folks,

I'd like to do a recap on today's meeting, and if possible we should continue 
the discussion in this thread so that we can be more productive in tomorrow's 
meeting.

Bob suggests that we have these BPs:
One generic covering implementing binding:profile in ML2, and one specific to 
PCI-passthru, defining the vnic-type (wherever it goes) and any keys for 
binding:profile.


Irena suggests that we have three BPs:
1. generic ML2 support for binding:profile (corresponding to Bob's covering 
implementing binding:profile in ML2 ?)
2. add vnic_type support for binding Mech Drivers in ML2 plugin
3. support PCI slot via profile (corresponding to Bob's any keys for 
binding:profile ?)

Both proposals sound similar, so it's great that we are converging. I think 
that it's important that we put more details in each BP on what's exactly 
covered by it. One question I have is about where binding:profile will be 
implemented. I see that portbinding is defined/implemented under its extension 
and neutron.db. So when both of you guys talk about implementing 
binding:profile in ML2, I'm kind of confused. Please let me know what I'm 
missing here. My understanding is that a non-ML2 plugin can use it as well.
[IrenaB] Basically you are right. Currently ML2 does not inherit the DB Mixin 
for port binding. So it supports the port binding extension, but uses its own 
DB table to store relevant attributes. Making it work for ML2 means not adding 
this support to PortBindingMixin.

[ROBERT] does that mean binding:profile for PCI can't be used by non-ml2 plugin?
[IrenaB] binding:profile can be used by any plugin that supports the binding 
extension. To persist the binding:profile (in the DB), the plugin should add a DB 
table for this. The PortBindingMixin does not persist the binding:profile for 

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Li (baoli)
Hi Bob,

that's a good find. profileid as part of IEEE 802.1br needs to be in
binding:profile, and can be specified by a normal user, and later possibly
the pci_flavor. Would it be wrong to say something as in below in the
policy.json?
 create_port:binding:vnic_type: rule:admin_or_network_owner
 create_port:binding:profile:profileid: rule:admin_or_network_owner

If it's not appropriate, then I agree with you we may need another
extension. 


--Robert

On 1/29/14 4:57 PM, Robert Kukura rkuk...@redhat.com wrote:

On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:

 Another issue that came up during the meeting is about whether or not
 vnic-type should be part of the top level binding or part of
 binding:profile. In other words, should it be defined as
 binding:vnic-type or binding:profile:vnic-type.

I'd phrase that choice as top-level attribute vs. key/value pair
within the binding:profile attribute. If we go with a new top-level
attribute, it may or may not end up being part of the portbindings
extension.

Although I've been advocating making vnic_type a key within
binding:profile (minimizing effort), it just occurred to me that
policy.json contains:

create_port:binding:profile: rule:admin_only,
get_port:binding:profile: rule:admin_only,
update_port:binding:profile: rule:admin_only,

This means that only administrative users (including nova's integration
with neutron) can read or write the binding:profile attribute by default.

But my (limited) understanding of the PCI-passthru use cases is that
normal users need to specify vnic_type because this is what determines
the NIC type that their VMs see for the port. If that is correct, then I
think this tips the balance towards vnic_type being a new top-level
attribute to which normal users have read/write access. Comments?

If I'm mistaken on the above, please ignore the rest of this email...

If vnic_type is a new top-level attribute accessible to normal users,
then I'm not sure it belongs in the portbindings extension. First,
everything else in that extension is only visible to administrative
users. Second, from the normal user's point of view, vnic_type has to do
with the type of NIC they want within their VM, not with how the port is
bound outside their VM to some underlying network segment and networking
mechanism they aren't even aware of. So we need a new extension for
vnic_type, which has the advantage of not requiring any change to
existing plugins that don't support that extension.

If vnic_type is a new top-level attribute in a new API extension, it
deserves its own neutron BP covering defining the extension and
implementing it in ML2. This is probably an update of Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
Implementations for other plugins could follow via separate BPs as they
choose to implement the extension.

If anything else we've been planning to put in binding:profile needs
normal user access, it could be defined in this new extension instead.
For now, I'm assuming other input data for PCI-passthru (such as the
slot info from nova) is only accessible to administrators and will go in
binding:profile. I'll submit a separate BP for generically implementing
the binding:profile attribute in ML2, as we've discussed.

This leaves us with potentially 3 separate generic neutron/Ml2 BPs
providing the infrastructure for PCI-passthru:

1) Irena's
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
2) My BP to implement binding:profile in ML2
3) Definition/implementation of binding:vif_details based on Nachi's
binding:vif_security patch, for which I could submit a BP.

-Bob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Robert Kukura
On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
 Hi Bob,
 
 that's a good find. profileid as part of IEEE 802.1br needs to be in
 binding:profile, and can be specified by a normal user, and later possibly
 the pci_flavor. Would it be wrong to say something as in below in the
 policy.json?
  create_port:binding:vnic_type: rule:admin_or_network_owner
  create_port:binding:profile:profileid: rule:admin_or_network_owner

Maybe, but a normal user that owns a network has no visibility into the
underlying details (such as the providernet extension attributes).

It seems to me that profileid is something that only makes sense to an
administrator of the underlying cloud environment. Where would a normal
cloud user get a value to use for this?

Also, would a normal cloud user really know what pci_flavor to use?
Isn't all this kind of detail hidden from a normal user within the nova
VM flavor (or host aggregate or whatever) pre-configured by the admin?

-Bob

 
 If it's not appropriate, then I agree with you we may need another
 extension. 
 
 
 --Robert
 
 On 1/29/14 4:57 PM, Robert Kukura rkuk...@redhat.com wrote:
 
 On 01/29/2014 09:46 AM, Robert Li (baoli) wrote:

 Another issue that came up during the meeting is about whether or not
 vnic-type should be part of the top level binding or part of
 binding:profile. In other words, should it be defined as
 binding:vnic-type or binding:profile:vnic-type.

 I'd phrase that choice as top-level attribute vs. key/value pair
 within the binding:profile attribute. If we go with a new top-level
 attribute, it may or may not end up being part of the portbindings
 extension.

 Although I've been advocating making vnic_type a key within
 binding:profile (minimizing effort), it just occurred to me that
 policy.json contains:

create_port:binding:profile: rule:admin_only,
get_port:binding:profile: rule:admin_only,
update_port:binding:profile: rule:admin_only,

 This means that only administrative users (including nova's integration
 with neutron) can read or write the binding:profile attribute by default.

 But my (limited) understanding of the PCI-passthru use cases is that
 normal users need to specify vnic_type because this is what determines
 the NIC type that their VMs see for the port. If that is correct, then I
 think this tips the balance towards vnic_type being a new top-level
 attribute to which normal users have read/write access. Comments?

 If I'm mistaken on the above, please ignore the rest of this email...

 If vnic_type is a new top-level attribute accessible to normal users,
 then I'm not sure it belongs in the portbindings extension. First,
 everything else in that extension is only visible to administrative
 users. Second, from the normal user's point of view, vnic_type has to do
 with the type of NIC they want within their VM, not with how the port is
 bound outside their VM to some underlying network segment and networking
 mechanism they aren't even aware of. So we need a new extension for
 vnic_type, which has the advantage of not requiring any change to
 existing plugins that don't support that extension.

 If vnic_type is a new top-level attribute in a new API extension, it
 deserves its own neutron BP covering defining the extension and
 implementing it in ML2. This is probably an update of Irena's
 https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
 Implementations for other plugins could follow via separate BPs as they
 choose to implement the extension.

 If anything else we've been planning to put in binding:profile needs
 normal user access, it could be defined in this new extension instead.
 For now, I'm assuming other input data for PCI-passthru (such as the
 slot info from nova) is only accessible to administrators and will go in
 binding:profile. I'll submit a separate BP for generically implementing
 the binding:profile attribute in ML2, as we've discussed.

 This leaves us with potentially 3 separate generic neutron/Ml2 BPs
 providing the infrastructure for PCI-passthru:

 1) Irena's
 https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
 2) My BP to implement binding:profile in ML2
 3) Definition/implementation of binding:vif_details based on Nachi's
 binding:vif_security patch, for which I could submit a BP.

 -Bob

 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican Incubation Review

2014-01-29 Thread Anne Gentle
On Wed, Jan 29, 2014 at 2:42 PM, Jarret Raim jarret.r...@rackspace.com wrote:


 All,

 Barbican, the key management service for OpenStack, requested incubation
 before the holidays. After the initial review, there were several issues
 brought up by various individuals that needed to be resolved
 pre-incubation. At this point, we have completed the work on those tasks.
 I'd like to request a final review before a vote on our incubation at the
 next TC meeting, which should be on 2/4.

 The list of tasks and their status is documented as part of our incubation
 request, which is on the openstack wiki:
 https://wiki.openstack.org/wiki/Barbican/Incubation


 The only outstanding PR on the list is our devstack integration. I'd love
 it if we could get some eyes on that patch. Things seem to be working for
 us in our testing, but it'd be great to get some feedback from -infra to
 make sure we aren¹t going to cause any headaches for the gate. The review
 is here:
 https://review.openstack.org/#/c/69962


 During our initial request, there was a conversation about our being a
 mostly Rackspace driven effort. While it was decided that diversifying the
 team isn't a requirement for incubation, it is for integration and we've
 made some headway on that effort. At this point, we have external
 contributors from eVault, HP and RedHat that have submitted code and / or
 blueprints for the system. There are other folks that have expressed
 interest in contributing, so I'm hopeful that our team will continue to
 diversify over the course of our incubation period.

 Our general page is here:
 https://wiki.openstack.org/wiki/Barbican

 Our GitHub documentation:
 https://github.com/cloudkeep/barbican
 https://github.com/cloudkeep/barbican/wiki

 We are currently working on moving this documentation to the OpenStack
 standard docbook format. We have a ways to go on this front, but the
 staging area for that work can be found here:
 http://docs.cloudkeep.io/barbican-devguide/content/preface.html


Hi Jarret -
Please don't use the OpenStack branding on your output prior to permission
through this process.
Thanks,
Anne


 The team hangs out in the #openstack-barbican channel on freenode. If you
 want to talk, stop on by.


 Thanks,

 Jarret Raim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer]Cumulative metrics resetting

2014-01-29 Thread Adrian Turjak

A question about this bug:
https://bugs.launchpad.net/ceilometer/+bug/1061817

In the billing program we are currently writing, we've partly accounted 
for resetting of values in a given period for the cumulative metrics, 
but since we need high accuracy especially for metrics like 
network.incoming/outgoing, we are likely to lose chargeable data if 
someone resets a VM, or a VM goes down.


example:
10min pipeline interval, a reset/shutdown happens 7 mins after the last 
poll. The data for those 7 mins is gone. Even terminating a VM will mean 
we lose the data in that last interval.


Fixing the bug so resets don't happen is likely a lot of work; I 
have a feeling it will require work in Nova, and it probably won't account for 
the terminate case.


On the other hand, would it be possible to setup a notification based 
metric that updates cumulative metrics, or triggers a poll right before 
the reset/shutdown/suspension/terminate, so we have an entry right 
before it resets and don't lose any data? This would pretty much solve 
the issue, and as long as it is documented that the cumulative metrics 
reset, this would solve most problems.


Since the 'process_notification' function only gets passed a 'message', 
I don't know if there is a way to pull the needed data out from nova 
using only what is in the message though.
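
Just to sketch the idea (this is not the real Ceilometer plugin interface, and
poll_instance_counters() is a placeholder for whatever the agent actually uses
to read the counters):

    # Rough sketch only -- not the actual Ceilometer plugin API.
    LIFECYCLE_EVENTS = ('compute.instance.shutdown.start',
                        'compute.instance.delete.start')

    def process_notification(message, poll_instance_counters):
        # Take one last reading of the cumulative counters right before the
        # instance goes away, so the final partial interval is not lost.
        if message.get('event_type') not in LIFECYCLE_EVENTS:
            return
        instance_id = message['payload']['instance_id']
        for meter, value in poll_instance_counters(instance_id).items():
            yield {'counter_name': meter,          # e.g. network.outgoing.bytes
                   'counter_type': 'cumulative',
                   'counter_volume': value,
                   'resource_id': instance_id}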


Any thoughts, or suggestions as to where to start experimenting?

-Adrian Turjak


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 29th

2014-01-29 Thread Ian Wells
On 29 January 2014 23:50, Robert Kukura rkuk...@redhat.com wrote:

 On 01/29/2014 05:44 PM, Robert Li (baoli) wrote:
  Hi Bob,
 
  that's a good find. profileid as part of IEEE 802.1br needs to be in
  binding:profile, and can be specified by a normal user, and later
 possibly
  the pci_flavor. Would it be wrong to say something as in below in the
  policy.json?
   create_port:binding:vnic_type: rule:admin_or_network_owner
   create_port:binding:profile:profileid: rule:admin_or_network_owner

 Maybe, but a normal user that owns a network has no visibility into the
 underlying details (such as the providernet extension attributes).


I'm with Bob on this, I think - I would expect that vnic_type is passed in
by the user (user readable, and writeable, at least if the port is not
attached) and then may need to be reflected back, if present, in the
'binding' attribute via the port binding extension (unless Nova can just go
look for it - I'm not clear on what's possible here).


 Also, would a normal cloud user really know what pci_flavor to use?
 Isn't all this kind of detail hidden from a normal user within the nova
 VM flavor (or host aggregate or whatever) pre-configured by the admin?


Flavors are user-visible, analogous to Nova's machine flavors; they're just
not user-editable.  I'm not sure where port profiles come from.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Barbican Incubation Review

2014-01-29 Thread Justin Santa Barbara
Jarret Raim wrote:

I'm presuming that this is our last opportunity for API review - if
this isn't the right occasion to bring this up, ignore me!

 I wouldn't agree here. The barbican API will be evolving over time as we
 add new functionality. We will, of course, have to deal with backwards
 compatibility and version as we do so.

I suggest that writing bindings for every major language, maintaining
them through API revisions, and dealing with all the software that
depends on your service is a much bigger undertaking than e.g. writing
Barbican itself ;-)  So it seems much more efficient to get v1 closer
to right.

I don't think this need turn into a huge upfront design project
either; I'd just like to see the TC approve your project with an API
that the PTLs have signed off on as meeting their known needs, rather
than one that we know will need changes.  Better to delay take-off
than commit ourselves to rebuilding the engine in mid-flight.

We don't need the functionality to be implemented in your first
release, but the API should allow the known upcoming changes.

 We're also looking at adopting the
 model that Keystone uses for API blueprints where the API changes are
 separate blueprints that are reviewed by a larger group than the
 implementations.

I think you should aspire to something greater than the adoption of Keystone V3.

I'm sorry to pick on your project - I think it is much more important
to OpenStack than many others, though that's a big part of why it is
important to avoid API churn.  The instability of our APIs is a huge
barrier to OpenStack adoption.  I'd love to see the TC review all
breaking API changes, but I don't think we're set up that way.

Justin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

