Re: [openstack-dev] Where should Schema files live?

2014-11-30 Thread Duncan Thomas
Duncan Thomas
On Nov 27, 2014 10:32 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:


 We were thinking each service API would expose their schema via a new
/schema resource (or something). Nova would expose its schema. Glance its
own. etc. This would also work well for installations still using older
deployments.

This feels like externally exposing info that need not be external (since
the notifications are not external to the deployment), and it sounds like it
could leak fine-grained version and deployment config details that you
don't want to make public - either for commercial reasons or to make
targeted attacks harder.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] internalURL and adminURL of endpoints should not be visible to ordinary user

2014-11-30 Thread Duncan Thomas
The internal URL is used for more than just admin actions, and admin is no
longer a global flag, so this restriction is not suitable.

Duncan Thomas
On Nov 29, 2014 6:08 AM, joehuang joehu...@huawei.com wrote:

 Hello,

 If an ordinary user sends a get-token request to Keystone, the internalURL and
 adminURL of endpoints will also be returned. This exposes the internal
 high-privilege access addresses and some internal network topology information
 to the ordinary user, and creates a risk that a malicious user could attack or
 hijack the system.

 The request to get a token for an ordinary user:
 curl -d '{"auth": {"passwordCredentials": {"username": "huawei", "password":
 "2014"}, "tenantName": "huawei"}}' -H "Content-type: application/json"
 http://localhost:5000/v2.0/tokens

 The response will include the internalURL and adminURL of the endpoints:
 {"access": {"token": {"issued_at": "2014-11-27T02:30:59.218772",
 "expires": "2014-11-27T03:30:59Z", "id": "b8684d2b68ab49d5988da9197f38a878",
 "tenant": {"description": "normal Tenant", "enabled": true,
 "id": "7ed3351cd58349659f0bfae002f76a77", "name": "huawei"},
 "audit_ids": ["Ejn3BtaBTWSNtlj7beE9bQ"]}, "serviceCatalog":
 [{"endpoints": [{"adminURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77",
 "region": "regionOne",
 "internalURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77",
 "id": "170a3ae617a1462c81bffcbc658b7746",
 "publicURL": "http://10.67.148.27:8774/v2/7ed3351cd58349659f0bfae002f76a77"}],
 "endpoints_links": [], "type": "compute", "name": "nova"},
 {"endpoints": [{"adminURL": "http://10.67.148.27:9696", "region": "regionOne",
 "internalURL": "http://10.67.148.27:9696", "id": "7c0f28aa4710438bbd84fd25dbe4daa6",
 "publicURL": "http://10.67.148.27:9696"}],
 "endpoints_links": [], "type": "network", "name": "neutron"},
 {"endpoints": [{"adminURL": "http://10.67.148.27:9292", "region": "regionOne",
 "internalURL": "http://10.67.148.27:9292", "id": "576f41fc8ef14b4f90e516bb45897491",
 "publicURL": "http://10.67.148.27:9292"}],
 "endpoints_links": [], "type": "image", "name": "glance"},
 {"endpoints": [{"adminURL": "http://10.67.148.27:8777", "region": "regionOne",
 "internalURL": "http://10.67.148.27:8777", "id": "77d464e146f242aca3c50e10b6cfdaa0",
 "publicURL": "http://10.67.148.27:8777"}],
 "endpoints_links": [], "type": "metering", "name": "ceilometer"},
 {"endpoints": [{"adminURL": "http://10.67.148.27:6385", "region": "regionOne",
 "internalURL": "http://10.67.148.27:6385", "id": "1b8177826e0c426fa73e5519c8386589",
 "publicURL": "http://10.67.148.27:6385"}],
 "endpoints_links": [], "type": "baremetal", "name": "ironic"},
 {"endpoints": [{"adminURL": "http://10.67.148.27:35357/v2.0", "region": "regionOne",
 "internalURL": "http://10.67.148.27:5000/v2.0", "id": "435ae249fd2a427089cb4bf2e6c0b8e9",
 "publicURL": "http://10.67.148.27:5000/v2.0"}],
 "endpoints_links": [], "type": "identity", "name": "keystone"}],
 "user": {"username": "huawei", "roles_links": [],
 "id": "a88a40a635334e5da2ac3523d9780ed3", "roles": [{"name": "_member_"}],
 "name": "huawei"}, "metadata": {"is_admin": 0,
 "roles": ["73b0a1ac6b0c48cb90205c53f2b9e48d"]}}}

 At the very least, the internalURL and adminURL of endpoints should not be
 returned to ordinary users, unless the admin has configured the policy to
 give ordinary users the right to see them.

 Best Regards
 Chaoyi Huang ( Joe Huang )


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-30 Thread Vitaly Kramskikh
2014-11-28 23:20 GMT+04:00 Dmitriy Shulyak dshul...@mirantis.com:


- environment_config.yaml should contain exact config which will be
mixed into cluster_attributes. No need to implicitly generate any controls
like it is done now.

  Initially I had the same thoughts and wanted to use it the way it is,
 but now I completely agree with Evgeniy that an additional DSL will cause a lot
 of problems with compatibility between versions and with developer experience.

As far as I understand, you want to introduce another approach to describe
the UI part of plugins?

 We need to search for alternatives..
 1. For the UI I would prefer a separate tab for plugins, where the user will
 be able to enable/disable a plugin explicitly.

Of course, we need a separate page for plugin management.

 Currently the settings tab is overloaded.
 2. On the backend we need to validate plugins against a certain env before
 enabling them, and for simple cases we may expose some basic entities like
 network_mode. For cases where you need complex logic, python code is far
 more flexible than a new DSL.


- metadata.yaml should also contain is_removable field. This field
is needed to determine whether it is possible to remove installed plugin.
It is impossible to remove plugins in the current implementation.
This field should contain an expression written in our DSL which we 
 already
use in a few places. The LBaaS plugin also uses it to hide the checkbox if
Neutron is not used, so even simple plugins like this need to utilize it.
This field can also be autogenerated; for more complex plugins the plugin
writer needs to adjust it manually. For example, for Ceph it could look like
settings:storage.volumes_ceph.value == false and
settings:storage.images_ceph.value == false.

 How will a checkbox help? There are several cases of plugin removal..

It is not a checkbox, it is a condition that determines whether the plugin
is removable. It allows a plugin developer to specify when a plugin can be
safely removed from Fuel if there are environments which were created after
the plugin had been installed.

 1. Plugin is installed, but not enabled for any env - just remove the
 plugin
 2. Plugin is installed, enabled and cluster deployed - forget about it for
 now..
 3. Plugin is installed and only enabled - we need to keep the state of the db
 consistent after the plugin is removed; it is problematic, but possible

My approach also allows us to eliminate the enabled/disabled state of plugins,
which would cause UX issues and issues like you described above. vCenter and
Ceph also don't have an enabled state: vCenter provides a hypervisor and
storage, and Ceph provides backends for Cinder and Glance which can be used
simultaneously or only one of them can be used.

 My main point is that a plugin is enabled/disabled explicitly by the user;
 after that we can decide ourselves whether it can be removed or not.


- For every task in tasks.yaml there should be added new condition
field with an expression which determines whether the task should be run.
In the current implementation tasks are always run for specified roles. 
 For
example, vCenter plugin can have a few tasks with conditions like
settings:common.libvirt_type.value == 'vcenter' or
settings:storage.volumes_vmdk.value == true. Also, AFAIU, similar
approach will be used in implementation of Granular Deployment feature.

 I had some thoughts about using a DSL; it seemed to me especially helpful
 when you need to disable part of the functionality embedded into core,
 like deploying with another hypervisor or network driver (Contrail, for
 example). And a DSL won't cover all cases here; this is quite similar to
 metadata.yaml - simple cases can be covered by some variables in tasks (like
 group, unique, etc), but complex ones are easier to test and describe in python.

Could you please provide an example of such conditions? vCenter and Ceph can
be turned into plugins using this approach.

Also, I'm not against a python version of plugins. It could look like a
python class with exactly the same fields as the YAML files, but with the
conditions written in python.
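
For illustration only (names invented here, not an actual Fuel API), such a
class might look roughly like this, with the Ceph removability condition
mentioned earlier expressed as a plain method:

    class CephPlugin(object):
        name = 'ceph'
        version = '1.0.0'

        def is_removable(self, settings):
            # Mirrors the DSL example above: removable only when Ceph is
            # used for neither volumes nor images.
            return (not settings['storage']['volumes_ceph']['value'] and
                    not settings['storage']['images_ceph']['value'])

        def tasks(self, settings):
            # Per-task conditions become ordinary python instead of DSL.
            if settings['storage']['volumes_ceph']['value']:
                yield {'role': ['controller'], 'stage': 'post_deployment',
                       'type': 'puppet'}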


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-11-30 Thread Vitaly Kramskikh
Dmitry,

2014-11-29 1:01 GMT+04:00 Dmitry Borodaenko dborodae...@mirantis.com:

 Vitaly,

 Is there a document or spec or a wiki page that describes the current
 status of this discussion in the context of the whole pluggable
 architecture design?

There is a spec for the current implementation
https://github.com/stackforge/fuel-specs/blob/master/specs/6.0/cinder-neutron-plugins-in-fuel.rst.
Here I'm trying to propose changes which allow more complex things
like Ceph and vCenter to be turned into plugins. That's it.

 Jumping into this thread without having the whole picture is hard. Knowing
 what is already agreed, what is implemented so far, and having a structured
 summary of points of disagreement with pro and contra arguments would help
 a lot.

Well, there is a problem with pro and contra arguments because currently
the discussion looks like "Your proposal is wrong and complicated and
stuff, but I still don't have my own proposal". So I think it could be a
better idea to wait for the proposal from Evgeniy and then we'll be able to
make a list of pro and contra arguments.


 On Nov 28, 2014 9:48 AM, Vitaly Kramskikh vkramsk...@mirantis.com
 wrote:

 Folks,

 Please participate in this discussion. We have already had a few meetings on
 this topic and there is still no decision. I understand the entry level is
 pretty high, but please find some time for this.

 Evgeniy,

 Responses inline:

 2014-11-28 20:03 GMT+03:00 Evgeniy L e...@mirantis.com:

  Yes, but it is already used in a few places. I want to note once
 again - even a simple LBaaS plugin with a single checkbox needed to utilize
 this functionality.

 Yes, but you don't need to specify it in each task.

 Just by adding conditions to tasks we will be able to pluginize all
 current functionality that can be pluginized. On the other hand, only 1 line
 will be added to the task definition, and you are so concerned about this
 that you want to create a separate interface for complex plugins. Am I
 right?


  So, you're still calling this interface complicated. Ok, I'm looking
 forward to seeing your proposal about dealing with complex plugins.

 All my concerns were related to simple plugins and that we should
 find a way not to force a plugin developer to do this copy-paste work.

 I don't understand what copy-paste work you are talking about. Copying
 conditions from tasks to is_removable? Yes, it will be so in most cases,
 but not always, so we need to give a plugin writer a way to define
 is_removable manually. If you are talking about copypasting conditions
 between tasks (though I don't understand why we need a few tasks with the
 same conditions), YAML links can be used - we use them a lot in
 openstack.yaml.


  If you have several checkboxes, then it is a complex plugin with
 complex configuration ...

 Here we need a definition of simple plugins; in the current
 release, with simple plugins you can define some fields on the UI (not just a
 single checkbox) and run several tasks if the plugin is enabled.

 Ok, we can define a simple plugin as a plugin which doesn't require
 modification of the generated YAML files at all. But with the proposed
 approach there is no need to somehow separate simple and complex plugins.


 Thanks,


 On Fri, Nov 28, 2014 at 7:01 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:

 Evgeniy,

 Responses inline:

 2014-11-28 18:31 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi Vitaly,

 I agree with you that conditions can be useful in the case of complicated
 plugins, but at the same time, for simple cases it adds a huge amount of
 complexity. I would like to avoid forcing the user to know about any
 conditions if he just wants to add several text fields on the UI.

 I have several reasons why we shouldn't do that:
 1. conditions are described with yet another language with its own
 syntax

 Yes, but it is already used in a few places. I want to note once again -
 even a simple LBaaS plugin with a single checkbox needed to utilize this
 functionality.

 2. the language is not documented (solvable)

 It is documented:
 http://docs.mirantis.com/fuel-dev/develop/nailgun/customization/settings.html#expression-syntax

 3. a complicated interface will lead to a lot of bugs for the end user,
 and it will be
 the Fuel team's problem

 So, you're still calling this interface complicated. Ok, I'm looking
 forward to seeing your proposal about dealing with complex plugins.

 4. in the case of several checkboxes you'll have to write huge
 conditions with
 a lot of and statements, and it'll be really easy to forget about
 some of them

 If you have several checkboxes, then it is a complex plugin with
 complex configuration, so I see no problem here. There will be many more
 places where you can forget stuff.


 As a result, in simple cases the plugin developer will have to specify the
 same condition for every task in the tasks.yaml file and add it to
 metadata.yaml. If you add a new checkbox, you have to go through all of
 these files and add an "and lbaas:new_checkbox_name" statement.

 

Re: [openstack-dev] sqlalchemy-migrate call for reviews

2014-11-30 Thread Mike Bayer
I’ve +2’ed it, it was caused by https://review.openstack.org/#/c/81955/.


 On Nov 29, 2014, at 9:54 PM, Davanum Srinivas dava...@gmail.com wrote:
 
 Looks like there is a review in the queue -
 https://review.openstack.org/#/c/111485/
 
 -- dims
 
 On Sat, Nov 29, 2014 at 6:28 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 To anyone who reviews sqlalchemy-migrate changes, there are people
 talking to themselves on GitHub about long-overdue bug fixes because
 the Gerrit review queue for it is sluggish and they apparently don't
 realize the SQLAM reviewers don't look at Google Code issues[1] and
 GitHub pull request comments[2].
 
 [1] https://code.google.com/p/sqlalchemy-migrate/issues/detail?id=171
 [2] https://github.com/stackforge/sqlalchemy-migrate/pull/5
 
 --
 Jeremy Stanley
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Davanum Srinivas :: https://twitter.com/dims
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Board election

2014-11-30 Thread Anita Kuno
On 11/30/2014 09:44 AM, Gary Kotton wrote:
 Hi,
 When I log into the site I am unable to nominate people. Any ideas? I get:
 "Your account credentials do not allow you to nominate candidates."
 Any idea how to address that?
 Thanks
 Gary
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
That's not good. Just in case the powers that be that can address your
issue don't see this email, might I suggest filing a bug against the
openstack.org website: https://launchpad.net/openstack-org

The foundation database is not within -infra control so the best we can
do is commiserate and hope the issue is fixed in time for you to
complete your desired actions.

Thanks Gary,
Anita.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Board election

2014-11-30 Thread Anne Gentle
Hi Gary and Anita,
We do have a bug tracker for www.openstack.org at:

https://bugs.launchpad.net/openstack-org

Log a bug there to make sure the web devs at the Foundation get it.
Thanks,
Anne

On Sun, Nov 30, 2014 at 8:44 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 When I log into the site I am unable to nominate people. Any ideas? I get:
 "Your account credentials do not allow you to nominate candidates."
  Any idea how to address that?
  Thanks
  Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to debug test using pdb

2014-11-30 Thread Robert Collins
On 1 December 2014 at 00:08, Saju M sajup...@gmail.com wrote:
 Hi,

 How to debug test using pdb

 I want to debug tests and tried the following methods, but they didn't work
 (I could not see the pdb console).
 I could only see the message "Tests running..." and the command got stuck.

 I tried this with python-neutronclient, which does not have run_test.sh

 Method-1:
 #source .tox/py27/bin/activate
 #.tox/py27/bin/python -m testtools.run
 neutronclient.tests.unit.test_cli20_network.CLITestV20NetworkJSON.test_create_network

 Method-2:
 #testr list-tests '(CLITestV20NetworkJSON.test_create_network)' > my-list
 #python -m testtools.run discover --load-list my-list


Right - testr owns stdin and stdout on the test processes, which is
very much needed for parallel backends. For single-threaded testing it
could potentially just pass them through, and in principle
multiplexing is possible for multiple backends, but that's not
implemented yet.
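
(Illustration, not from the thread: the usual workaround is to put the
breakpoint in the test itself and run that single test directly with
testtools.run, as in Method-1 above, so stdin/stdout stay attached to the
terminal and the pdb prompt is usable. A minimal sketch:)

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_something(self):
            value = 1 + 1
            # The prompt below only works when the test is run directly
            # (e.g. python -m testtools.run <test id>), not under testr,
            # which owns the worker processes' stdin/stdout.
            import pdb; pdb.set_trace()
            self.assertEqual(2, value)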

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] image create mysql error

2014-11-30 Thread liuxinguo
When our CI runs devstack, an error occurs when it runs "image create mysql".
The log is pasted below:

  22186 2014-11-29 21:11:48.611 | ++ basename 
/opt/stack/new/devstack/files/mysql.qcow2 .qcow2
  22187 2014-11-29 21:11:48.623 | + image_name=mysql
  22188 2014-11-29 21:11:48.624 | + disk_format=qcow2
  22189 2014-11-29 21:11:48.624 | + container_format=bare
  22190 2014-11-29 21:11:48.624 | + is_arch ppc64
  22191 2014-11-29 21:11:48.628 | ++ uname -m
  22192 2014-11-29 21:11:48.710 | + [[ i686 == \p\p\c\6\4 ]]
  22193 2014-11-29 21:11:48.710 | + '[' bare = bare ']'
  22194 2014-11-29 21:11:48.710 | + '[' '' = zcat ']'
  22195 2014-11-29 21:11:48.710 | + openstack --os-token 
5387fe9c6f6d4182b09461fe232501db --os-url http://127.0.0.1:9292 image create 
mysql --public --container-format=bare --disk-format qcow2
  22196 2014-11-29 21:11:57.275 | ERROR: openstack <html>
  22197 2014-11-29 21:11:57.275 |  <head>
  22198 2014-11-29 21:11:57.275 |   <title>401 Unauthorized</title>
  22199 2014-11-29 21:11:57.275 |  </head>
  22200 2014-11-29 21:11:57.275 |  <body>
  22201 2014-11-29 21:11:57.275 |   <h1>401 Unauthorized</h1>
  22202 2014-11-29 21:11:57.275 |   This server could not verify that you are 
authorized to access the document you requested. Either you supplied the wrong 
credentials (e.g., bad password), or your browser does not understand how to 
supply the credentials required.<br /><br />
  22203 2014-11-29 21:11:57.275 |
  22204 2014-11-29 21:11:57.276 |  </body>
  22205 2014-11-29 21:11:57.276 | </html> (HTTP 401)
  22206 2014-11-29 21:11:57.344 | + exit_trap

Can anyone give me some hint?
Thanks.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Re: [neutron] the hostname regex pattern fix also changed behaviour :(

2014-11-30 Thread Angus Lees
On Fri Nov 28 2014 at 10:49:21 PM Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 28/11/14 01:26, Angus Lees wrote:
  Context: https://review.openstack.org/#/c/135616
 
  If we're happy disabling the check for components being all-digits, then
  a minimal change to the existing regex that could be backported might be
  something like
    r'(?=^.{1,254}$)(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*(?:[a-zA-Z]{2,})$)'
 
  Alternatively (and clearly preferable for Kilo), Kevin has a replacement
  underway that rewrites this entirely to conform to modern RFCs in
  I003cf14d95070707e43e40d55da62e11a28dfa4e

 With the change, will existing instances work as before?


Yes, this cuts straight to the heart of the matter:  What's the purpose of
these validation checks?  Specifically, if someone is using an invalid
hostname that passed the previous check but doesn't pass an
improved/updated check, should we continue to allow it?
I figure our role here is either to allow exactly what the relevant
standards say, and deliberately reject/break anything that falls outside
that - or be liberal, restrict only to some sort of 'safe' input and then
let the underlying system perform the final validation.  I can see plenty
of reasons for either approach, but somewhere in the middle doesn't seem to
make much sense - and I think the approach chosen also dictates any
migration path.

As they currently stand, I think both Kevin's and my alternative above
_should_ be more liberal than the original (before the fix) regex.
Specifically, they now allow all-digits hostname components - in line with
newer RFCs.

However, TLD handling is a little different between the variants:
- Kevin's continues to reject an all-digits TLD (also following RFC
guidance)
- mine and the original force TLDs to be all alpha (a-z; no digits or
dash/underscore)

The TLD handling is more interesting because an unqualified hostname (with
no '.' characters) hits the TLD logic in all variants, but the original has
a \.? quirk that means an unqualified hostname is forced to end with at
least 2 alpha-only chars.  As written above, mine is probably too
restrictive for unqualified names, and this would need to be fixed.
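
(A quick, purely illustrative check of the minimal backport regex quoted
above; '999.example.com' exercises the all-digits component case, 'host1'
the unqualified-name quirk, and 'example.123' the alpha-only TLD
restriction.)

    import re

    HOSTNAME_RE = re.compile(
        r'(?=^.{1,254}$)'
        r'(^(?:[a-zA-Z0-9_](?:[a-zA-Z0-9_-]{,61}[a-zA-Z0-9])\.)*'
        r'(?:[a-zA-Z]{2,})$)')

    for name in ('example.com', '999.example.com', 'host1', 'example.123'):
        print('%s -> %s' % (name, bool(HOSTNAME_RE.match(name))))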

As the above shows, describing regex patterns in prose is long, boring and
inaccurate.  Someone who is going to have to approve the change should just
dictate what they want here and then we'll go do that :P
I suggest they also consider the DoS-fix-backport and the Kilo-and-forwards
cases separately.

 - Gus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Board election

2014-11-30 Thread Adam Lawson
Everyone on my team is also seeing the same errors. I will submit a bug, but
that's a lot of people to file bugs. ; )


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Sun, Nov 30, 2014 at 8:17 AM, Anne Gentle a...@openstack.org wrote:

 Hi Gary and Anita,
 We do have a bug tracker for www.openstack.org at:

 https://bugs.launchpad.net/openstack-org

 Log a bug there to make sure the web devs at the Foundation get it.
 Thanks,
 Anne

 On Sun, Nov 30, 2014 at 8:44 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 When I log into the site I am unable to nominate people. Any ideas? I
 get: "Your account credentials do not allow you to nominate candidates."
  Any idea how to address that?
  Thanks
  Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] sahara Integration tests issue

2014-11-30 Thread lu jander
Hi Sahara dev,

I am working on the integration tests in sahara (with nova network). When I
use tox -e cdh to run the cdh tests, it fails with the error below; when I then
check the sahara log, it shows a heat error. This error reminds me of a bug
whose fix is not yet merged, https://bugs.launchpad.net/sahara/+bug/1392738 (I
checked out this patch, and it seems it still doesn't work), so I manually set
auto_security_group to false in test_cdh_gating.py, but I still hit this error
and did not successfully pass the integration test.

below is Integration tests error and sahara log


Tests Error:

Traceback (most recent call last):
  File
/opt/stack/sahara/sahara/tests/integration/tests/gating/test_cdh_gating.py,
line 305, in test_cdh_plugin_gating
self._create_cluster()
  File sahara/tests/integration/tests/base.py, line 49, in wrapper
ITestCase.print_error_log(message, e)
  File
/opt/stack/sahara/.tox/integration/local/lib/python2.7/site-packages/oslo/utils/excutils.py,
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File sahara/tests/integration/tests/base.py, line 46, in wrapper
fct(*args, **kwargs)
  File
/opt/stack/sahara/sahara/tests/integration/tests/gating/test_cdh_gating.py,
line 191, in _create_cluster
self.poll_cluster_state(self.cluster_id)
  File sahara/tests/integration/tests/base.py, line 237, in
poll_cluster_state
self.fail('Cluster state == \'Error\'.')
  File
/opt/stack/sahara/.tox/integration/local/lib/python2.7/site-packages/unittest2/case.py,
line 666, in fail
raise self.failureException(msg)
AssertionError: Cluster state == 'Error'.
Ran 1 tests in 147.204s (+21.220s)
FAILED (id=47, failures=1)
error: testr failed (1)
ERROR: InvocationError: '/opt/stack/sahara/.tox/integration/bin/python
setup.py test --slowest --testr-args=cdh'


Sahara Log:

2014-12-01 09:02:44.001 ERROR sahara.service.ops [-] Error during operating
cluster 'test-cluster-cdh' (reason: Heat stack failed with status
CREATE_FAILED)
2014-12-01 09:02:44.001 TRACE sahara.service.ops Traceback (most recent
call last):
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File
/opt/stack/sahara/sahara/service/ops.py, line 141, in wrapper
2014-12-01 09:02:44.001 TRACE sahara.service.ops f(cluster_id, *args,
**kwds)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File
/opt/stack/sahara/sahara/service/ops.py, line 227, in _provision_cluster
2014-12-01 09:02:44.001 TRACE sahara.service.ops
INFRA.create_cluster(cluster)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File
/opt/stack/sahara/sahara/service/heat_engine.py, line 57, in
create_cluster
2014-12-01 09:02:44.001 TRACE sahara.service.ops
launcher.launch_instances(cluster, target_count)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File
/opt/stack/sahara/sahara/service/heat_engine.py, line 209, in
launch_instances
2014-12-01 09:02:44.001 TRACE sahara.service.ops
heat.wait_stack_completion(stack.heat_stack)
2014-12-01 09:02:44.001 TRACE sahara.service.ops   File
/opt/stack/sahara/sahara/utils/openstack/heat.py, line 60, in
wait_stack_completion
2014-12-01 09:02:44.001 TRACE sahara.service.ops
2014-12-01 09:02:44.001 TRACE sahara.service.ops HeatStackException: Heat
stack failed with status CREATE_FAILED
2014-12-01 09:02:44.001 TRACE sahara.service.ops
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Board election

2014-11-30 Thread Anita Kuno
On 11/30/2014 08:28 PM, Adam Lawson wrote:
 Everyone on my team are also seeing the same errors. Will submit a bug but
 that's a lot of people to file bugs. ; )
Or search by the most recently reported bug and add to the bug report of
whoever reports first:
https://bugs.launchpad.net/openstack-org/+bugs?orderby=-id&start=0

Thanks Adam,
Anita.
 
 
 *Adam Lawson*
 
 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072
 
 
 On Sun, Nov 30, 2014 at 8:17 AM, Anne Gentle a...@openstack.org wrote:
 
 Hi Gary and Anita,
 We do have a bug tracker for www.openstack.org at:

 https://bugs.launchpad.net/openstack-org

 Log a bug there to make sure the web devs at the Foundation get it.
 Thanks,
 Anne

 On Sun, Nov 30, 2014 at 8:44 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 When I log into the site I am unable to nominate people. Any ideas? I
 get: "Your account credentials do not allow you to nominate candidates."
  Any idea how to address that?
  Thanks
  Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][oslo] Handling contexts and policy enforcement in services

2014-11-30 Thread Jamie Lennox
TL;DR: I think we can handle most of oslo.context with some additions to
auth_token middleware and simplify policy enforcement (from a service
perspective) at the same time.

There is currently a push to release oslo.context as a
library, for reference:
https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py

Whilst I love the intent to standardize this
functionality I think that many of the requirements in there
are incorrect and don't apply to all services. It is my
understanding for example that read_only, show_deleted are
essentially nova requirements, and the use of is_admin needs
to be killed off, not standardized.

Currently each service builds a context based on headers
made available from auth_token middleware and some
additional interpretations based on that user
authentication. Each service does this slightly differently
based on its needs/when it copied it from nova.

I propose that auth_token middleware essentially handle the
creation and management of an authentication object that
will be passed and used by all services. This will
standardize so much of the oslo.context library that I'm not
sure it will be still needed. I bring this up now as I am
wanting to push this way and don't want to change things
after everyone has adopted oslo.context.

The current release of auth_token middleware creates and
passes to services (via env['keystone.token_auth']) an auth
plugin that can be passed to clients to use the current user
authentication. My intention here is to expand that object
to expose all of the authentication information required for
the services to operate.
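
(A small illustrative sketch of that pattern - reusing the plugin from
env['keystone.token_auth'] with a session-aware client; the exact client
construction is an assumption and varies per project.)

    from keystoneclient import session as ks_session

    def call_other_service(environ, client_factory):
        # auth_token middleware has already validated the token and placed
        # an auth plugin for the current user into the WSGI environment.
        auth = environ['keystone.token_auth']
        sess = ks_session.Session(auth=auth)  # reuses token + catalog
        # Any session-aware client (nova, glance, ...) can be built on it.
        return client_factory(session=sess)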

There are two components to context that I can see:

 - The current authentication information that is retrieved
   from auth_token middleware.
 - service specific context added based on that user
   information eg read_only, show_deleted, is_admin,
   resource_id

Regarding the first point of current authentication there
are three places I can see this used:

 - communicating with other services as that user
 - associating resources with a user/project
 - policy enforcement

Addressing each of the 'current authentication' needs:

 - As mentioned for service to service communication
   auth_token middleware already provides an auth_plugin
   that can be used with (at this point most) of the
   clients. This greatly simplifies reusing an existing
   token and correctly using the service catalog as each
   client would do this differently. In future this plugin
   will be extended to provide support for concepts such as
   filling in the X-Service-Token [1] on behalf of the
   service, managing the request id, and generally
   standardizing service-service communication without
   requiring explicit support from every project and client.

 - Given that this authentication plugin is built within
   auth_token middleware it is a fairly trivial step to
   provide public properties on this object to give access
   to the current user_id, project_id and other relevant
   authentication data that the services can access. This is
   fairly well handled today but it means it is done without
   the service having to fetch all these objects from
   headers.

 - With upcoming changes to policy to handle features such
   as the X-Service-Token the existing context will need to
   gain a bunch of new entries. With the keystone team
   looking to wrap policy enforcement into its own
   standalone library it makes more sense to provide this
   authentication object directly to policy enforcement.
   This will allow the keystone team to manipulate policy
   data from both auth_token and the enforcement side,
   letting us introduce new features to policy transparent
   to the services. It will also standardize the naming of
   variables within these policy files.

What is left for a context object after this is managing
serialization and deserialization of this auth object and
any additional fields (read_only etc) that are generally
calculated at context creation time. This would be a very
small library.

There are still a number of steps to getting there:

 - Adding enough data to the existing authentication plugin
   to allow policy enforcement and general usage.
 - Making the authentication object serializable for
   transmitting between services.
 - Extracting policy enforcement into a library.

However I think that this approach brings enough benefits to
hold off on releasing and standardizing the use of the
current context objects.

I'd love to hear everyone thoughts on this, and where it
would fall down. I see there could be some issues with how
the context would fit into nova's versioned objects for
example - but I think this would be the same issues that an
oslo.context library would face anyway.

Jamie


[1] This is where service-service communication includes
the service token as well as the user token to allow smarter
policy and resource access. For example, a user can't access
certain neutron functions directly 

Re: [openstack-dev] Board election

2014-11-30 Thread Stefano Maffulli
On 11/30/2014 06:44 AM, Gary Kotton wrote:
 When I log into the site I am unable to nominate people. Any ideas? I
 get: "Your account credentials do not allow you to nominate candidates."

That means that the account you're using is not the account of an
Individual Member of the OpenStack Foundation. Only members of the Foundation
can participate in elections.

 Any idea how to address that?

For you and others, the only way to proceed is to file a bug on

https://bugs.launchpad.net/openstack-org

so the web team can help out.

If you want to become a Member of the Foundation (and since there is no
UI to change an account from simply a registered user of openstack.org
to an Individual Member), you should provide your name in the bug report
and state: "I agree to the terms stated on
https://www.openstack.org/join/register, and I want to register as an
Individual Member of the OpenStack Foundation."

HTH
stef

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][neutron] Proposal to split Neutron into separate repositories

2014-11-30 Thread henry hly
FWaaS is typically classified as L4-L7. But if these services are developed
standalone, it would be very difficult to implement them in a distributed
manner. For example, with east-west traffic control in DVR mode, we can't
rely on an external python client REST API call; the policy execution module
must be loaded as an L3 agent extension, or as another service-policy agent
on the compute node.

My suggestion is to start with LB and VPN as a trial, since they can
never be distributed. FW is very tightly coupled with L3, so leaving
it to be discussed some time later may be smoother.

On Wed, Nov 19, 2014 at 6:31 AM, Mark McClain m...@mcclain.xyz wrote:
 All-

 Over the last several months, the members of the Networking Program have
 been discussing ways to improve the management of our program.  When the
 Quantum project was initially launched, we envisioned a combined service
 that included all things network related.  This vision served us well in the
 early days as the team mostly focused on building out layers 2 and 3;
 however, we’ve run into growth challenges as the project started building
 out layers 4 through 7.  Initially, we thought that development would float
 across all layers of the networking stack, but the reality is that the
 development concentrates around either layer 2 and 3 or layers 4 through 7.
 In the last few cycles, we’ve also discovered that these concentrations have
 different velocities and a single core team forces one to match the other to
 the detriment of the one forced to slow down.

 Going forward we want to divide the Neutron repository into two separate
 repositories lead by a common Networking PTL.  The current mission of the
 program will remain unchanged [1].  The split would be as follows:

 Neutron (Layer 2 and 3)
 - Provides REST service and technology agnostic abstractions for layer 2 and
 layer 3 services.

 Neutron Advanced Services Library (Layers 4 through 7)
 - A python library which is co-released with Neutron
 - The advance service library provides controllers that can be configured to
 manage the abstractions for layer 4 through 7 services.

 Mechanics of the split:
 - Both repositories are members of the same program, so the specs repository
 would continue to be shared during the Kilo cycle.  The PTL and the drivers
 team will retain approval responsibilities they now share.
 - The split would occur around Kilo-1 (subject to coordination of the Infra
 and Networking teams). The timing is designed to enable the proposed REST
 changes to land around the time of the December development sprint.
 - The core team for each repository will be determined and proposed by Kyle
 Mestery for approval by the current core team.
 - The Neutron Server and the Neutron Adv Services Library would be co-gated
 to ensure that incompatibilities are not introduced.
 - The Advance Service Library would be an optional dependency of Neutron, so
 integrated cross-project checks would not be required to enable it during
 testing.
 - The split should not adversely impact operators and the Networking program
 should maintain standard OpenStack compatibility and deprecation cycles.

 This proposal to divide into two repositories achieved a strong consensus at
 the recent Paris Design Summit and it does not conflict with the current
 governance model or any proposals circulating as part of the ‘Big Tent’
 discussion.

 Kyle and mark

 [1]
 https://git.openstack.org/cgit/openstack/governance/plain/reference/programs.yaml

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Do we need an IntrospectionInterface?

2014-11-30 Thread Shivanand Tendulker
+1 for  separate interface.

--Shivanand

On Fri, Nov 28, 2014 at 7:20 PM, Lucas Alvares Gomes lucasago...@gmail.com
wrote:

 Hi,

 Thanks for putting it up, Dmitry. I think the idea is fine too. I
 understand that people may want to use in-band discovery for drivers like
 iLO or DRAC, and having those on a separate interface allows us to composite
 a driver to do it (which is your use case 2).

 So, +1.

 Lucas

 On Wed, Nov 26, 2014 at 3:45 PM, Imre Farkas ifar...@redhat.com wrote:

 On 11/26/2014 02:20 PM, Dmitry Tantsur wrote:

 Hi all!

 As our state machine and discovery discussion proceeds, I'd like to ask
 your opinion on whether we need an IntrospectionInterface
 (DiscoveryInterface?). Current proposal [1] suggests adding a method for
 initiating a discovery to the ManagementInterface. IMO it's not 100%
 correct, because:
 1. It's not management. We're not changing anything.
 2. I'm aware that some folks want to use discoverd-based discovery [2]
 even for DRAC and ILO (e.g. for vendor-specific additions that can't be
 implemented OOB).

 Any ideas?

 Dmitry.

 [1] https://review.openstack.org/#/c/100951/
 [2] https://review.openstack.org/#/c/135605/


 Hi Dmitry,

 I see the value in using the composability of our driver interfaces, so I
 vote for having a separate IntrospectionInterface. Otherwise we wouldn't
 allow users to use eg. the DRAC driver with an in-band but more powerful hw
 discovery.
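
 (Purely illustrative, not actual Ironic code: a separate interface for
 initiating introspection could be as small as the sketch below - the method
 name is invented - and composed into a driver like any other interface.)

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class IntrospectionInterface(object):
        """Interface for initiating hardware introspection/discovery."""

        @abc.abstractmethod
        def inspect_hardware(self, task):
            """Start introspection for the node referenced by this task."""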

 Imre


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Programatically re-starting OpenStack services

2014-11-30 Thread Pradip Mukhopadhyay
Hello,


Are there ways (which pattern might be preferred) by which one can
programatically restart different OpenStack services?

For example: if one wants to restart cinder-scheduler or heat-cfn?



Thanks in advance,
Pradip
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Programatically re-starting OpenStack services

2014-11-30 Thread Sadia Bashir
Hi,

What do you mean by programatically? Do you want to restart services via a
script or want to orchestrate restarting services from within openstack?

If it is via script, you can write a bash script as follows:

service cinder-scheduler restart
service heat-api-cfn restart



On Mon, Dec 1, 2014 at 9:03 AM, Pradip Mukhopadhyay 
pradip.inte...@gmail.com wrote:

 Hello,


 Are there ways (which pattern might be preferred) by which one can
 programatically restart different OpenStack services?

 For example: if one wants to restart cinder-scheduler or heat-cfn?



 Thanks in advance,
 Pradip


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] proper syncing of cinder volume state

2014-11-30 Thread John Griffith
On Fri, Nov 28, 2014 at 11:25 AM, D'Angelo, Scott scott.dang...@hp.com wrote:
 A Cinder blueprint has been submitted to allow the python-cinderclient to
 involve the back end storage driver in resetting the state of a cinder
 volume:

 https://blueprints.launchpad.net/cinder/+spec/reset-state-with-driver

 and the spec:

 https://review.openstack.org/#/c/134366



 This blueprint contains various use cases for a volume that may be listed in
 the Cinder DataBase in state detaching|attaching|creating|deleting.

 The Proposed solution involves augmenting the python-cinderclient command
 ‘reset-state’, but other options are listed, including those that

 involve Nova, since the state of a volume in the Nova XML found in
 /etc/libvirt/qemu/instance_id.xml may also be out-of-sync with the

 Cinder DB or storage back end.



 A related proposal for adding a new non-admin API for changing volume status
 from ‘attaching’ to ‘error’ has also been proposed:

 https://review.openstack.org/#/c/137503/



 Some questions have arisen:

 1) Should ‘reset-state’ command be changed at all, since it was originally
 just to modify the Cinder DB?

 2) Should ‘reset-state’ be fixed to prevent the naïve admin from changing
 the CinderDB to be out-of-sync with the back end storage?

 3) Should ‘reset-state’ be kept the same, but augmented with new options?

 4) Should a new command be implemented, with possibly a new admin API to
 properly sync state?

 5) Should Nova be involved? If so, should this be done as a separate body of
 work?



 This has proven to be a complex issue and there seems to be a good bit of
 interest. Please provide feedback, comments, and suggestions.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Hey Scott,

Thanks for posting this to the ML, I stated my opinion on the spec,
but for completeness:
My feeling is that reset-state has morphed into something entirely
different than originally intended.  That's actually great, nothing
wrong there at all.  I strongly disagree with the statements that
setting the status in the DB only is almost always the wrong thing to
do.  The whole point was to allow the state to be changed in the DB
so the item could in most cases be deleted.  There was never an intent
(that I'm aware of) to make this some sort of uber resync and heal API
call.

All of that history aside, I think it would be great to add some
driver interaction here.  I am however very unclear on what that would
actually include.  For example, would you let a Volume's state be
changed from Error-Attaching to In-Use and just run through the
process of retrying an attach?  To me that seems like a bad idea.  I'm
much happier with the current state of changing the state form Error
to Available (and NOTHING else) so that an operation can be retried,
or the resource can be deleted.  If you start allowing any state
transition (which sadly we've started to do) you're almost never going
to get things correct.  This also covers almost every situation even
though it means you have to explicitly retry operations or steps (I
don't think that's a bad thing) and make the code significantly more
robust IMO (we have some issues lately with things being robust).

My proposal would be to go back to limiting the things you can do with
reset-state (basicly make it so you can only release the resource back
to available) and add the driver interaction to clean up any mess if
possible.  This could be a simple driver call added like
make_volume_available whereby the driver just ensures that there are
no attachments and well; honestly nothing else comes to mind as
being something the driver cares about here. The final option then
being to add some more power to force-delete.
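
(A rough sketch of that kind of driver hook; the method name follows the
suggestion above, everything else is illustrative and not Cinder's actual
driver interface.)

    class ExampleDriver(object):
        def make_volume_available(self, volume):
            # The only back-end concern is that nothing is still attached
            # before the DB status is reset to 'available'.
            for attachment in self._backend_list_attachments(volume['id']):
                self._backend_terminate_connection(volume['id'], attachment)

        def _backend_list_attachments(self, volume_id):
            return []  # placeholder for a real back-end query

        def _backend_terminate_connection(self, volume_id, attachment):
            pass  # placeholder for real back-end cleanup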

Is there anything other than attach that matters from a driver?  If
people are talking error-recovery that to me is a whole different
topic and frankly I think we need to spend more time preventing errors
as opposed to trying to recover from them via new API calls.

Curious to see if any other folks have input here?

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Gate error

2014-11-30 Thread Eli Qiao

Got a gate error today.
How can we ask the infrastructure team to fix it?

HTTP error 404 while getting 
http://pypi.IAD.openstack.org/packages/source/l/logilab-common/logilab-common-0.63.2.tar.gz#md5=2bf4599ae1f2ccf4603ca02c5d7e798e 
(from http://pypi.IAD.openstack.org/simple/logilab-common/)


[1]http://logs.openstack.org/03/120703/24/check/gate-nova-pep8/b745055/console.html


--
Thanks,
Eli (Li Yong) Qiao

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Programatically re-starting OpenStack services

2014-11-30 Thread Pradip Mukhopadhyay
Yeah, I meant from Orchestration. Sorry if the earlier one is not clear.


To be a little more specific: from the life cycle management functions of a
custom Heat resource definition.
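
(A very rough sketch of what that could look like as a custom Heat resource
plugin; the property name and the shell-out are illustrative assumptions,
and in practice a software-config/deployment resource is often a better fit
for running commands.)

    import subprocess

    from heat.engine import properties
    from heat.engine import resource

    class ServiceRestart(resource.Resource):
        """Restart system services from the resource's lifecycle handlers."""

        properties_schema = {
            'services': properties.Schema(
                properties.Schema.LIST,
                'Names of system services to restart.',
                required=True),
        }

        def handle_create(self):
            # Assumes the services run on the host where heat-engine runs.
            for svc in self.properties['services']:
                subprocess.check_call(['service', svc, 'restart'])

    def resource_mapping():
        return {'My::ServiceRestart': ServiceRestart}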


Thanks,
Pradip



On Mon, Dec 1, 2014 at 9:54 AM, Sadia Bashir 11msccssbas...@seecs.edu.pk
wrote:

 Hi,

 What do you mean by programatically? Do you want to restart services via a
 script or want to orchestrate restarting services from within openstack?

 If it is via script, you can write a bash script as follows:

 service cinder-scheduler restart
 service heat-api-cfn restart



 On Mon, Dec 1, 2014 at 9:03 AM, Pradip Mukhopadhyay 
 pradip.inte...@gmail.com wrote:

 Hello,


 Are there ways (which pattern might be preferred) by which one can
 programatically restart different OpenStack services?

 For example: if one wants to restart cinder-scheduler or heat-cfn?



 Thanks in advance,
 Pradip


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] REST and Django

2014-11-30 Thread Thai Q Tran
I agree that keeping the API layer thin would be ideal. I should add that
having discrete API calls would allow dynamic population of the table. However,
I will make a case where it might be necessary to add additional APIs.
Consider that you want to delete 3 items in a given table.

If you do this on the client side, you would need to perform: n * (1 API
request + 1 AJAX request)
If you have some logic on the server side that batches delete actions: n * (1
API request) + 1 AJAX request

Consider the following:
n = 1, client = 2 trips, server = 2 trips
n = 3, client = 6 trips, server = 4 trips
n = 10, client = 20 trips, server = 11 trips
n = 100, client = 200 trips, server 101 trips

As you can see, this does not scale very well; something to consider...
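
(A rough sketch of the server-side batching being described; the URL wiring
and the per-item delete call are hypothetical, not Horizon's actual API.)

    import json

    from django.http import HttpResponse
    from django.views.generic import View

    def delete_item(request, item_id):
        pass  # placeholder for the per-item call to the service API

    class BatchDelete(View):
        def post(self, request):
            # One AJAX request from the browser fans out to n service API
            # calls made on the server side.
            ids = json.loads(request.body)['ids']
            for item_id in ids:
                delete_item(request, item_id)
            return HttpResponse(status=204)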




From:   Richard Jones r1chardj0...@gmail.com
To: Tripp, Travis S travis.tr...@hp.com, OpenStack List
openstack-dev@lists.openstack.org
Date:   11/27/2014 05:38 PM
Subject:Re: [openstack-dev] [horizon] REST and Django



On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S travis.tr...@hp.com
wrote:
  Hi Richard,

  You are right, we should put this out on the main ML, so copying thread
  out to there.  ML: FYI that this started after some impromptu IRC
  discussions about a specific patch led into an impromptu google hangout
  discussion with all the people on the thread below.

Thanks Travis!


  As I mentioned in the review[1], Thai and I were mainly discussing the
  possible performance implications of network hops from client to horizon
  server and whether or not any aggregation should occur server side.   In
  other words, some views  require several APIs to be queried before any
  data can displayed and it would eliminate some extra network requests
  from client to server if some of the data was first collected on the
  server side across service APIs.  For example, the launch instance wizard
  will need to collect data from quite a few APIs before even the first
  step is displayed (I’ve listed those out in the blueprint [2]).

  The flip side to that (as you also pointed out) is that if we keep the
  API’s fine grained then the wizard will be able to optimize in one place
  the calls for data as it is needed. For example, the first step may only
  need half of the API calls. It also could lead to perceived performance
  increases just due to the wizard making a call for different data
  independently and displaying it as soon as it can.

Indeed, looking at the current launch wizard code it seems like you
wouldn't need to load all that data for the wizard to be displayed, since
only some subset of it would be necessary to display any given panel of the
wizard.


  I tend to lean towards your POV and starting with discrete API calls and
  letting the client optimize calls.  If there are performance problems or
  other reasons then doing data aggregation on the server side could be
  considered at that point.

I'm glad to hear it. I'm a fan of optimising when necessary, and not
beforehand :)


  Of course if anybody is able to do some performance testing between the
  two approaches then that could affect the direction taken.

I would certainly like to see us take some measurements when performance
issues pop up. Optimising without solid metrics is bad idea :)


    Richard


  [1]
  https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
  [2]
  https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign

  -Travis

  From: Richard Jones r1chardj0...@gmail.com
  Date: Wednesday, November 26, 2014 at 11:55 PM
  To: Travis Tripp travis.tr...@hp.com, Thai Q Tran/Silicon Valley/IBM 
  tqt...@us.ibm.com, David Lyle dkly...@gmail.com, Maxime Vidori 
  maxime.vid...@enovance.com, Wroblewski, Szymon 
  szymon.wroblew...@intel.com, Wood, Matthew David (HP Cloud - Horizon)
  matt.w...@hp.com, Chen, Shaoquan sean.ch...@hp.com, Farina, Matt
  (HP Cloud) matthew.far...@hp.com, Cindy Lu/Silicon Valley/IBM 
  c...@us.ibm.com, Justin Pomeroy/Rochester/IBM jpom...@us.ibm.com, Neill
  Cox neill@ingenious.com.au
  Subject: Re: REST and Django

  I'm not sure whether this is the appropriate place to discuss this, or
  whether I should be posting to the list under [Horizon] but I think we
  need to have a clear idea of what goes in the REST API and what goes in
  the client (angular) code.

  In my mind, the thinner the REST API the better. Indeed if we can get
  away with proxying requests through without touching any *client code,
  that would be great.

  Coding additional logic into the REST API means that a developer would
  need to look in two places, instead of one, to determine what was
  happening for a particular call. If we keep it thin then the API
  presented to the client developer is very, very similar to the API
  presented by the services. Minimum surprise.

  Your thoughts?


       Richard


  On Wed Nov 26 2014 at 2:40:52 PM Richard Jones r1chardj0...@gmail.com
  wrote:
   Thanks for the 

[openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Jay Lau
When I review a patch for OpenStack, after the review is finished I want to
check more patches for this project, but after clicking the "Project" link
for this patch, it does **not** jump to all patches but to the project
description. I think it is not convenient for a reviewer if s/he wants to
review more patches for this project.

[image: inline image 1]

[image: inline image 2]

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Chen CH Ji
+1, I also found this inconvenient before, thanks Jay for bringing it up :)

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date:   12/01/2014 01:56 PM
Subject:[openstack-dev] [gerrit] Gerrit review problem




When I review a patch for OpenStack, after review finished, I want to check
more patches for this project and then after click the Project content
for this patch, it will **not** jump to all patches but project
description. I think it is not convenient for a reviewer if s/he wants to
review more patches for this project.

inline image 1

inline image 2

--
Thanks,

Jay___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Steve Martinelli
Clicking on the magnifying glass icon (next to the project name) lists
all recent patches for that project (closed and open).

I have a bookmark that lists all open reviews of a project and
always try to keep it open in a tab, and open specific code reviews in
new tabs.

Steve

Chen CH Ji jiche...@cn.ibm.com wrote on 12/01/2014 12:59:52 AM:

 From: Chen CH Ji jiche...@cn.ibm.com
 To: OpenStack Development Mailing List \(not for usage questions\)
 openstack-dev@lists.openstack.org
 Date: 12/01/2014 01:09 AM
 Subject: Re: [openstack-dev] [gerrit] Gerrit review problem
 
 +1, I also found this inconvenient point before ,thanks Jay for bring up 
:)
 
 Best Regards! 
 
 Kevin (Chen) Ji 纪 晨
 
 Engineer, zVM Development, CSTL
 Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
 Phone: +86-10-82454158
 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
 District, Beijing 100193, PRC 
 
 [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a 
 patch for OpenStack, after review finished, I want to check more 
 patches for this pr
 
 From: Jay Lau jay.lau@gmail.com
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
 Date: 12/01/2014 01:56 PM
 Subject: [openstack-dev] [gerrit] Gerrit review problem
 
 
 
 
 When I review a patch for OpenStack, after review finished, I want 
 to check more patches for this project and then after click the 
 Project content for this patch, it will **not** jump to all 
 patches but project description. I think it is not convenient for a 
 reviewer if s/he wants to review more patches for this project.
 
 [image removed] 
 
 [image removed] 
 
 -- 
 Thanks,
 
 Jay___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Deploy GlusterFS server

2014-11-30 Thread Bharat Kumar

Hi All,

Regarding the patch Deploy GlusterFS Server 
(https://review.openstack.org/#/c/133102/):

I submitted this patch a while back, and it already has a Code Review +2.

I believe it is now only waiting for Workflow approval, and another task 
depends on this patch.

Please give it a Workflow review and help me get it merged.

--
Thanks & Regards,
Bharat Kumar K


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Jay Lau
Thanks all. Yes, there are ways to get all ongoing patches for one
project; I was complaining because before the gerrit review upgrade I
could always get straight to the right page, and the upgrade broke that
behaviour, which makes things less convenient for reviewers...

2014-12-01 14:13 GMT+08:00 Steve Martinelli steve...@ca.ibm.com:

 Clicking on the magnifying glass icon (next to the project name) lists
 all recent patches for that project (closed and open).

 I have a bookmark that lists all open reviews of a project and
 always try to keep it open in a tab, and open specific code reviews in
 new tabs.

 Steve

 Chen CH Ji jiche...@cn.ibm.com wrote on 12/01/2014 12:59:52 AM:

  From: Chen CH Ji jiche...@cn.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:09 AM
  Subject: Re: [openstack-dev] [gerrit] Gerrit review problem
 
  +1, I also found this inconvenient point before ,thanks Jay for bring up
 :)
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
  [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a
  patch for OpenStack, after review finished, I want to check more
  patches for this pr
 
  From: Jay Lau jay.lau@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:56 PM
  Subject: [openstack-dev] [gerrit] Gerrit review problem
 
 
 
 
  When I review a patch for OpenStack, after review finished, I want
  to check more patches for this project and then after click the
  Project content for this patch, it will **not** jump to all
  patches but project description. I think it is not convenient for a
  reviewer if s/he wants to review more patches for this project.
 
  [image removed]
 
  [image removed]
 
  --
  Thanks,
 
  Jay___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread OpenStack Dev
Jay, this has been raised & discussed, pre & post the gerrit upgrade :)

On Mon Dec 01 2014 at 12:00:02 PM Jay Lau jay.lau@gmail.com wrote:

 Thanks all, yes, there are ways I can get all on-going patches for one
 project, I was complaining this because I can always direct to the right
 page before gerrit review upgrade, the upgrade broken this feature which
 makes not convenient for reviewers...

 2014-12-01 14:13 GMT+08:00 Steve Martinelli steve...@ca.ibm.com:

 Clicking on the magnifying glass icon (next to the project name) lists
 all recent patches for that project (closed and open).

 I have a bookmark that lists all open reviews of a project and
 always try to keep it open in a tab, and open specific code reviews in
 new tabs.

 Steve

 Chen CH Ji jiche...@cn.ibm.com wrote on 12/01/2014 12:59:52 AM:

  From: Chen CH Ji jiche...@cn.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:09 AM
  Subject: Re: [openstack-dev] [gerrit] Gerrit review problem
 
  +1, I also found this inconvenient point before ,thanks Jay for bring
 up :)
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
  [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a
  patch for OpenStack, after review finished, I want to check more
  patches for this pr
 
  From: Jay Lau jay.lau@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:56 PM
  Subject: [openstack-dev] [gerrit] Gerrit review problem
 
 
 
 
  When I review a patch for OpenStack, after review finished, I want
  to check more patches for this project and then after click the
  Project content for this patch, it will **not** jump to all
  patches but project description. I think it is not convenient for a
  reviewer if s/he wants to review more patches for this project.
 
  [image removed]
 
  [image removed]
 
  --
  Thanks,
 
  Jay___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] is there a way to simulate thousands or millions of compute nodes?

2014-11-30 Thread Gareth
@Michael

Okay, focusing on 'thousands' now; I know 'millions' is not a good metaphor
here. I also know the 'cells' functionality is nova's solution for large-scale
deployment. But it also makes sense to find and reproduce large-scale
problems in a relatively small-scale deployment.

@Sandy

"All-in-all, I think you'd be better off load testing each piece
independently on a fixed hardware platform and faking out all the
incoming/outgoing services"

I understand, and that is exactly what I want to know: is anyone already
doing work like this? If yes, I would like to join :)
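
To illustrate the kind of faking I have in mind (a rough sketch, not real
nova code): wrap each operation of a fake virt driver in a configurable,
jittered delay so it takes roughly as long as the real hypervisor call
would.

    import functools
    import random
    import time

    # Hypothetical per-operation latencies, in seconds.
    OPERATION_DELAYS = {'spawn': 2.0, 'destroy': 1.0, 'reboot': 0.5}

    def delayed(name):
        """Make a fake operation sleep for a configurable, jittered time."""
        def wrap(func):
            @functools.wraps(func)
            def inner(*args, **kwargs):
                base = OPERATION_DELAYS.get(name, 0.1)
                time.sleep(random.uniform(0.5 * base, 1.5 * base))
                return func(*args, **kwargs)
            return inner
        return wrap

    class DelayingFakeDriver(object):
        """Stand-in for a subclass of nova's FakeDriver."""

        @delayed('spawn')
        def spawn(self, instance):
            return instance  # the real fake driver just marks it ACTIVE

        @delayed('destroy')
        def destroy(self, instance):
            return None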



On Fri, Nov 28, 2014 at 8:36 AM, Sandy Walsh sandy.wa...@rackspace.com
wrote:

 From: Michael Still [mi...@stillhq.com] Thursday, November 27, 2014 6:57
 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] is there a way to simulate thousands
 or millions of compute nodes?
 
 I would say that supporting millions of compute nodes is not a current
 priority for nova... We are actively working on improving support for
 thousands of compute nodes, but that is via cells (so each nova deploy
 except the top is still in the hundreds of nodes).

 ramble on

 Agreed, it wouldn't make much sense to simulate this on a single machine.

 That said, if one *was* to simulate this, there are the well known
 bottlenecks:

 1. the API. How much can one node handle with given hardware specs? Which
 operations hit the DB the hardest?
 2. the Scheduler. There's your API bottleneck and big load on the DB for
 Create operations.
 3. the Conductor. Shouldn't be too bad, essentially just a proxy.
 4. child-to-global-cell updates. Assuming a two-cell deployment.
 5. the virt driver. YMMV.
 ... and that's excluding networking, volumes, etc.

 The virt driver should be load tested independently. So FakeDriver would
 be fine (with some delays added for common operations as Gareth suggests).
 Something like Bees-with-MachineGuns could be used to get a baseline metric
 for the API. Then it comes down to DB performance in the scheduler and
 conductor (for a single cell). Finally, inter-cell loads. Who blows out the
 queue first?

 All-in-all, I think you'd be better off load testing each piece
 independently on a fixed hardware platform and faking out all the
 incoming/outgoing services. Test the API with fake everything. Test the
 Scheduler with fake API calls and fake compute nodes. Test the conductor
 with fake compute nodes (not FakeDriver). Test the compute node directly.

 Probably all going to come down to the DB and I think there is some good
 performance data around that already?

 But I'm just spit-ballin' ... and I agree, not something I could see the
 Nova team taking on in the near term ;)

 -S




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-11-30 Thread Anant Patil
On 27-Nov-14 18:03, Murugan, Visnusaran wrote:
 Hi Zane,
 
  
 
 At this stage our implementation (as mentioned in wiki
 https://wiki.openstack.org/wiki/Heat/ConvergenceDesign) achieves your
 design goals.
 
  
 
 1.   In case of a parallel update, our implementation adjusts the graph
 according to the new template and waits for dispatched resource tasks to
 complete.
 
 2.   Reason for basing our PoC on Heat code:
 
 a.   To solve the contention that arises when all dependent resources
 process the parent resource in parallel.
 
 b.  To avoid porting issues from the PoC to HeatBase (and to be aware
 of potential issues asap).
 
 3.   Resource timeouts would be helpful, but I guess they are resource
 specific and have to come from the template, with default values from plugins.
 
 4.   We see the problem area as aggregating resource notifications and
 processing the next level of resources without contention and with
 minimal DB usage. We are working on the following approaches in *parallel*:
 
 a.   Use a Queue per stack to serialize notification.
 
 b.  Get parent ProcessLog (ResourceID, EngineID) and initiate
 convergence upon first child notification. Subsequent children who fail
 to get parent resource lock will directly send message to waiting parent
 task (topic=stack_id.parent_resource_id)
 
 Based on performance/feedback we can select either or a mashed version.
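 
 As a toy illustration of what we mean by aggregating child notifications
 (a sketch only, not our PoC code): each parent keeps a count of
 unfinished children, and the child whose notification brings that count
 to zero dispatches the parent.
 
     import collections
     import threading
 
     class Traversal(object):
         def __init__(self, graph):
             # 'graph' maps each parent to the set of child resources
             # it waits on.
             self.pending = {r: len(deps) for r, deps in graph.items()}
             self.parents = collections.defaultdict(set)
             for r, deps in graph.items():
                 for d in deps:
                     self.parents[d].add(r)
             self.lock = threading.Lock()
 
         def start(self, dispatch):
             # Leaf resources (no children to wait on) converge first.
             for r, count in list(self.pending.items()):
                 if count == 0:
                     dispatch(r)
 
         def notify_done(self, resource, dispatch):
             """Called by a worker once 'resource' has converged."""
             for parent in self.parents[resource]:
                 with self.lock:
                     self.pending[parent] -= 1
                     ready = (self.pending[parent] == 0)
                 if ready:
                     dispatch(parent)
 
 (In a real implementation the counter and lock would presumably live in
 the DB / ProcessLog table rather than in memory.)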
 
  
 
 Advantages:
 
 1.   Failed Resource tasks can be re-initiated after ProcessLog
 table lookup.
 
 2.   One worker == one resource.
 
 3.   Supports concurrent updates
 
 4.   Delete == update with empty stack
 
 5.   Rollback == update to previous known good/completed stack.
 
  
 
 Disadvantages:
 
 1.   Still holds stackLock (WIP to remove with ProcessLog)
 
  
 
 We completely understand your concern about reviewing our code, since the
 commits are numerous and the approach changes course in places. Our starting
 commit is [c1b3eb22f7ab6ea60b095f88982247dd249139bf], though this might not help :)
 
  
 
 Your Thoughts.
 
  
 
 Happy Thanksgiving.
 
 Vishnu.
 
  
 
 *From:*Angus Salkeld [mailto:asalk...@mirantis.com]
 *Sent:* Thursday, November 27, 2014 9:46 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown
 
  
 
 On Thu, Nov 27, 2014 at 12:20 PM, Zane Bitter zbit...@redhat.com wrote:
 
 A bunch of us have spent the last few weeks working independently on
 proof of concept designs for the convergence architecture. I think
 those efforts have now reached a sufficient level of maturity that
 we should start working together on synthesising them into a plan
 that everyone can forge ahead with. As a starting point I'm going to
 summarise my take on the three efforts; hopefully the authors of the
 other two will weigh in to give us their perspective.
 
 
 Zane's Proposal
 ===
 
 https://github.com/zaneb/heat-convergence-prototype/tree/distributed-graph
 
 I implemented this as a simulator of the algorithm rather than using
 the Heat codebase itself in order to be able to iterate rapidly on
 the design, and indeed I have changed my mind many, many times in
 the process of implementing it. Its notable departure from a
 realistic simulation is that it runs only one operation at a time -
 essentially giving up the ability to detect race conditions in
 exchange for a completely deterministic test framework. You just
 have to imagine where the locks need to be. Incidentally, the test
 framework is designed so that it can easily be ported to the actual
 Heat code base as functional tests so that the same scenarios could
 be used without modification, allowing us to have confidence that
 the eventual implementation is a faithful replication of the
 simulation (which can be rapidly experimented on, adjusted and
 tested when we inevitably run into implementation issues).
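 
 As a toy illustration of the single-operation-at-a-time idea (not the
 actual simulator code): operations are queued rather than run immediately,
 and one loop drains the queue, so every test run is fully deterministic.
 
     import collections
 
     class Simulator(object):
         def __init__(self):
             self.queue = collections.deque()
 
         def call(self, func, *args):
             """Schedule an operation instead of running it right away."""
             self.queue.append((func, args))
 
         def run(self):
             # Drain exactly one operation at a time; operations may
             # schedule further operations via call().
             while self.queue:
                 func, args = self.queue.popleft()
                 func(*args)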
 
 This is a complete implementation of Phase 1 (i.e. using existing
 resource plugins), including update-during-update, resource
 clean-up, replace on update and rollback; with tests.
 
 Some of the design goals which were successfully incorporated:
 - Minimise changes to Heat (it's essentially a distributed version
 of the existing algorithm), and in particular to the database
 - Work with the existing plugin API
 - Limit total DB access for Resource/Stack to O(n) in the number of
 resources
 - Limit overall DB access to O(m) in the number of edges
 - Limit lock contention to only those operations actually contending
 (i.e. no global locks)
 - Each worker task deals with only one resource
 - Only read resource attributes once
 
 
 Open questions:
 - What do we do when we encounter a resource that is in progress
 from a previous update while doing a subsequent update? Obviously we
 don't 

Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Jay Lau
I'm OK if all reviewers agree on this proposal; I may just need to bookmark
the projects that I want to review. ;-)

2014-12-01 14:35 GMT+08:00 OpenStack Dev cools...@gmail.com:

 Jay this has been informed  discussed, pre  post the gerrit upgrade :)


 On Mon Dec 01 2014 at 12:00:02 PM Jay Lau jay.lau@gmail.com wrote:

 Thanks all, yes, there are ways I can get all on-going patches for one
 project, I was complaining this because I can always direct to the right
 page before gerrit review upgrade, the upgrade broken this feature which
 makes not convenient for reviewers...

 2014-12-01 14:13 GMT+08:00 Steve Martinelli steve...@ca.ibm.com:

 Clicking on the magnifying glass icon (next to the project name) lists
 all recent patches for that project (closed and open).

 I have a bookmark that lists all open reviews of a project and
 always try to keep it open in a tab, and open specific code reviews in
 new tabs.

 Steve

 Chen CH Ji jiche...@cn.ibm.com wrote on 12/01/2014 12:59:52 AM:

  From: Chen CH Ji jiche...@cn.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:09 AM
  Subject: Re: [openstack-dev] [gerrit] Gerrit review problem
 
  +1, I also found this inconvenient point before ,thanks Jay for bring
 up :)
 
  Best Regards!
 
  Kevin (Chen) Ji 纪 晨
 
  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
  Phone: +86-10-82454158
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
  District, Beijing 100193, PRC
 
  [image removed] Jay Lau ---12/01/2014 01:56:48 PM---When I review a
  patch for OpenStack, after review finished, I want to check more
  patches for this pr
 
  From: Jay Lau jay.lau@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Date: 12/01/2014 01:56 PM
  Subject: [openstack-dev] [gerrit] Gerrit review problem
 
 
 
 
  When I review a patch for OpenStack, after review finished, I want
  to check more patches for this project and then after click the
  Project content for this patch, it will **not** jump to all
  patches but project description. I think it is not convenient for a
  reviewer if s/he wants to review more patches for this project.
 
  [image removed]
 
  [image removed]
 
  --
  Thanks,
 
  Jay___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gerrit] Gerrit review problem

2014-11-30 Thread Andreas Jaeger
On 12/01/2014 08:16 AM, Jay Lau wrote:
 I'm OK if all reviewers agree on this proposal, I may need to bookmark
 the projects that I want to review. ;-)


Add the projects you want to review to your watch list (via
Settings -> Watched Projects) and see all open patches in your watched
changes list: https://review.openstack.org/#/q/is:watched+status:open,n,z

You can also create a personal dashboard using
http://git.openstack.org/cgit/stackforge/gerrit-dash-creator/ . Several
projects have their own dashboards, see for example:
http://is.gd/openstackdocsreview


Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF:Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev