[openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
Hello, oslo cores.

I've finished polishing up oslo.concurrency repo at [0] - please take a
look at it. I used my new version of graduate.sh [1] to generate it, so
history looks a bit different from what you might be used to.

I've made as few changes as possible, so there are still some steps left
that should be done after the new repo is created:
- fix PEP8 errors H405 and E126;
- use strutils from oslo.utils;
- remove eventlet dependency (along with random sleeps), but proper testing
with eventlet should remain;
- fix for bug [2] should be applied from [3] (although it needs some
improvements);
- oh, there's really no limit for this...

I'll finalize and publish the relevant change request to openstack-infra/config
soon.

Looking forward to any feedback!

[0] https://github.com/YorikSar/oslo.concurrency
[1] https://review.openstack.org/109779
[2] https://bugs.launchpad.net/oslo/+bug/1327946
[3] https://review.openstack.org/108954

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:58 PM, Yuriy Taraday yorik@gmail.com wrote:

 Hello, oslo cores.

 I've finished polishing up oslo.concurrency repo at [0] - please take a
 look at it. I used my new version of graduate.sh [1] to generate it, so
 history looks a bit different from what you might be used to.

 I've made as few changes as possible, so there are still some steps left
 that should be done after the new repo is created:
 - fix PEP8 errors H405 and E126;
 - use strutils from oslo.utils;
 - remove eventlet dependency (along with random sleeps), but proper
 testing with eventlet should remain;
 - fix for bug [2] should be applied from [3] (although it needs some
 improvements);
 - oh, there's really no limit for this...

 I'll finalize and publish relevant change request to
 openstack-infra/config soon.


Here it is: https://review.openstack.org/112666

Looking forward to any feedback!

 [0] https://github.com/YorikSar/oslo.concurrency
 [1] https://review.openstack.org/109779
 [2] https://bugs.launchpad.net/oslo/+bug/1327946
  [3] https://review.openstack.org/108954

 --

 Kind regards, Yuriy.




-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:28 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 08/06/2014 05:41 PM, Zane Bitter wrote:

 On 06/08/14 18:12, Yuriy Taraday wrote:

 Well, as per the Git author, that's how you should do things with not-CVS. You
 have cheap merges - use them instead of erasing parts of history.


 This is just not true.

 http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html

 Choice quotes from the author of Git:

 * 'People can (and probably should) rebase their _private_ trees'
 * 'you can go wild on the git rebase thing'
 * 'we use git rebase etc while we work on our problems.'
 * 'git rebase is not wrong.'


 Also relevant:

 ...you must never pull into a branch that isn't already
 in good shape.

 Don't merge upstream code at random points.

 keep your own history clean


And in the very same thread he says "I don't like how you always rebased
patches" and that none of these rules should be absolutely black-and-white.
But let's not get drawn into a discussion of what Linus said (or I'll have
to rewatch his ages-old talk at Google to get proper quotes).
In no way do I want to promote exposing private trees with all those
intermediate changes. And my proposal is not against rebasing (although we
could use the -R option for git-review more often to publish what we've tested
and to let reviewers see diffs between patchsets). It is about letting people
keep the history of their work while giving you a crystal-clean change
request series.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 7:36 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 05:35 PM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec openst...@nemebean.com
 wrote:
  You keep mentioning detached HEAD and reflog.  I have never had to deal
  with either when doing a rebase, so I think there's a disconnect here.
  The only time I see a detached HEAD is when I check out a change from
  Gerrit (and I immediately stick it in a local branch, so it's a
  transitive state), and the reflog is basically a safety net for when I
  horribly botch something, not a standard tool that I use on a daily
 basis.
 
 
  It usually takes some time for me to build trust in a utility that does a
  lot of different things at once when I need only one small piece of it.
  So I usually do something like:
  $ git checkout HEAD~2
  $ vim
  $ git commit
  $ git checkout mybranch
  $ git rebase --onto HEAD@{1} HEAD~2
  instead of almost the same workflow with interactive rebase.

 "I'm sorry, but I don't trust the well-tested, widely used tool that Git
 provides to make this easier, so I'm going to reimplement essentially the
 same thing in a messier way myself" is a non-starter for me.  I'm not
 surprised you dislike rebases if you're doing this, but it's a solved
 problem.  Use git rebase -i.


I'm sorry, I must've misled you by using the word 'trust' in that sentence.
It's more like understanding: I like to understand how things work. I don't
like treating tools as black boxes. And I also don't like it when a tool does
a lot of things at once with no way back. So yes, I decompose 'rebase -i' a
bit and get a slightly (one command, really) longer workflow. But at least I
can stop at any point and think about whether I'm really finished with this
step. Sometimes interactive rebase works better for me than this, sometimes
it doesn't. It all depends on the situation.

I don't dislike rebases just because I sometimes use a slightly longer
version of one. I would be glad to avoid them because they destroy history
that could help me later.

I think I've said all I'm going to say on this.


I hope you don't think that this thread was about rebases vs merges. It's
about keeping track of your changes without impacting the review process.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Fri, Aug 8, 2014 at 3:03 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 08/07/2014 04:52 PM, Yuriy Taraday wrote:

  I hope you don't think that this thread was about rebases vs merges.
 It's about keeping track of your changes without impacting the review process.


 But if you rebase, what is stopping you from keeping whatever private
 history you want and then rebase the desired changes onto the version that
 the current review tools are using?


That's almost what my proposal is about: allowing the developer to keep
private history and store uploaded changes separately.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Yuriy Taraday
On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow harlo...@outlook.com wrote:

 One question from me:

 Will there be later fixes to remove oslo.config dependency/usage from
 oslo.concurrency?

 I still don't understand how oslo.concurrency can be used as a library
 with the configuration being set in a static manner via oslo.config (let's
 use the example of `lock_path` @
 https://github.com/YorikSar/oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). For
 example:

 Library X inside application Z uses lockutils (via the nice
 oslo.concurrency library) and sets the configuration `lock_path` to its
 desired settings, then library Y (also a user of oslo.concurrency) inside
 same application Z sets the configuration for `lock_path` to its desired
 settings. Now both have some unknown set of configuration they have set and
 when library X (or Y) continues to use lockutils they will be using some
 mix of configuration (likely some mish mash of settings set by X and Y);
 perhaps to a `lock_path` that neither actually wants to be able to write
 to...

 This doesn't seem like it will end well; and will just cause headaches
 during debug sessions, testing, integration and more...

 The same question can be asked about the `set_defaults()` function, how is
 library Y or X expected to use this (are they?)??

 I hope one of the later changes is to remove/fix this??

 Thoughts?

 -Josh


I'd be happy to remove the lock_path config variable altogether. It's
basically never used. There are two basic branches in the code wrt lock_path:
- when you provide a lock_path argument to lock (and derivative functions), a
file-based lock is used and CONF.lock_path is ignored;
- when you don't provide lock_path in the arguments, a semaphore-based lock is
used and CONF.lock_path is just a prefix for its name (before hashing).
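To make the dispatch concrete, here is a minimal, self-contained sketch of
those two branches. It is not the actual oslo code: CONF_LOCK_PATH, get_lock
and _FileLock are illustrative stand-ins for CONF.lock_path and lockutils'
internals.

```python
import fcntl
import hashlib
import os
import threading
import weakref

# Illustrative stand-in for CONF.lock_path (not a real oslo.config option).
CONF_LOCK_PATH = '/tmp'

# Process-local semaphores, keyed by hashed name, as in the second branch.
_semaphores = weakref.WeakValueDictionary()


class _FileLock(object):
    """Interprocess file-based lock (first branch: lock_path was given)."""

    def __init__(self, path):
        self.path = path

    def __enter__(self):
        self.fd = open(self.path, 'w')
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # blocks until the lock is free
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        self.fd.close()


def get_lock(name, lock_path=None):
    if lock_path is not None:
        # lock_path argument given: file-based lock, CONF is ignored.
        return _FileLock(os.path.join(lock_path, 'lock-%s' % name))
    # No lock_path argument: process-local semaphore whose key is derived
    # from the configured prefix plus the name, hashed.
    key = hashlib.sha1(
        ('%s/%s' % (CONF_LOCK_PATH, name)).encode()).hexdigest()
    try:
        sem = _semaphores[key]
    except KeyError:
        sem = threading.Semaphore()
        _semaphores[key] = sem
    return sem
```

Note how CONF_LOCK_PATH only ever influences the semaphore key, which is why
setting it in a config file has almost no visible effect.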

I wonder if users even set lock_path in their configs, as it has almost no
effect. So I'm all for removing it, but...
From what I understand, every major change in lockutils drags along a lot of
headaches for everybody (and the risk of bugs that would be discovered very
late). So is such a change really worth it? And if so, it will require very
thorough research of lockutils usage patterns.

-- 

Kind regards, Yuriy.


[openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-02 Thread Yuriy Taraday
Hello.

Currently for alpha releases of oslo libraries we generate either universal
or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
releases in projects where Python 3.x is supported and verified in the
gate. I've run into this in change request [1] generated after
global-requirements change [2]. There we have oslotest library that can't
be built as a universal wheel because of different requirements (mox vs
mox3 as I understand is the main difference). Because of that py33 job in
[1] failed and we can't bump oslotest version in requirements.

I propose changing the infra scripts that generate and upload wheels to
create py3 wheels as well as py2 wheels for projects that support Python 3.x
(we can use the setup.cfg classifiers to find that out) but don't support
universal wheels. What do you think about that?

[1] https://review.openstack.org/117940
[2] https://review.openstack.org/115643

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Yuriy Taraday
On Tue, Sep 2, 2014 at 11:17 PM, Clark Boylan cboy...@sapwetik.org wrote:

 On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
  Hello.
 
  Currently for alpha releases of oslo libraries we generate either
  universal
  or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
  releases in projects where Python 3.x is supported and verified in the
  gate. I've run into this in change request [1] generated after
  global-requirements change [2]. There we have oslotest library that can't
  be built as a universal wheel because of different requirements (mox vs
  mox3 as I understand is the main difference). Because of that py33 job in
  [1] failed and we can't bump oslotest version in requirements.
 
  I propose to change infra scripts that generate and upload wheels to
  create
  py3 wheels as well as py2 wheels for projects that support Python 3.x (we
  can use setup.cfg classifiers to find that out) but don't support
  universal
  wheels. What do you think about that?
 
  [1] https://review.openstack.org/117940
  [2] https://review.openstack.org/115643
 
  --
 
  Kind regards, Yuriy.

 We may find that we will need to have py3k wheels in addition to the
 existing wheels at some point, but I don't think this use case requires
 it. If oslo.test needs to support python2 and python3 it should use mox3
 in both cases which claims to support python2.6, 2.7 and 3.2. Then you
 can ship a universal wheel. This should solve the immediate problem.


Yes, I think it's the way to go for oslotest specifically. I've created a
change request for this: https://review.openstack.org/118551

 It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.


We can make eventlet an optional dependency of oslo.messaging (through
setuptools' extras). In fact, I don't quite understand the need for eventlet
as a direct dependency there, since we can just write code that uses the
threading library and it'll get monkeypatched if the consumer app wants to
use eventlet.
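As a sketch of that idea: the following worker pool touches only the stdlib
threading and queue modules, so it runs on OS threads as-is, and an
application that calls eventlet.monkey_patch() before importing it would get
green threads with no code change. The function name and structure are mine,
not oslo.messaging's.

```python
import queue  # on Python 2 this was the Queue module
import threading

# If the consuming app wants eventlet, it monkeypatches *before* importing
# this module, and the very same code then runs on green threads:
#
#     import eventlet
#     eventlet.monkey_patch()


def run_workers(messages, handler, workers=4):
    """Run handler over messages on a pool of (possibly green) threads."""
    inbox = queue.Queue()
    for msg in messages:
        inbox.put(msg)
    results = queue.Queue()

    def worker():
        while True:
            try:
                msg = inbox.get_nowait()
            except queue.Empty:
                return  # inbox drained, worker exits
            results.put(handler(msg))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)
```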

The setup.cfg classifiers should be able to do that for us, though PBR
 may need updating?


I don't think so - it loads all classifiers from setup.cfg, so they should be
available through some distutils machinery.

We will also need to learn to upload potentially more than one
 wheel in our wheel jobs. That bit is likely straightforward. The last
 thing that we need to make sure we do is that we have some testing in
 place for the special wheels. We currently have the requirements
 integration test which runs under python2 checking that we can actually
 install all the things together. This ends up exercising our wheels and
 checking that they actually work. We don't have a python3 equivalent for
 that job. It may be better to work out some explicit checking of the
 wheels we produce that applies to both versions of python. I am not
 quite sure how we should approach that yet.


I guess we can just repeat that check with Python 3.x. If I see it right,
all we need is to repeat the loop in pbr/tools/integration.sh with a
different Python version. One problem is that we'd be running this test with
Python 3.4, which is the default on trusty, while all our unit test jobs run
on 3.3 instead. Maybe we should drop 3.3 already?

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-04 Thread Yuriy Taraday
On Wed, Sep 3, 2014 at 7:24 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 3, 2014, at 5:27 AM, Yuriy Taraday yorik@gmail.com wrote:

 On Tue, Sep 2, 2014 at 11:17 PM, Clark Boylan cboy...@sapwetik.org
 wrote:

 It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.


 We can make eventlet an optional dependency of oslo.messaging (through
 setuptools' extras). In fact I don't quite understand the need for eventlet
 as direct dependency there since we can just write code that uses threading
 library and it'll get monkeypatched if consumer app wants to use eventlet.


 There is code in the messaging library that makes calls directly into
 eventlet now, IIRC. It sounds like that could be changed, but that’s
 something to consider for a future version.


Yes, I hope to see a unified threading/eventlet executor there
(futures-based, I guess) some day.

The last time I looked at setuptools extras they were a documented but
 unimplemented specification. Has that changed?


According to the docs [1] it works in pip (and has been working in setuptools
for ages), and according to bug [2], it has been working for a couple of years.

[1] http://pip.readthedocs.org/en/latest/reference/pip_install.html#examples
(#6)
[2] https://github.com/pypa/pip/issues/7

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-04 Thread Yuriy Taraday
On Wed, Sep 3, 2014 at 8:21 PM, Doug Hellmann d...@doughellmann.com wrote:

  On Sep 3, 2014, at 11:57 AM, Clark Boylan cboy...@sapwetik.org wrote:
  On Wed, Sep 3, 2014, at 08:22 AM, Doug Hellmann wrote:
 
  On Sep 2, 2014, at 3:17 PM, Clark Boylan cboy...@sapwetik.org wrote:
  The setup.cfg classifiers should be able to do that for us, though PBR
  may need updating? We will also need to learn to upload potentially more than one
 
  How do you see that working? We want all of the Oslo libraries to,
  eventually, support both python 2 and 3. How would we use the
 classifiers
  to tell when to build a universal wheel and when to build separate
  wheels?
 
  The classifiers provide info on the versions of python we support. By
  default we can build python2 wheel if only 2 is supported, build python3
  wheel if only 3 is supported, build a universal wheel if both are
  supported. Then we can add a setup.cfg flag to override the universal
  wheel default to build both a python2 and python3 wheel instead. Dstufft
  and mordred should probably comment on this idea before we implement
  anything.

 OK. I’m not aware of any python-3-only projects, and the flag to override
 the universal wheel is the piece I was missing. I think there’s already a
 setuptools flag related to whether or not we should build universal wheels,
 isn’t there?


I think we should rely on the wheel.universal flag from setup.cfg if it's
there. If it's set, we should always build universal wheels. If it's not
set, we should look at the classifiers and build wheels for the Python
versions that are mentioned there.
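A rough sketch of that decision logic, assuming the [wheel]/[bdist_wheel]
universal flag and pbr-style [metadata] classifier entries (the function name
and return convention are illustrative, not an existing infra script):

```python
import configparser


def wheels_to_build(setup_cfg_text):
    """Decide which wheel flavours to build from a setup.cfg's contents.

    Returns 'universal' when the universal flag is set, otherwise a sorted
    list drawn from ['py2', 'py3'] based on the trove classifiers.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(setup_cfg_text)
    # Projects of that era used [wheel]; later ones use [bdist_wheel].
    for section in ('wheel', 'bdist_wheel'):
        if cfg.has_section(section) and \
                cfg.getboolean(section, 'universal', fallback=False):
            return 'universal'
    majors = set()
    for line in cfg.get('metadata', 'classifier', fallback='').splitlines():
        line = line.strip()
        if line.startswith('Programming Language :: Python :: 2'):
            majors.add('py2')
        elif line.startswith('Programming Language :: Python :: 3'):
            majors.add('py3')
    return sorted(majors)
```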

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-04 Thread Yuriy Taraday
On Thu, Sep 4, 2014 at 4:47 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-09-03 13:27:55 +0400 (+0400), Yuriy Taraday wrote:
 [...]
  May be we should drop 3.3 already?

 It's in progress. Search review.openstack.org for open changes in
 all projects with the topic py34. Shortly I'll also have some
 infra config changes up to switch python33 jobs out for python34,
 ready to drop once the j-3 milestone has been tagged and is finally
 behind us.


Great! Looking forward to purging Python 3.3 from my system.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [kesytone][multidomain] - Time to leave LDAP backend?

2014-09-10 Thread Yuriy Taraday
On Tue, Sep 9, 2014 at 8:25 AM, Nathan Kinder nkin...@redhat.com wrote:

 On 09/01/2014 01:43 AM, Marcos Fermin Lobo wrote:
  Hi all,
 
 
 
  I found two functionalities for keystone that could be against each
 other.
 
 
 
  Multi-domain feature (This functionality is new in Juno.)
 
  ---
 
  Link:
 
 http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers
 
 
  Keystone supports the option to specify identity driver configurations
  on a domain by domain basis, allowing, for example, a specific domain to
  have its own LDAP or SQL server. So, we can use different backends for
  different domains. But, as Henry Nash said “it has not been validated
  with multiple SQL drivers”
  https://bugs.launchpad.net/keystone/+bug/1362181/comments/2
 
 
 
  Hierarchical Multitenancy
 
  
 
  Link:
 
 https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
 
  This is nested projects feature but, only for SQL, not LDAP.
 
 
 
  So, if you are using LDAP and you want the “nested projects” feature, you
  should migrate from LDAP to SQL. But if you want the multi-domain
  feature too, you can’t use 2 SQL backends (you need at least one LDAP
  backend) because it is not validated for multiple SQL drivers…
 
 
 
  Maybe I’m missing something - please correct me if I’m wrong.
 
 
 
  Here my questions:
 
 
 
  -  If I want the Multi-domain and Hierarchical Multitenancy
  features, what are my options? What should I do (migrate or not migrate
  to SQL)?
 
  -  Is LDAP going to be deprecated soon?

 I think you need to keep in mind that there are two separate backends
 that support LDAP: identity and assignment.

 From everyone I have talked to on the Keystone team, SQL is preferred
 for the assignment backend.  Storing assignment information in LDAP
 seems to be a non-standard use case.

 For the identity backend, LDAP is preferred.  Many people have users and
 groups already in an LDAP server, and Keystone should be able to take
 advantage of those existing users and credentials for centralized
 authentication.  In addition, every LDAP server I know have has better
 security features than the SQL identity backend offers, such as password
 policies and account lockout.

 The multiple domain support for multiple LDAP servers was really
 designed to allow for separate groups of users from separate identity
 LDAP servers to be usable in a single Keystone instance.

 Given that the Keystone team considers SQL as the preferred assignment
 backend, the hierarchical project blueprint was targeted against it.
 The idea is that you would use LDAP server(s) for your users and have
 hierarchical projects in SQL.

 My personal feeling is that the LDAP assignment backend should
 ultimately be deprecated.  I don't think the LDAP assignment backend
 really offers any benefit over SQL, and you have to define some
 non-standard LDAP schema to represent projects, roles, etc., or you end
 up trying to shoehorn the data into standard LDAP schema that was really
 meant for something else.

 It would be interesting to create a poll like Morgan did for the
 Keystone token format to see how widely the LDAP assignments backend is used.
  Even more interesting would be to know the reasons why people are using
 it over SQL.


Please don't consider the LDAP assignment backend an outcast. It is used, and
we have use cases where it's the only way to go.

Some enterprises with strict security policies require all security-related
tasks to be done through AD, and project/role assignment is one of them. The
LDAP assignment backend is the right fit here.
Storing such info in AD provides the additional benefit of not only a single
management point, but also enterprise-ready cross-datacenter replication.
(Galera and other MySQL replication schemes arguably don't quite work for
this.)
From what I see, the only obstruction here is the need for a custom LDAP
schema for AD (which doesn't fly with strict enterprise constraints). That
can be mitigated by using AD-native objectClasses for projects and groups
instead of 'groupOfNames' and 'organizationalRole': 'organizationalUnit' and
'group'. These objects can be managed by commonly used AD tools (not an LDAP
editor), but require some changes in Keystone to work. We've hacked together
some patches to Keystone that should make it work and will propose them in
the Kilo cycle.
Another missing feature is domains/hierarchical projects. It's not
impossible to implement this in the LDAP backend, but we need someone to step
up here. With OUs it should be rather obvious how to store these in LDAP,
but we'll need some algorithmic support as well.
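For illustration only, nested projects could map onto AD-native
organizationalUnit entries like this (the suffix and helper names are
assumptions, not Keystone's actual schema, and DN value escaping is ignored):

```python
def project_dn(path, suffix='dc=example,dc=com'):
    """Map a nested project path to a DN built from organizationalUnit RDNs.

    ['projectA', 'projectB'] -> 'ou=projectB,ou=projectA,dc=example,dc=com'
    (DN value escaping is ignored for brevity.)
    """
    rdns = ['ou=%s' % name for name in reversed(path)]
    return ','.join(rdns + [suffix])


def parent_dn(dn):
    """The enclosing entry of a DN: everything after the first RDN."""
    return dn.split(',', 1)[1]
```

With this layout, walking up the project hierarchy is just repeated
parent_dn() calls, which is the "algorithmic support" the backend would need.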

We shouldn't give up on the LDAP backend. It's used by a lot of private
clouds and some public ones. The problem is that its users usually aren't
ready to make the necessary changes to make it work, and so have to bend
their rules to make the existing backend work. Some of them are already
giving back: connection 

Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-10 Thread Yuriy Taraday
On Tue, Sep 9, 2014 at 9:58 PM, Doug Hellmann d...@doughellmann.com wrote:


 On Sep 9, 2014, at 10:51 AM, Sean Dague s...@dague.net wrote:

  On 09/09/2014 10:41 AM, Doug Hellmann wrote:
 
  On Sep 8, 2014, at 8:18 PM, James E. Blair cor...@inaugust.com wrote:
 
  Sean Dague s...@dague.net writes:
 
  The crux of the issue is that zookeeper python modules are C
 extensions.
  So you have to either install from packages (which we don't do in unit
  tests) or install from pip, which means forcing zookeeper dev packages
  locally. Realistically this is the same issue we end up with for mysql
  and pg, but given their wider usage we just forced that pain on
 developers.
  ...
  Which feels like we need some decoupling on our requirements vs. tox
  targets to get there. CC to Monty and Clark as our super awesome tox
  hackers to help figure out if there is a path forward here that makes
 sense.
 
  From a technical standpoint, all we need to do to make this work is to
  add the zookeeper python client bindings to (test-)requirements.txt.
  But as you point out, that makes it more difficult for developers who
  want to run unit tests locally without having the requisite libraries
  and header files installed.
 
  I don’t think I’ve ever tried to run any of our unit tests on a box
 where I hadn’t also previously run devstack to install all of those sorts
 of dependencies. Is that unusual?
 
  It is for Linux users, running local unit tests is the norm for me.

 To be clear, I run the tests on the same host where I ran devstack, not in
 a VM. I just use devstack as a way to bootstrap all of the libraries needed
 for the unit test dependencies. I guess I’m just being lazy. :-)


You can't run devstack everywhere you code (and want to run tests). I, for
example, can't run devstack on my work laptop because I use Gentoo there.
And I have Mac OS X on my home laptop, so no devstack there either. The
latter should be the more frequent case in the community.

That said, I've never had a problem with emerging (on either system) the C
libraries necessary for tests to run. As long as they don't pull in a lot of
(or any) Linux-specific dependencies, it's fine.

For me this issue makes the case for setuptools' extras. The only problem
with them is that we can't specify them in requirements.txt files currently,
so we'd have to add another hack to pbr to gather extra dependencies from
files like requirements-extra_name.txt or something like that.
Then we can provide different tox venvs for different extras sets.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] OK to Use Flufl.enum

2013-12-13 Thread Yuriy Taraday
Hello, Adam.

On Tue, Dec 10, 2013 at 6:55 PM, Adam Young ayo...@redhat.com wrote:

  With only a change to the import and requirements, it builds and runs,
 but raises:


 Traceback (most recent call last):
   File "keystone/tests/test_revoke.py", line 65, in test_list_is_sorted
     valid_until=valid_until))
   File "keystone/contrib/revoke/core.py", line 74, in __init__
     setattr(self, k, v)
   File "keystone/contrib/revoke/core.py", line 82, in scope_type
     self._scope_type = ScopeType[value]
   File "/opt/stack/keystone/.venv/lib/python2.7/site-packages/enum/__init__.py", line 352, in __getitem__
     return cls._member_map_[name]
 KeyError: 1


It looks like you're doing this the wrong way. Python 3.4's enums work either
as EnumClass(value) or as EnumClass[name], not as EnumClass[value], which is
what your test seems to be doing and flufl is allowing.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [swift] Blocking issue with ring rebalancing

2013-12-19 Thread Yuriy Taraday
On Thu, Dec 19, 2013 at 8:22 PM, Nikolay Markov nmar...@mirantis.com wrote:

 I created a bug on launchpad regarding this:
 https://bugs.launchpad.net/swift/+bug/1262166

 Could anybody please participate in discussion on how to overcome it?


Hello.

I know you've decided to dig a bit deeper on IRC, but I've made a pass over
the rebalance code anyway.
Please take a look at https://review.openstack.org/63315. It won't magically
turn 8 minutes into 2, but it might shorten them to 6.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [rally] Naming of a deployment

2014-01-18 Thread Yuriy Taraday
Hi all.

I might be a little out of context, but isn't that thing deployed on some
kind of cloud?


 * cluster -- is too generic, but also has connotations in HPC and
 various other technologies (databases, MQs, etc).

 * installation -- reminds me of a piece of performance art ;)

 * instance -- too much cross-terminology with server instance in Nova
 and Ironic


In which case I'd suggest borrowing another option from TripleO:
overcloud.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-18 Thread Yuriy Taraday
On Tue, Jan 14, 2014 at 6:09 PM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:

 On Mon, Jan 13, 2014 at 9:36 PM, Jamie Lennox jamielen...@redhat.comwrote:

 On Mon, 2014-01-13 at 10:05 -0500, Doug Hellmann wrote:
  What requirement(s) led to keystone supporting this feature?

 I've got no idea where the requirement came from however it is something
 that is
 supported now and so not something we can back out of.


 If it's truly a requirement, we can look into how to make that work. The
 data is obviously present in the request, so we would just need to preserve
 it.


We've seen a use case for arbitrary attributes on Keystone objects. A cloud
administrator might want to store some metadata along with a user object -
for example, a customer name/id and a couple of additional fields for contact
information. The same might apply to projects and domains.

So this is a very nice feature that should be kept around. It might be
wrapped in some way (like an explicit unchecked metadata attribute) in a new
API version, though.


Re: [openstack-dev] [nova][neutron]About creating vms without ip address

2014-01-22 Thread Yuriy Taraday
Hello.

On Tue, Jan 21, 2014 at 12:52 PM, Dong Liu willowd...@gmail.com wrote:

 What's your opinion?


We've just discussed a use case for this today. I want to create a sandbox
for Fuel, but I can't do it with OpenStack.
The reason is a bit different from the telecom case: Fuel needs to manage
nodes directly via DHCP and PXE, and you can't do that with Neutron since you
can't make its dnsmasq service quiet.

So, it's a great idea. We could have either VMs with no IP address associated
or networks with no fixed IP range; either could work.
There can be a problem with handling floating IPs, though.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [nova][neutron]About creating vms without ip address

2014-01-22 Thread Yuriy Taraday
On Thu, Jan 23, 2014 at 12:04 AM, CARVER, PAUL pc2...@att.com wrote:

  Can you elaborate on what you mean by this? You can turn of Neutron’s
 dnsmasq on a per network basis, correct? Do you mean something else by
 “make its dnsmasq service quiet”?


What I meant is for dnsmasq not to send offers to specific VMs so that
Fuel's DHCP service can serve them. We shouldn't shut off the network's
DHCP entirely, though, since we still need the Fuel VM to receive an
address for external connectivity.
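For illustration only - Neutron doesn't expose this today, but plain dnsmasq
can already be told to stay quiet for specific hosts, which is roughly the
behavior being asked for (the MAC address below is a placeholder):

```
# dnsmasq.conf fragment: never answer DHCP for this MAC,
# leaving Fuel's own DHCP server free to serve it
dhcp-host=52:54:00:12:34:56,ignore
```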

-- 

Kind regards, Yuriy.


Re: [openstack-dev] new keystone developer

2014-01-22 Thread Yuriy Taraday
Hello.

On Thu, Jan 23, 2014 at 8:06 AM, Steve Martinelli steve...@ca.ibm.comwrote:

 #3 - I'll leave this to others

 (3) Is there a way to import large chunks (or, preferably, all) of
 keystone into iPython? This makes debugging super easy and would fit in
 nicely with my existing workflow with other projects.

I think you might find ipdb (https://pypi.python.org/pypi/ipdb) useful:
it runs just like pdb but opens an IPython shell instead.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Hacking] unit test code is too less

2014-01-22 Thread Yuriy Taraday
Hello.


On Thu, Jan 23, 2014 at 6:47 AM, ZhiQiang Fan aji.zq...@gmail.com wrote:

 I noticed that in openstack-dev/hacking project, there is very little test
 code, is there any particular reason why it is in such situation?


Yes, there is. Every rule has a docstring that not only provides examples
of good and bad code but is also run as a doctest here:
https://github.com/openstack-dev/hacking/blob/master/hacking/tests/test_doctest.py


 https://github.com/openstack-dev/hacking/blob/master/hacking/core.py#L345
  it cannot detect

 \bprint$
 \bprint xxx, (\s+


I'm not sure how it can fail to detect the second one, since the regular
expression used there to detect bad strings is \bprint\s+[^\(] and it
catches the print xxx, ( string.
The first one is a good catch, though.

 If I want to improve this rule, how can I verify that my change is good?


Just change that regular expression as needed and add lines to the
docstring like these:
Okay: print()
H233: print

Happy coding!
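To make the mechanism concrete, here's a hedged sketch of a hacking-style
check (the function name and regex are simplified stand-ins, not the actual
H233 implementation): the Okay:/H233: lines in the docstring are exactly
what the doctest runner picks up, so improving a rule means updating both
the regex and those lines.

```python
import re

# Simplified stand-in for the real check; this pattern also catches
# a bare 'print' at the end of a logical line.
PRINT_RE = re.compile(r"\bprint(?:$|\s+[^\(])")

def check_print_statement(logical_line):
    r"""Check for Python 3.x incompatible print statements.

    Okay: print("hello")
    H233: print "hello"
    H233: print
    """
    match = PRINT_RE.search(logical_line)
    if match:
        yield match.start(), "H233: Python 3.x incompatible use of print"
```

A change is then verified simply by running the test suite, since the
docstring examples double as the rule's tests.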

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Yuriy Taraday
Hello.


On Tue, Feb 4, 2014 at 5:38 PM, victor stinner
victor.stin...@enovance.comwrote:

 I would like to replace eventlet with asyncio in OpenStack for the
 asynchronous programming. The new asyncio module has a better design and is
 less magical. It is now part of python 3.4 arguably becoming the de-facto
 standard for asynchronous programming in Python world.


I think that before making this big move to yet another asynchronous
framework we should ask the main question: do we need it? Why do we
actually need an async framework inside our code?
There is most likely some historical reason why (almost) every OpenStack
project runs each of its processes with the eventlet hub, but I think we
should reconsider this now that it's clear we can't go forward with
eventlet (mostly because of py3k) and we're going to put a considerable
amount of resources into switching to another async framework.

Let's take Nova for example.

There are two kinds of processes there: nova-api and others.

- nova-api process forks to a number of workers listening on one socket and
running a single greenthread for each incoming request;
- other services (workers) constantly poll some queue and spawn a
greenthread for each incoming request.

Both kinds do basically the same job: receive a request, run a handler in a
greenthread. Sounds very much like a job for some application server that
does just that and does it well.
If we remove all dependencies on eventlet or any other async framework, we
would not only be able to write Python code without needing to keep in mind
that we're running in some reactor (which is why eventlet was chosen over
Twisted, IIRC), but we could also forget about all these frameworks
altogether.

I suggest approach like this:
- for API services use dead-simple threaded WSGI server (we have one in the
stdlib by the way - in wsgiref);
- for workers use simple threading-based oslo.messaging loop (it's on its
way).
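A minimal sketch of the first point, using only the stdlib (Python 3 names;
the trivial app and loopback address are made up for the example):

```python
import threading
import urllib.request
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, make_server

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """wsgiref's server, upgraded to one OS thread per request."""
    daemon_threads = True

def app(environ, start_response):
    # Stand-in for a real API service's WSGI application.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

# Port 0 lets the OS pick a free port; serve in a background thread.
httpd = make_server('127.0.0.1', 0, app, server_class=ThreadingWSGIServer)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

body = urllib.request.urlopen('http://127.0.0.1:%d/' % port).read()
httpd.shutdown()
```

Since the app is plain WSGI, the very same callable can later be handed to
Apache, Gunicorn or uWSGI unchanged.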

Of course, it won't be production-ready. A dumb threaded approach won't
scale, but we don't have to write our own scaling here. There are other
tools around to do this: Apache httpd, Gunicorn, uWSGI, etc. And they will
work better in a production environment than any code we write, because
they have been proven over time and at huge scales.

So once we want to go to production, we can deploy things this way, for
example:
- API services can be deployed within the Apache server or any other HTTP
server with a WSGI backend (Keystone can already be deployed within Apache);
- workers can be deployed in any non-HTTP application server; uWSGI is a
great example of one that can work in this mode.

With this approach we can leave the burden of process management, load
balancing, etc. to the services that are really good at it.

What do you think about this?

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Yuriy Taraday
On Thu, Feb 6, 2014 at 10:34 PM, Joshua Harlow harlo...@yahoo-inc.comwrote:

  Its a good question, I see openstack as mostly like the following 2
 groups of applications.

  Group 1:

  API entrypoints using [apache/nginx]+wsgi (nova-api, glance-api…)

  In this group we can just let the underlying framework/app deal with the
 scaling and just use native wsgi as it was intended. Scale more
 [apache/nginx] if u need more requests per second. For any kind of long
 term work these apps should be dropping all work to be done on a MQ and
 letting someone pick that work up to be finished in some future time.


They should, and from what I see, they do. API services either hand some
work to workers or do some DB work, nothing more.


 Group 2:

  Workers that pick things up off MQ. In this area we are allowed to be a
 little more different and change as we want, but it seems like the simple
 approach we have been doing is the daemon model (forking N child worker
 processes). We've also added eventlet in these children (so it becomes more
 like NxM where M is the number of greenthreads). For the usages where
 workers are used has it been beneficial to add those M greenthreads? If we
 just scaled out more N (processes) how bad would it be? (I don't have the
 answers here actually, but it does make you wonder why we couldn't just
 eliminate eventlet/asyncio altogether and just use more N processes).


If you really want greenthreads within your worker processes, you can use a
green-capable server for that. For example, Gunicorn can work with
eventlet, and uWSGI has its uGreen. By the way, you don't have to import
eventlet every time you need to spawn a thread or sleep a bit - you can
just monkey-patch the world (like almost everybody using eventlet in
OpenStack does) if and when you actually need it.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-06 Thread Yuriy Taraday
Hello, Kevin.

On Fri, Feb 7, 2014 at 12:32 AM, Kevin Conway kevinjacobcon...@gmail.comwrote:

 There's an incredibly valid reason why we use green thread abstractions
 like eventlet and gevent in Python. The CPython implementation is
 inherently single threaded so we need some other form of concurrency to get
 the most effective use out of our code. You can import threading all you
 want but it won't work the way you expect it to. If you are considering
 doing anything threading related in Python then
 http://www.youtube.com/watch?v=Obt-vMVdM8s is absolutely required
 watching.


I suggest using the threading module and letting it use either eventlet's
greenthreads (after monkey-patching) or built-in (OS) threads, depending on
the deployment scenario.

Green threads give us a powerful way to manage concurrency where it counts:
 I/O.


And that's exactly where the GIL is released and other native threads get
executed, too. So green threads provide benefits not by overcoming the GIL
but through smart handling of network connections.


 Everything in openstack is waiting on something else in openstack. That is
 our natural state of being. If your plan for increasing the number of
 concurrent requests is fork more processes then you're in for a rude
 awakening when your hosts start kernel panicking from a lack of memory.


There are threaded WSGI servers, and there are even some green-threaded
ones. We shouldn't burden ourselves with managing those processes, threads
and greenthreads.


 With green threads, on the other hand, we maintain the use of one process,
 one thread but are able to manage multiple, concurrent network operations.


But we still get one thread of execution at a time, just as with native
threads (because of the GIL).

In the case of API nodes: yes, they should (at most) do some db work and
 drop a message on the queue. That means they almost exclusively deal with
 I/O. Expecting your wsgi server to scale that up for you is wrong and, in
 fact, the reason we have eventlet in the first place.


But I'm sure it's not using eventlet's full potential. In fact, I'm sure it
isn't, since DB calls (they are the most frequent ones in the API, aren't
they?) block anyway, and eventlet or any other coroutine-based framework
can't do much about that, while an application server can spawn more
processes and/or threads to handle the load.

I would like to refer to Adam Young here:
http://adam.younglogic.com/2012/03/keystone-should-move-to-apache-httpd/ -
as he provides more points in favor of an external WSGI server (native
calls, IPv6, extensibility, stability and security).

Please take a look at this well-known benchmark:
http://nichol.as/benchmark-of-python-web-servers, where mod_wsgi performs
better than eventlet in the simple case, and eventlet is absent from the
second case because it lacks HTTP/1.1 support.

Of course, it's a matter for benchmarking. My point is that we can develop
our services with a simple threaded server, and as long as they work
correctly we can always bring in greenthreads later by monkey-patching, if
and only if they prove themselves better than the other options on the
market. Our codebase should not depend on eventlet's or anyone else's WSGI
server or reactor loop.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [WSME] Can't install WSME 0.6 due to ipaddr library

2014-02-07 Thread Yuriy Taraday
According to ipaddr's homepage (https://code.google.com/p/ipaddr-py/),
they've recently moved releases to Google Drive, and if you try to download
them with your browser, you get:
Sorry, you can't view or download this file at this time.
 Too many users have viewed or downloaded this file recently. Please try
 accessing the file again later. If the file you are trying to access is
 particularly large or is shared with many people, it may take up to 24
 hours to be able to view or download the file. If you still can't access a
 file after 24 hours, contact your domain administrator.


Looks like the problem should be sorted out with ipaddr's maintainer.


On Fri, Feb 7, 2014 at 5:25 PM, Ilya Shakhat ishak...@mirantis.com wrote:

 Hi Doug,

 I'm trying to install WSME 0.6, but today it fails due to inability to
 install ipaddr dependency.

 Pip output:
 Downloading/unpacking ipaddr (from WSME)
   You are installing a potentially insecure and unverifiable file. Future
 versions of pip will default to disallowing insecure files.
   HTTP error 403 while getting
 https://googledrive.com/host/0Bwh63zyus-UlZ1dxQ08zczVRbXc/ipaddr-2.1.11.tar.gz(from
 http://code.google.com/p/ipaddr-py/)
   Could not install requirement ipaddr (from WSME) because of error HTTP
 Error 403: Forbidden

 ipaddr is distributed via Google Drive and it appears that the quota on
 file downloading is reached: Sorry, you can't view or download this file
 at this time. Too many users have viewed or downloaded this file
 recently message is shown if url is opened in browser.

 The dependency was introduced by commit
 https://github.com/stackforge/wsme/commit/f191f32a722ef0c2eaad71dd33da4e7787ac2424ipaddr
  is used for IP validation purposes.

 Can ipaddr be replaced by some other library? I suspect the validation
 code should already exist at least in Neutron.

 Thanks,
 Ilya







-- 

Kind regards, Yuriy.


Re: [openstack-dev] [WSME] Can't install WSME 0.6 due to ipaddr library

2014-02-07 Thread Yuriy Taraday
The simplest way to do so is to add caching to pip and put the file in the
appropriate place in the cache.
You can add this to /root/.pip/pip.conf:

[global]
download_cache = /var/cache/pip

Then put the file at
/var/cache/pip/https%3A%2F%2Fgoogledrive.com%2Fhost%2F0Bwh63zyus-UlZ1dxQ08zczVRbXc%2Fipaddr-2.1.11.tar.gz
and add a content-type file:
echo -n application/x-gzip > \
/var/cache/pip/https%3A%2F%2Fgoogledrive.com%2Fhost%2F0Bwh63zyus-UlZ1dxQ08zczVRbXc%2Fipaddr-2.1.11.tar.gz.content-type

Pip should use this file on the next run.

Let me know if this works for you.
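For the curious, the cache entry's file name is just the full download URL
percent-encoded with nothing left unescaped, so it can be computed (Python 3
shown; that old pip quoted exactly like urllib.parse.quote is an assumption
here, but the result matches the path above):

```python
from urllib.parse import quote

url = ('https://googledrive.com/host/'
       '0Bwh63zyus-UlZ1dxQ08zczVRbXc/ipaddr-2.1.11.tar.gz')

# Quote the URL with no safe characters, so ':' and '/' are escaped too.
cache_name = quote(url, safe='')
print(cache_name)
```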


On Fri, Feb 7, 2014 at 8:31 PM, Matt Wagner matt.wag...@redhat.com wrote:

 On Fri Feb  7 08:25:24 2014, Ilya Shakhat wrote:
  Hi Doug,
 
  I'm trying to install WSME 0.6, but today it fails due to inability to
  install ipaddr dependency.
 
  Pip output:
  Downloading/unpacking ipaddr (from WSME)
You are installing a potentially insecure and unverifiable file.
  Future versions of pip will default to disallowing insecure files.
HTTP error 403 while getting
 
 https://googledrive.com/host/0Bwh63zyus-UlZ1dxQ08zczVRbXc/ipaddr-2.1.11.tar.gz
  (from http://code.google.com/p/ipaddr-py/)
Could not install requirement ipaddr (from WSME) because of error
  HTTP Error 403: Forbidden
 
  ipaddr is distributed via Google Drive and it appears that the quota
  on file downloading is reached: Sorry, you can't view or download
  this file at this time. Too many users have viewed or downloaded this
  file recently message is shown if url is opened in browser.
 
  The dependency was introduced by
  commit
 https://github.com/stackforge/wsme/commit/f191f32a722ef0c2eaad71dd33da4e7787ac2424
  ipaddr is used for IP validation purposes.
 
  Can ipaddr be replaced by some other library? I suspect the validation
  code should already exist at least in Neutron.

 Hi Ilya,

 I'm tripping over the very same issue right now.

 I was able to obtain the file by copying it to my own Google Drive and
 then downloading it, as a short-term workaround. But when I manually
 install it, the venv installation still tries to download the Google
 Drive version and fails. I'm still pretty new to pip; is there a way to
 force it to use a different location? Is there any value in me making
 the version I downloaded available to people?

 --
 Matt Wagner
 Software Engineer, Red Hat






-- 

Kind regards, Yuriy.


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-07 Thread Yuriy Taraday
Hello, Vish!

I hope you can provide some historical data.

On Fri, Feb 7, 2014 at 9:37 PM, Vishvananda Ishaya vishvana...@gmail.comwrote:

 To be clear, since many people weren’t around in ye olde days, nova
 started using tornado. We exchanged tornado for twisted, and finally moved
 to eventlet. People have suggested gevent and threads in the past, and now
 asyncio. There are advantages to all of these other solutions, but a change
 at this point is going to be a huge pain, even the abstracting one you
 mention above.


Can you remember what the pros and cons for threads were at that time? Did
anyone consider using an external HTTP server as opposed to running one
in-process?


 If we are going to invest the time in making another change, I think we
 need a REALLY good reason to do so. Some reasons that might be good enough
 to be worth considering:

 a) the cost of porting the library to a maintained python version (3.X at
 some point) is greater than replacing it with something else


I think eventlet hits this one.


 b) the performance of the other option is an order of magnitude better.
 I’m really talking 10X here.


Will you consider other technological benefits? For example, as happened
with Keystone and Apache HTTPD (IPv6, HTTP/1.1, Kerberos).

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [WSME] Can't install WSME 0.6 due to ipaddr library

2014-02-07 Thread Yuriy Taraday
According to
https://groups.google.com/d/msg/ipaddr-py-dev/T8jV4csZUE4/cOjEdimzRD4J ,
version 2.1.11 has just been uploaded to PyPI, so things should get back to
normal.


On Fri, Feb 7, 2014 at 9:36 PM, Yuriy Taraday yorik@gmail.com wrote:

 The simplest way to do so is to add caching to pip and put the file to
 appropriate place in the cache.
 You can add this to /root/.pip/pip.conf:

 [global]
 download_cache = /var/cache/pip

 And then put the file to
 /var/cache/pip/https%3A%2F%2Fgoogledrive.com%2Fhost%2F0Bwh63zyus-UlZ1dxQ08zczVRbXc%2Fipaddr-2.1.11.tar.gz
 Then add content-type file:
 echo -n application/x-gzip > \
 /var/cache/pip/https%3A%2F%2Fgoogledrive.com%2Fhost%2F0Bwh63zyus-UlZ1dxQ08zczVRbXc%2Fipaddr-2.1.11.tar.gz.content-type

 Pip should use this file on the next run.

 Let me know if this works for you.


 On Fri, Feb 7, 2014 at 8:31 PM, Matt Wagner matt.wag...@redhat.comwrote:

 On Fri Feb  7 08:25:24 2014, Ilya Shakhat wrote:
  Hi Doug,
 
  I'm trying to install WSME 0.6, but today it fails due to inability to
  install ipaddr dependency.
 
  Pip output:
  Downloading/unpacking ipaddr (from WSME)
You are installing a potentially insecure and unverifiable file.
  Future versions of pip will default to disallowing insecure files.
HTTP error 403 while getting
 
 https://googledrive.com/host/0Bwh63zyus-UlZ1dxQ08zczVRbXc/ipaddr-2.1.11.tar.gz
  (from http://code.google.com/p/ipaddr-py/)
Could not install requirement ipaddr (from WSME) because of error
  HTTP Error 403: Forbidden
 
  ipaddr is distributed via Google Drive and it appears that the quota
  on file downloading is reached: Sorry, you can't view or download
  this file at this time. Too many users have viewed or downloaded this
  file recently message is shown if url is opened in browser.
 
  The dependency was introduced by
  commit
 https://github.com/stackforge/wsme/commit/f191f32a722ef0c2eaad71dd33da4e7787ac2424
  ipaddr is used for IP validation purposes.
 
  Can ipaddr be replaced by some other library? I suspect the validation
  code should already exist at least in Neutron.

 Hi Ilya,

 I'm tripping over the very same issue right now.

 I was able to obtain the file by copying it to my own Google Drive and
 then downloading it, as a short-term workaround. But when I manually
 install it, the venv installation still tries to download the Google
 Drive version and fails. I'm still pretty new to pip; is there a way to
 force it to use a different location? Is there any value in me making
 the version I downloaded available to people?

 --
 Matt Wagner
 Software Engineer, Red Hat






 --

 Kind regards, Yuriy.




-- 

Kind regards, Yuriy.


Re: [openstack-dev] time.sleep is affected by eventlet.monkey_patch()

2014-03-06 Thread Yuriy Taraday
Hello.


On Fri, Mar 7, 2014 at 10:34 AM, 黎林果 lilinguo8...@gmail.com wrote:

 2014-03-07 *11:55:49*  the sleep time = past time + 30


With that, eventlet doesn't break its promise of waking your greenthread
after at least 30 seconds. Have you tried doing the same test, but with the
clock moving forward instead of backward?

All in all, it sounds like an eventlet bug. I'm not sure how it can be
dealt with, though.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] time.sleep is affected by eventlet.monkey_patch()

2014-03-07 Thread Yuriy Taraday
On Fri, Mar 7, 2014 at 11:20 AM, Yuriy Taraday yorik@gmail.com wrote:

 All in all it sounds like an eventlet bug. I'm not sure how it can be
 dealt with though.


Digging into it, I found out that eventlet by default uses time.time(),
which is not monotonic. There's no clean way to replace it, but you can
work around this:
1. Get a monotonic clock function from here:
http://stackoverflow.com/a/1205762/238308 (note that for FreeBSD or MacOS
you'll have to use a different constant).
2. Make eventlet's hub use it:
eventlet.hubs._threadlocal.hub =
eventlet.hubs.get_default_hub().Hub(monotonic_time)
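The linked answer builds monotonic_time() from clock_gettime() via ctypes;
on Python 3.3+ the stdlib already ships an equivalent clock, so a minimal
sketch of the workaround becomes:

```python
import time

def monotonic_time():
    # Stands in for the ctypes-based clock from the linked answer;
    # time.monotonic() is immune to wall-clock jumps (Python 3.3+).
    return time.monotonic()

# Hub replacement as in step 2 (requires eventlet, shown for context):
#   import eventlet.hubs
#   eventlet.hubs._threadlocal.hub = \
#       eventlet.hubs.get_default_hub().Hub(monotonic_time)

a = monotonic_time()
time.sleep(0.01)
b = monotonic_time()  # always >= a, even if the system clock is reset
```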

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Yuriy Taraday
Hello.

On Wed, Mar 5, 2014 at 6:42 PM, Miguel Angel Ajo majop...@redhat.comwrote:

 2) What alternatives can we think about to improve this situation.

0) already being done: coalescing system calls. But I'm unsure that's
 enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60
 ~=3 minutes overhead on a 10min operation).

a) Rewriting rules into sudo (to the extent that it's possible), and
 live with that.
b) How secure is neutron about command injection to that point? How
 much is user input filtered on the API calls?
c) Even if b is ok , I suppose that if the DB gets compromised, that
 could lead to command injection.

d) Re-writing rootwrap into C (it's 600 python LOCs now).

   e) Doing the command filtering at neutron-side, as a library and live
 with sudo with simple filtering. (we kill the python/rootwrap startup
 overhead).


Another option would be to allow rootwrap to run in daemon mode and provide
an RPC interface. This way Neutron can spawn rootwrap (with its CPython
startup overhead) once and later send it new commands to run over a UNIX
socket.
This way we won't need to learn a new language (C/C++) or adopt a new
toolchain (RPython, Cython, whatever else), and we still get a secure way
to run commands with root privileges.
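A toy sketch of the idea (all names here are illustrative, not
oslo.rootwrap's actual API; the stdlib's multiprocessing.connection gives
us an authenticated UNIX-socket channel for free):

```python
import os
import subprocess
import tempfile
import threading
import time
from multiprocessing.connection import Client, Listener

AUTHKEY = b'secret-token'  # in the real design, handed over via a pipe
SOCK = os.path.join(tempfile.mkdtemp(), 'rootwrap.sock')

def serve_one():
    # Daemon side: accept a connection, run one command per request.
    with Listener(SOCK, family='AF_UNIX', authkey=AUTHKEY) as listener:
        with listener.accept() as conn:
            cmd = conn.recv()  # a real daemon would apply filters here
            res = subprocess.run(cmd, capture_output=True, text=True)
            conn.send((res.returncode, res.stdout, res.stderr))

t = threading.Thread(target=serve_one)
t.start()
while not os.path.exists(SOCK):  # wait for the daemon to bind its socket
    time.sleep(0.01)

# Client side: what the run_as_root path would call instead of sudo.
with Client(SOCK, family='AF_UNIX', authkey=AUTHKEY) as conn:
    conn.send(['echo', 'hello'])
    returncode, out, err = conn.recv()
t.join()
```

The interpreter startup cost is paid once by the daemon; each subsequent
command costs only a socket round-trip and a fork-exec.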

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-07 Thread Yuriy Taraday
On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
stephen.g...@theguardian.comwrote:

 Hi,

 Given that Yuriy says explicitly 'unix socket', I dont think he means 'MQ'
 when he says 'RPC'.  I think he just means a daemon listening on a unix
 socket for execution requests.  This seems like a reasonably sensible idea
 to me.


Yes, you're right.


 On 07/03/14 12:52, Miguel Angel Ajo wrote:


 I thought of this option, but didn't consider it, as It's somehow
 risky to expose an RPC end executing priviledged (even filtered) commands.


The multiprocessing module has some means to do RPC securely over UNIX
sockets. It does this by passing a token along with messages. It should be
secure, because with UNIX sockets we don't need anything stronger: MITM
attacks are not possible.

If I'm not wrong, once you have credentials for messaging, you can
 send messages to any end, even filtered, I somehow see this as a higher
 risk option.


As Stephen noted, I'm not talking about using an MQ for RPC - just a local
UNIX socket with very simple RPC over it.


  And btw, if we add RPC in the middle, it's possible that all those
 system call delays increase, or don't decrease all it'll be desirable.


Every call to rootwrap would require the following.

Client side:
- new client socket;
- one message sent;
- one message received.

Server side:
- accepting new connection;
- one message received;
- one fork-exec;
- one message sent.

This looks way simpler than passing through sudo and rootwrap, which
requires three execs and a whole lot of configuration files being opened
and parsed.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo majop...@redhat.comwrote:

 I'm not familiar with unix domain sockets at low level, but , I wonder
 if authentication could be achieved just with permissions (only users in
 group neutron or group rootwrap accessing this service.


It can be enforced, but it is not needed at all (see below).


 I find it an interesting alternative, to the other proposed solutions, but
 there are some challenges associated with this solution, which could make
 it more complicated:

 1) Access control, file system permission based or token based,


If we pass the token to the calling process through a pipe bound to its
stdout, it can't be intercepted, so token-based authentication for further
requests is secure enough.

2) stdout/stderr/return encapsulation/forwarding to the caller,
if we have a simple/fast RPC mechanism we can use, it's a matter
of serializing a dictionary.


The RPC implementation in the multiprocessing module uses either xmlrpclib
or pickle-based RPC. That should be enough to pass the output of a command.
If we ever hit performance problems with passing long strings, we can even
pass an opened pipe's descriptors over the UNIX socket to let the caller
interact with the spawned process directly.


 3) client side implementation for 1 + 2.


Most of the code should be placed in oslo.rootwrap. Services using it
should replace calls to root_helper with appropriate client calls like
this:

if run_as_root:
    if CONF.use_rootwrap_daemon:
        oslo.rootwrap.client.call(cmd)

All logic around spawning the rootwrap daemon and interacting with it
should be hidden, so that changes to services will be minimal.

4) It would need to accept new domain socket connections in green threads
 to avoid spawning a new process to handle a new connection.


We can do connection pooling if we ever run into performance problems with
opening a new socket for every rootwrap call (which is unlikely).
On the daemon side I would avoid using fancy libraries (eventlet), both
because of the new fat requirement for oslo.rootwrap (it currently depends
only on six) and because it means running more possibly buggy and unsafe
code with elevated privileges.
A simple threaded daemon should be enough, given that it will handle the
needs of only one service process.


 The advantages:
* we wouldn't need to break the only-python-rule.
* we don't need to rewrite/translate rootwrap.

 The disadvantages:
   * it needs changes on the client side (neutron + other projects),


As I said, changes should be minimal.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 All,

 I was writing down a summary of all of this and decided to just do it
 on an etherpad.  Will you help me capture the big picture there?  I'd
 like to come up with some actions this week to try to address at least
 part of the problem before Icehouse releases.

 https://etherpad.openstack.org/p/neutron-agent-exec-performance


Great idea! I've added some details on my proposal there.

As for your proposed multitool, I'm very concerned about moving logic into
a bash script. I think we should stick with a Python-based agent, not a
bash-based one.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-13 Thread Yuriy Taraday
On Thu, Mar 13, 2014 at 12:18 PM, Miguel Angel Ajo majop...@redhat.comwrote:

 Yuri, could you elaborate your idea in detail? , I'm lost at some
 points with your unix domain / token authentication.

 Where does the token come from?,

 Who starts rootwrap the first time?

 If you could write a full interaction sequence, on the etherpad, from
 rootwrap daemon start ,to a simple call to system happening, I think that'd
 help my understanding.


Here it is: https://etherpad.openstack.org/p/rootwrap-agent
Please take a look.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-18 Thread Yuriy Taraday
On Mon, Mar 17, 2014 at 1:01 PM, IWAMOTO Toshihiro iwam...@valinux.co.jpwrote:

 I've added a couple of security-related comments (pickle decoding and
 token leak) on the etherpad.
 Please check.


Hello. Thanks for your input.

- We can avoid pickle by using xmlrpclib.
- The token won't leak because we have a direct pipe to the parent process.

I'm in the process of implementing it now, so thanks for the early notice.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday yorik@gmail.com wrote:

 I'm aiming at ~100 new lines of code for daemon. Of course I'll use some
 batteries included with Python stdlib but they should be safe already.
 It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

The included benchmark showed these numbers on my machine (average over 100
iterations):

Running 'ip a':
  ip a :   4.565ms
 sudo ip a :  13.744ms
   sudo rootwrap conf ip a : 102.571ms
daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
  sudo ip netns exec bench_ns ip a : 162.098ms
sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
 daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running sudo.
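For reference, the measurement loop behind numbers like these can be sketched with the stdlib alone (the actual benchmark in the review also reports min/max/deviation; the command and iteration count here are illustrative):

```python
import subprocess
import sys
import time

def benchmark(cmd, iterations=100):
    """Run cmd repeatedly and return the mean wall-clock time in ms."""
    samples = []
    for _ in range(iterations):
        start = time.time()
        subprocess.call(cmd, stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
        samples.append((time.time() - start) * 1000.0)
    return sum(samples) / len(samples)

# Example: time a trivial interpreter startup, analogous to timing 'ip a'.
print("%.3fms" % benchmark([sys.executable, "-c", "pass"], iterations=5))
```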

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 https://etherpad.openstack.org/p/neutron-agent-exec-performance


I've added info on how we can speed up work with namespaces by entering them
ourselves using setns() instead of paying the ip netns exec overhead.
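For illustration, entering a network namespace from Python can be done with a ctypes call into glibc's setns() (available since glibc 2.14). This is only a sketch: actually switching namespaces requires CAP_SYS_ADMIN and an existing namespace file under /var/run/netns.

```python
import ctypes
import os

CLONE_NEWNET = 0x40000000  # from <sched.h>

# glibc exposes setns(2); load the library once.
_libc = ctypes.CDLL("libc.so.6", use_errno=True)

def enter_netns(ns_name):
    """Move the calling process into the named network namespace.

    Equivalent to what 'ip netns exec <ns>' sets up, minus the
    fork/exec of a whole new 'ip' process.
    """
    fd = os.open("/var/run/netns/%s" % ns_name, os.O_RDONLY)
    try:
        if _libc.setns(fd, CLONE_NEWNET) != 0:
            errno = ctypes.get_errno()
            raise OSError(errno, os.strerror(errno))
    finally:
        os.close(fd)
```

A long-running daemon can call this before executing a command for a given namespace, saving the ~30ms that each 'ip netns exec' wrapper costs.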

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones rick.jon...@hp.com wrote:

 On 03/20/2014 05:41 AM, Yuriy Taraday wrote:

 Benchmark included showed on my machine these numbers (average over 100
  iterations):

 Running 'ip a':
ip a :   4.565ms
   sudo ip a :  13.744ms
 sudo rootwrap conf ip a : 102.571ms
  daemon.run('ip a') :   8.973ms
 Running 'ip netns exec bench_ns ip a':
sudo ip netns exec bench_ns ip a : 162.098ms
  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
   daemon.run('ip netns exec bench_ns ip a') : 129.876ms

 So it looks like running daemon is actually faster than running sudo.


 Interesting result.  Which versions of sudo and ip and with how many
 interfaces on the system?


Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5


 For consistency's sake (however foolish it may be) and purposes of others
 being able to reproduce results and all that, stating the number of
 interfaces on the system and versions and such would be a Good Thing.


Ok, I'll add them to benchmark output.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo majop...@redhat.com wrote:


Wow Yuriy, amazing and fast :-), benchmarks included ;-)

The daemon solution only adds 4.5ms, good work. I'll add some comments
 in a while.

Recently I talked with another engineer in Red Hat (working
 in ovirt/vdsm), and they have something like this daemon, and they
 are using BaseManager too.

In our last conversation he told me that the BaseManager has
 a couple of bugs & race conditions that won't be fixed for python2.x,
 I'm waiting for details on those bugs, I'll post them to the thread
 as soon as I have the details.


Looking at the logs of managers.py and connection.py, I don't see any
significant changes landed after 2.7.6 was released (Nov 10). So it looks
like those bugs should be fixed in 2.7.

    If this is coupled to neutron in a way that it can be accepted for
 Icehouse (we're killing a performance bug), or at least in a way that it
 can be backported, you'd be covering both the short & long term needs.


As I said at the meeting, I plan to propose a change request to Neutron
with some integration with this patch.
I'm also going to engage people involved in rootwrap about my change
request.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 8:23 PM, Rick Jones rick.jon...@hp.com wrote:

 On 03/20/2014 09:07 AM, Yuriy Taraday wrote:

 On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones rick.jon...@hp.com wrote:
 Interesting result.  Which versions of sudo and ip and with how many
 interfaces on the system?


 Here are the numbers:

 % sudo -V
 Sudo version 1.8.6p7
 Sudoers policy plugin version 1.8.6p7
 Sudoers file grammar version 42
 Sudoers I/O plugin version 1.8.6p7
 % ip -V
 ip utility, iproute2-ss130221
 % ip a | grep '^[^ ]' | wc -l
 5

 For consistency's sake (however foolish it may be) and purposes of
 others being able to reproduce results and all that, stating the
 number of interfaces on the system and versions and such would be a
 Good Thing.


 Ok, I'll add them to benchmark output.


 Since there are only five interfaces on the system, it likely doesn't make
 much of a difference in your specific benchmark but the top-of-trunk
 version of sudo has the fix/enhancement to allow one to tell it via
 sudo.conf to not grab the list of interfaces on the system.

 Might be worthwhile though to take the interface count out to 2000 or more
 in the name of doing things at scale.  Namespace count as well.


Given that this benchmark was created to show that my changes are worth
doing, and it already shows that my approach is almost 2x faster than sudo,
slowing sudo down further would only widen the difference. I don't think we
should add this to the benchmark itself.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Yuriy Taraday
On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez thie...@openstack.org wrote:

 Yuriy Taraday wrote:
  Benchmark included showed on my machine these numbers (average over 100
  iterations):
 
  Running 'ip a':
ip a :   4.565ms
   sudo ip a :  13.744ms
 sudo rootwrap conf ip a : 102.571ms
  daemon.run('ip a') :   8.973ms
  Running 'ip netns exec bench_ns ip a':
sudo ip netns exec bench_ns ip a : 162.098ms
  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
   daemon.run('ip netns exec bench_ns ip a') : 129.876ms
 
  So it looks like running daemon is actually faster than running sudo.

 That's pretty good! However I fear that the extremely simplistic filter
 rule file you fed on the benchmark is affecting numbers. Could you post
 results from a realistic setup (like same command, but with all the
 filter files normally found on a devstack host ?)


I don't have a devstack host at hand, but I gathered all filters from Nova,
Cinder and Neutron and got this:
method  :min   avg   max   dev
   ip a :   3.741ms   4.443ms   7.356ms 500.660us
  sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
 daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

Then I switched back to one file and got:
method  :min   avg   max   dev
   ip a :   4.176ms   4.976ms  22.910ms   1.821ms
  sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
 daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

There is a difference, but it looks like it comes from parsing the config
files, not from applying the filters themselves.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Yuriy Taraday
On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Don't discard the first number so quickly.

 For example, say we use a timeout mechanism for the daemon running
 inside namespaces to avoid using too much memory with a daemon in
 every namespace.  That means we'll pay the startup cost repeatedly but
 in a way that amortizes it down.

 Even if it is really a one time cost, then if you collect enough
 samples then the outlier won't have much affect on the mean anyway.


It actually affects all numbers but the mean (e.g. the deviation becomes huge).


 I'd say keep it in there.

 Carl

 On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo majop...@redhat.com
 wrote:
 
 
  It's the first call starting the daemon / loading config files, etc?,
 
  May be that first sample should be discarded from the mean for all
 processes
  (it's an outlier value).


I thought about excluding the max from the deviation calculation and/or
showing the second-largest value. But I don't think it matters much, and
there aren't many people here analyzing the deviation. It's pretty clear
what happens on the longest run in this case, so I think we can leave it as
is. It's the mean value that matters most here.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Decorator behavior

2014-04-01 Thread Yuriy Taraday
Hello.


On Mon, Mar 31, 2014 at 9:32 PM, Dan Smith d...@danplanet.com wrote:

  
  (self, context, [], {'migration': migration, 'image': image,
  'instance': instance, 'reservations': reservations})
 
  while when running a test case, they see these arguments:
 
  (self, context, [instance, image, reservations, migration,
  instance_type], {})

 All RPC-called methods get called with all of their arguments as keyword
 arguments. I think this explains the runtime behavior you're seeing.
 Tests tend to differ in this regard because test writers are human and
 call the methods in the way they normally expect, passing positional
 arguments when appropriate.


It might be wise to add something like https://pypi.python.org/pypi/kwonly
to all methods that are used in RPC and modify the tests appropriately to
avoid such confusion in the future.
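As a sketch of the idea (the kwonly package provides a more complete decorator; this minimal version is only illustrative):

```python
import functools

def keyword_only(func):
    """Force RPC-style invocation: everything after context must be
    passed as keyword arguments, in tests as well as at runtime."""
    @functools.wraps(func)
    def wrapper(self, context, **kwargs):
        return func(self, context, **kwargs)
    return wrapper

# Hypothetical manager class for illustration.
class ComputeManager(object):
    @keyword_only
    def finish_resize(self, context, instance=None, image=None):
        return instance, image
```

A positional call like manager.finish_resize(ctx, instance) now raises TypeError instead of silently binding arguments in a different order than the RPC layer would.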

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-24 Thread Yuriy Taraday
+1 on the topic

How about we catch them in hacking so that they won't ever come back?


On Thu, Oct 24, 2013 at 4:53 PM, Davanum Srinivas dava...@gmail.com wrote:

 +1 to remove them.

 -- dims

 On Thu, Oct 24, 2013 at 8:44 AM, Monty Taylor mord...@inaugust.com
 wrote:
 
 
  On 10/24/2013 08:38 AM, Joe Gordon wrote:
  Since the beginning of OpenStack we have had vim modelines all over the
  codebase, but after seeing this
  patch https://review.openstack.org/#/c/50891/ I took a further look into
 vim
  modelines and think we should remove them. Before going any further, I
  should point out these lines don't bother me too much but I figured if
  we could get consensus, then we could shrink our codebase by a little
 bit.
 
  Sidenote: This discussion is being moved to the mailing list because it
  'would be better to have a mailing list thread about this rather than
  bits and pieces of discussion in gerrit' as this change requires
  multiple patches.  https://review.openstack.org/#/c/51295/.
 
 
  Why remove them?
 
  * Modelines aren't supported by default in debian or ubuntu due to
  security reasons: https://wiki.python.org/moin/Vim
  * Having modelines for vim means if someone wants we should support
  modelines for emacs
  (
 http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables
 )
  etc. as well.  And having a bunch of headers for different editors in
  each file seems like extra overhead.
  * There are other ways of making sure tabstop is set correctly for
  python files, see https://wiki.python.org/moin/Vim.  I am a vim user myself and have
  never used modelines.
  * We have vim modelines in only 828 out of 1213 python files in nova
  (68%), so if anyone is using modelines today, then it only works 68% of
  the time in nova
  * Why have the same config 828 times for one repo alone?  This violates
  the DRY principle (Don't Repeat Yourself).
 
 
  Related Patches:
  https://review.openstack.org/#/c/51295/
 
 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z
 
  I agree with everything - both not caring about this topic really, and
  that we should just kill them and be done with it. Luckily, this is a
  suuper easy global search and replace.
 
  Also, since we gate on pep8, if your editor is configured incorrectly,
  you'll figure it out soon enough.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Weekly IRC team meeting

2013-10-24 Thread Yuriy Taraday
+1


On Thu, Oct 24, 2013 at 5:43 PM, Nikolay Starodubtsev 
nstarodubt...@mirantis.com wrote:

 +1, but we need to wait for Dina. She has more problems with schedule than
 me

  Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.

 Skype: dark_harlequine1


 On Thu, Oct 24, 2013 at 4:11 PM, Swann Croiset swann.croi...@bull.net wrote:

  +1

 Le 24/10/2013 09:45, Sylvain Bauza a écrit :

 Hi all,

 Climate is growing and time is coming for having a weekly meeting in
 between all of us.
 There is a huge number of reviews in progress, and at least the first
 agenda will be triaging those, making sure they are either coming to trunk
 as soon as possible, or splitted into smaller chunks of code.

 The Icehouse summit is also coming, and I would like to take opportunity
 to discuss about any topics we could raise during the Summit.

 Is Mondays 10:00am UTC [1] a convenient time for you ?

 http://www.timeanddate.com/worldclock/meetingdetails.html?year=2013&month=10&day=28&hour=10&min=0&sec=0&p1=195&p2=166

 -Sylvain



 --
 Swann Croiset



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
Looking at the implementations in Keystone and Nova, I found only one use
for is_admin, but it is essential.

Whenever you need to run a piece of code with admin privileges, you can
create a new context with is_admin=True, keeping all other parameters as
is, run the code requiring admin access, and then revert the context back.
My first thought was: Hey, why don't they just add an 'admin' role then?
But what if in the current deployment the admin role is named something
like 'TheVerySpecialAdmin'? What if the user has tweaked policy.json to
better suit their needs?

So my current understanding is (and I suggest we follow this logic):
- the 'admin' role in context.roles can vary; it's up to the cloud admin to
set the necessary value in policy.json;
- the 'is_admin' flag is used to elevate privileges from code, and its name
is fixed;
- a policy check should assume that the user is an admin if either the
special role is present or the is_admin flag is set.
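The elevation pattern described above looks roughly like this (a simplified sketch of the Nova-style context, not the actual nova.context code):

```python
import copy

class RequestContext(object):
    """Simplified request context carrying user, roles and is_admin."""
    def __init__(self, user, roles=None, is_admin=False):
        self.user = user
        self.roles = roles or []
        self.is_admin = is_admin

    def elevated(self):
        # Privilege elevation from code: copy the context as-is and
        # flip only the is_admin flag; roles stay untouched, so the
        # deployer's role naming never matters here.
        context = copy.copy(self)
        context.is_admin = True
        return context
```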

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
On Wed, Nov 20, 2013 at 3:21 PM, Sylvain Bauza sylvain.ba...@bull.net wrote:

 Yes indeed, that's something coming into my mind. Looking at Nova, I found
 a context_is_admin policy in policy.json allowing you to say which role
 is admin or not [1] and is matched in policy.py [2], which itself is called
 when creating a context [3].

 I'm OK copying that, any objections to it ?


I would suggest not copying this stuff from Nova. There's a lot of legacy
there, and it's based on an old openstack.common.policy version. We should
rely on openstack.common.policy alone; there's no need to add more checks here.


 [1] https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L2


This rule is here just to support


 [2] https://github.com/openstack/nova/blob/master/nova/policy.py#L116


this, which is used only


 [3] https://github.com/openstack/nova/blob/master/nova/context.py#L102


here. This is not what I would call a consistent usage of policies.


If we need to check access rights to some method, we should use an
appropriate decorator or helper method and let it check the appropriate
policy rule that would contain rule:admin_required, just like in Keystone:
https://github.com/openstack/keystone/blob/master/etc/policy.json.

context.is_admin should not be checked directly from code, only through
policy rules. It should be set only if we need to elevate privileges from
code. That should be the meaning of it.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
Hello, Dolph.

On Wed, Nov 20, 2013 at 8:42 PM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Wed, Nov 20, 2013 at 10:24 AM, Yuriy Taraday yorik@gmail.com wrote:


 context.is_admin should not be checked directly from code, only through
 policy rules. It should be set only if we need to elevate privileges from
 code. That should be the meaning of it.


 is_admin is a short sighted and not at all granular -- it needs to die, so
 avoid imitating it.


I suggest keeping it in case we need to elevate privileges from code. In
this case we can't rely on roles, so just a single flag should work fine.
As I said before, we should avoid setting or reading is_admin directly from
code. It should be set only in context.elevated and read only by the
admin_required policy rule.

Does this sound reasonable?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-20 Thread Yuriy Taraday
On Wed, Nov 20, 2013 at 9:57 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 On Wed, Nov 20, 2013 at 10:52 AM, Yuriy Taraday yorik@gmail.com wrote:

  On Wed, Nov 20, 2013 at 8:42 PM, Dolph Mathews 
 dolph.math...@gmail.comwrote:

 is_admin is a short sighted and not at all granular -- it needs to die,
 so avoid imitating it.


  I suggest keeping it in case we need to elevate privileges from code.


 Can you expand on this point? It sounds like you want to ignore the
 deployer-specified authorization configuration...


No, we're not ignoring it. In Keystone we have two options to become an
admin: either have an 'admin'-like role (set in policy.json by the
deployer) or have 'is_admin' set (the only way in Keystone is to pass the
configured admin_token). We don't have the bootstrap problem in any other
services, so we don't need any admin_token. But we might need to run code
that requires admin privileges for users that don't have them. Other
projects use get_admin_context() or something like that for this.
I suggest we keep the option to have such 'in-code sudo' using is_admin,
which will be mentioned in policy.json, but limit is_admin usage to just
that.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] How we agree to determine that an user has admin rights ?

2013-11-21 Thread Yuriy Taraday
On Thu, Nov 21, 2013 at 12:37 PM, Sylvain Bauza sylvain.ba...@bull.net wrote:

  Hi Yuriy, Dolph et al.

 I'm implementing a climate.policy.check_is_admin(ctx) which will look at
 policy.json entry 'context_is_admin' for knowing which roles do have
 elevated rights for Climate.

 This check must be called when creating a context for knowing if we can
 allow extra rights. The is_admin flag is pretty handsome because it can be
 triggered upon that check.

 If we say that one is bad, how should we manage that ?

 -Sylvain


There should be no need for is_admin or a special policy rule like
context_is_admin.
Every action that might require granular access control (for controllers
that should be every action, I guess) should call enforce() from
openstack.common.policy to check the appropriate rule in policy.json.
Rules for actions that require the user to be admin should contain a
reference to some basic rule like admin_required, as in Keystone (see
https://github.com/openstack/keystone/blob/master/etc/policy.json).

We should not check from code whether the user is an admin. We should
always ask openstack.common.policy whether the user has access to the
action.
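To illustrate the shape of this (a toy stand-in, not the real openstack.common.policy API, which parses rules loaded from policy.json rather than using lambdas):

```python
class PolicyNotAuthorized(Exception):
    pass

# Toy rule table; a real deployment loads these from policy.json and
# the rule bodies are parsed check expressions.
RULES = {
    "admin_required": lambda creds: (
        "admin" in creds.get("roles", ()) or creds.get("is_admin", False)),
    "lease:create": lambda creds: RULES["admin_required"](creds),
}

def enforce(rule, creds):
    """Ask the policy table instead of checking is_admin in code."""
    check = RULES.get(rule)
    if check is None or not check(creds):
        raise PolicyNotAuthorized(rule)
```

The calling code only ever names the action ("lease:create"); whether that means an 'admin' role, a differently named role, or an elevated is_admin context is entirely up to the deployer's rules.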

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] Improving oslo-incubator update.py

2013-11-26 Thread Yuriy Taraday
Hello.


On Fri, Nov 22, 2013 at 1:11 PM, Flavio Percoco fla...@redhat.com wrote:

1) Store the commit sha from which the module was copied from.


I would suggest we don't duplicate this in every project's repo. Using the
contents of the modules currently in the project's repo, it is possible to
find the corresponding commit in oslo-incubator (see [1] for a hint; feel
free to ask me about the Git details). So there's no need to store this
information in a separate file.

   2) Add an 'auto-commit' parameter to the update script that will
generate a commit message with the short log of the commits where
the modules being updated were modified. Soemthing like:

Syncing oslo-incubator modules

log.py:
commit1: short-message
commit2: short-message
commit3: short-message

lockutils:
commit4: short-message
commit5: short-message


The script can use this information as well to get the last synced commit
for each module. We just need to agree to use this format for all commit
messages to make commit detection easier.

I would drop the colon though to make it look like the usual 'git log
--oneline' output.
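The blob-to-commit lookup from [1] can be sketched with git's --find-object pickaxe (available in newer git; demonstrated here in a scratch repo, but the same lookup works against a project checkout and the oslo-incubator checkout):

```shell
# Scratch repo standing in for a project that synced log.py from
# oslo-incubator; the same lookup works against the real repos.
repo=$(mktemp -d)
git -C "$repo" init -q
echo "module body" > "$repo/log.py"
git -C "$repo" add log.py
git -C "$repo" -c user.email=you@example.com -c user.name=you \
    commit -qm "add log.py"

# Hash the synced file as a blob, then ask git which commit carries it.
blob=$(git -C "$repo" hash-object "$repo/log.py")
git -C "$repo" log --oneline --find-object="$blob"
```

Running the blob lookup against the oslo-incubator history (instead of the scratch repo) yields the last synced commit without any extra bookkeeping file.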

[1] http://stackoverflow.com/questions/223678/which-commit-has-this-blob

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [olso] [cinder] upgrade issues in lock_path in cinder after oslo utils sync (was: creating a default for oslo config variables within a project?)

2013-12-06 Thread Yuriy Taraday
Hello, Sean.

I get the issue with the upgrade path. Users don't want to update their
config unless they are forced to do so.
But introducing code that weakens security and letting it stay is an
unconditionally bad idea.
It looks like we have to weigh two evils: trouble upgrading versus lessened
security. That's obvious.

Here are my thoughts on what we can do with it:
1. I think we should definitely force the user to do the appropriate
configuration to let us use secure ways of locking.
2. We can wait one release to do so, e.g. issue a deprecation warning now
and force the user to do it the right way later.
3. If we are going to do 2, we should do it in the service that is
affected, not in the library, because the library shouldn't track releases
of the applications that use it. It should do its thing and do it right
(securely).

So I would suggest dealing with it in Cinder by importing the 'lock_path'
option after parsing the configs, issuing a deprecation warning, and
setting it to tempfile.gettempdir() if it is still None.
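A minimal sketch of that fallback (ensure_lock_path is a hypothetical helper name; in Cinder this would run right after CONF is parsed):

```python
import tempfile
import warnings

def ensure_lock_path(conf):
    """Post-parsing fallback: keep working without lock_path for one
    release, but warn loudly so deployers fix their config."""
    if getattr(conf, "lock_path", None) is None:
        warnings.warn(
            "lock_path is not set; falling back to %s. This insecure "
            "default is deprecated -- set lock_path explicitly."
            % tempfile.gettempdir(), DeprecationWarning)
        conf.lock_path = tempfile.gettempdir()
    return conf.lock_path
```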

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][ceilometer][glance][all] Loading clients from a CONF object

2014-06-15 Thread Yuriy Taraday
On Fri, Jun 13, 2014 at 3:27 AM, Jamie Lennox jamielen...@redhat.com
wrote:

   And as we're going to have to live with this for a while, I'd rather use
  the more clear version of this in keystone instead of the Heat stanzas.

 Anyone else have an opinion on this?


I like keeping the sections' names simple and clear, but it looks like you
should add some common section ([services_common]?) since 6 out of 6
options in your example will very probably be repeated for every client.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-06-16 Thread Yuriy Taraday
Hello, Shlomi.


On Tue, Mar 25, 2014 at 7:07 PM, Shlomi Sasson shlo...@mellanox.com wrote:

  I want to share with the community the following challenge:

  Currently, vendors who have their own iSCSI driver and want to add an RDMA
 transport (iSER) cannot leverage their existing plug-in, which inherits
 from iSCSI, and must modify their driver or create an additional plug-in
 driver which inherits from iSER, copying the exact same code.



 Instead I believe a simpler approach is to add a new attribute to
 ISCSIDriver to support other iSCSI transports besides TCP, which will allow
 minimal changes to support iSER.

 The existing ISERDriver code will be removed, this will eliminate
 significant code and class duplication, and will work with all the iSCSI
 vendors who supports both TCP and RDMA without the need to modify their
 plug-in drivers.


I remember Ann working on https://review.openstack.org/#/c/45393 and it has
landed since.

That change leaves ISERDriver just for backward compatibility and allows
ISCSIDriver and any of its descendants to use iscsi_helper='iseradm' to
provide iSER support.

Aren't those changes enough for this? What else is needed here?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-07-01 Thread Yuriy Taraday
Hello

On Fri, Jun 20, 2014 at 12:48 PM, Radoslav Gerganov rgerga...@vmware.com
wrote:

 Hi,

  On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton gkot...@vmware.com wrote:
   Hi,
   I have encountered a problem with string substitution with the nova
   configuration file. The motivation was to move all of the glance
 settings
   to
   their own section (https://review.openstack.org/#/c/100567/). The
   glance_api_servers had default setting that uses the current
 glance_host
   and
   the glance port. This is a problem when we move to the ‘glance’
 section.
   First and foremost I think that we need to decide on how we should
 denote
   the string substitutions for group variables and then we can dive into
   implementation details. Does anyone have any thoughts on this?
   My thinking is that when we use we should use a format of
 $group.key.
   An
   example is below.
  
 
  Do we need to set the variable off somehow to allow substitutions that
  need the literal '.' after a variable? How often is that likely to
  come up?

 I would suggest to introduce a different form of placeholder for this like:

   default=['${glance.host}:${glance.port}']

 similar to how variable substitutions are handled in Bash.  IMO, this is
 more readable and easier to parse.

 -Rado


I couldn't help but try implementing this:
https://review.openstack.org/103884

This change allows both ${glance.host} and ${.host} variants.
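The substitution logic itself is small; here is a stdlib sketch of the idea (interpolate is a hypothetical helper, not the oslo.config code, where this plugs into option resolution):

```python
import re

def interpolate(value, groups, current_group):
    """Resolve ${group.key} and ${.key} placeholders against a dict of
    config groups; ${.key} looks key up in the current group."""
    def _lookup(match):
        group = match.group("group") or current_group
        return str(groups[group][match.group("key")])
    return re.sub(r"\$\{(?P<group>\w*)\.(?P<key>\w+)\}", _lookup, value)
```

So a default like '${glance.host}:${glance.port}' resolves against the [glance] section, while '${.host}' resolves within whatever section the option itself lives in.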

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-02 Thread Yuriy Taraday
Hello.

On Fri, Jun 27, 2014 at 4:44 PM, Thierry Carrez thie...@openstack.org
wrote:

 For all those reasons, we decided at the last summit to use unique
 pre-release branches, named after the series (for example,
 proposed/juno). That branch finally becomes stable/juno at release
 time. In parallel, we abandoned the usage of release branches for
 development milestones, which are now tagged directly on the master
 development branch.


I know that this question has been raised before but I still would like to
clarify this.
Why do we need these short-lived 'proposed' branches in any form? Why can't
we just use release branches for this and treat them as stable once an
appropriate tag is added to some commit in them?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-03 Thread Yuriy Taraday
On Thu, Jul 3, 2014 at 5:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-02 22:19:29 +0400 (+0400), Yuriy Taraday wrote:
 [...]
  It looks like mirrors will have to bear having a number of dead branches
 in
  them - one for each release.

 A release manager will delete proposed/juno when stable/juno is
 branched from it, and branch deletions properly propagate to our
 official mirrors (you may have to manually remove any local tracking
 branches you've created, but that shouldn't be much of a concern).


I mean other mirrors, like the ones we have in our local net. Given the
not-so-good connection to the upstream repos (the reason we have this
mirror in the first place), I can't think of a reliable way to clean them
up.
Where can I find the scripts that propagate deletions to official mirrors?
Maybe I can get some ideas from them?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repo

2014-07-04 Thread Yuriy Taraday
Every commit landing in every repo should be synchronized to GitHub. I
filed a bug to track this issue here:
https://bugs.launchpad.net/openstack-ci/+bug/1337735


On Fri, Jul 4, 2014 at 3:30 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 git.openstack.org has an up-to-date log:
 http://git.openstack.org/cgit/openstack/neutron-specs/log/

 Unfortunately I don't know what the policy is for syncing repos with
 github.

 Salvatore


 On 4 July 2014 00:34, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

 Is this still the right repo for this:
 https://github.com/openstack/neutron-specs

 The latest commit on the master branch shows June 25th timestamp, but
 we have had a lots of patches merging after that. Where are those
 going?

 Thanks,
 ~Sumit.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 python27

2014-07-04 Thread Yuriy Taraday
On Fri, Jul 4, 2014 at 12:57 PM, Amit Das amit@cloudbyte.com wrote:

 Hi All,

 I can see a lot of cinder gerrit commits that pass through the
 gate-cinder-python26  gate-cinder-python27 successfully.

 ref - https://github.com/openstack/cinder/commits/master

 Whereas its not the case for my patch
 https://review.openstack.org/#/c/102511/.

 I updated the master  rebased that to my branch before doing a gerrit
 review.

 Am i missing any steps ?


Does 'tox -e py26' work on your local machine? It should fail just like the
one in the gate.
You should follow the instructions it provides in the log just before
'InvocationError': run tools/config/generate_sample.sh.
The issue is that you've added some options to your driver but didn't
update etc/cinder/cinder.conf.sample.
After generating the new sample you should verify its diff (git diff
etc/cinder/cinder.conf.sample) and add it to your commit.
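The gate check boils down to "regenerating the sample must produce no diff".
A self-contained sketch of that check (a throwaway git repo and an invented
option name stand in for a real cinder tree, where you would run tox and
tools/config/generate_sample.sh as described above):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you

echo "[DEFAULT]" > cinder.conf.sample
git add cinder.conf.sample
git commit -qm "initial sample"

# Simulate adding a driver option without regenerating the sample:
# regeneration changes the file, so the working tree shows a diff,
# which is exactly what makes the gate job fail.
printf "[DEFAULT]\n#new_driver_option=10\n" > cinder.conf.sample

if git diff --quiet -- cinder.conf.sample; then
    echo "sample up to date"
else
    echo "sample is stale - regenerate and commit it"
fi
```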


-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-09 Thread Yuriy Taraday
On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow harlo...@yahoo-inc.com
wrote:

 I think clints response was likely better than what I can write here, but
 I'll add-on a few things,


 How do you write such code using taskflow?
 
   @asyncio.coroutine
   def foo(self):
   result = yield from some_async_op(...)
   return do_stuff(result)

 The idea (at a very high level) is that users don't write this;

 What users do write is a workflow, maybe the following (pseudocode):

 # Define the pieces of your workflow.

 TaskA():
     def execute():
         # Do whatever some_async_op did here.

     def revert():
         # If execute had any side-effects undo them here.

 TaskFoo():
     ...

 # Compose them together

 flow = linear_flow.Flow("my-stuff").add(TaskA("my-task-a"),
                                         TaskFoo("my-foo"))


I wouldn't consider this composition very user-friendly.


 # Submit the workflow to an engine, let the engine do the work to execute
 it (and transfer any state between tasks as needed).

 The idea here is that when things like this are declaratively specified
 the only thing that matters is that the engine respects that declaration;
 not whether it uses asyncio, eventlet, pigeons, threads, remote
 workers[1]. It also adds some things that are not (imho) possible with
 co-routines (in part since they are at such a low level) like stopping the
 engine after 'my-task-a' runs and shutting off the software, upgrading it,
 restarting it and then picking back up at 'my-foo'.


It's absolutely possible with coroutines and might provide even clearer
view of what's going on. Like this:

@asyncio.coroutine
def my_workflow(ctx, ...):
    project = yield from ctx.run_task(create_project())
    # Hey, we don't want to be linear. How about parallel tasks?
    volume, network = yield from asyncio.gather(
        ctx.run_task(create_volume(project)),
        ctx.run_task(create_network(project)),
    )
    # We can put anything here - why not branch a bit?
    if create_one_vm:
        yield from ctx.run_task(create_vm(project, network))
    else:
        # Or even loops - why not?
        for i in range(network.num_ips()):
            yield from ctx.run_task(create_vm(project, network))

There's no limit to coroutine usage. The only problem is the library that
would bind everything together.
In my example run_task will have to be really smart: keeping track of all
started tasks, the results of all finished ones, and skipping all tasks that
have already been done (substituting the already generated results).
But all of this is doable. And I find this way of declaring workflows far
more understandable than whatever it would look like with Flow.add()s.
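A minimal sketch of what such a context could look like (names are invented,
modern async/await stands in for yield from, and a plain dict stands in for
the persistent result store):

```python
import asyncio

class Context:
    # Hypothetical "smart" run_task: results are recorded by task name,
    # so re-running the same workflow skips tasks that already finished
    # and substitutes their saved results.
    def __init__(self, store):
        self.store = store  # stands in for real persistent storage

    async def run_task(self, name, factory):
        if name in self.store:
            return self.store[name]  # skip: task already done
        result = await factory()
        self.store[name] = result
        return result

async def create_project():
    return "project-1"

async def create_volume(project):
    return "volume-of-" + project

async def my_workflow(ctx):
    project = await ctx.run_task("create_project", create_project)
    volume = await ctx.run_task("create_volume",
                                lambda: create_volume(project))
    return project, volume

store = {}
first = asyncio.run(my_workflow(Context(store)))
# "Restart": a fresh context over the same store re-runs nothing.
second = asyncio.run(my_workflow(Context(store)))
assert first == second == ("project-1", "volume-of-project-1")
```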

Hope that helps make it a little more understandable :)

 -Josh


PS: I've just found all your emails in this thread in my Spam folder, so it's
probable that not everybody has read them.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-10 Thread Yuriy Taraday
On Wed, Jul 9, 2014 at 7:39 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Yuriy Taraday's message of 2014-07-09 03:36:00 -0700:
  On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow harlo...@yahoo-inc.com
  wrote:
 
   I think clints response was likely better than what I can write here,
 but
   I'll add-on a few things,
  
  
   How do you write such code using taskflow?
   
 @asyncio.coroutine
 def foo(self):
 result = yield from some_async_op(...)
 return do_stuff(result)
  
   The idea (at a very high level) is that users don't write this;
  
   What users do write is a workflow, maybe the following (pseudocode):
  
   # Define the pieces of your workflow.
  
   TaskA():
 def execute():
 # Do whatever some_async_op did here.
  
 def revert():
 # If execute had any side-effects undo them here.
  
   TaskFoo():
  ...
  
   # Compose them together
  
   flow = linear_flow.Flow(my-stuff).add(TaskA(my-task-a),
   TaskFoo(my-foo))
  
 
  I wouldn't consider this composition very user-friendly.
 

 I find it extremely user friendly when I consider that it gives you
 clear lines of delineation between the way it should work and what
 to do when it breaks.


So does plain Python. But for plain Python you don't have to explicitly use
graph terminology to describe the process.


# Submit the workflow to an engine, let the engine do the work to
 execute
   it (and transfer any state between tasks as needed).
  
   The idea here is that when things like this are declaratively specified
   the only thing that matters is that the engine respects that
 declaration;
   not whether it uses asyncio, eventlet, pigeons, threads, remote
   workers[1]. It also adds some things that are not (imho) possible with
   co-routines (in part since they are at such a low level) like stopping
 the
   engine after 'my-task-a' runs and shutting off the software, upgrading
 it,
   restarting it and then picking back up at 'my-foo'.
  
 
  It's absolutely possible with coroutines and might provide even clearer
  view of what's going on. Like this:
 
  @asyncio.coroutine
  def my_workflow(ctx, ...):
  project = yield from ctx.run_task(create_project())
  # Hey, we don't want to be linear. How about parallel tasks?
  volume, network = yield from asyncio.gather(
  ctx.run_task(create_volume(project)),
  ctx.run_task(create_network(project)),
  )
  # We can put anything here - why not branch a bit?
  if create_one_vm:
  yield from ctx.run_task(create_vm(project, network))
  else:
  # Or even loops - why not?
  for i in range(network.num_ips()):
  yield from ctx.run_task(create_vm(project, network))
 

 Sorry but the code above is nothing like the code that Josh shared. When
 create_network(project) fails, how do we revert its side effects? If we
 want to resume this flow after reboot, how does that work?

 I understand that there is a desire to write everything in beautiful
 python yields, try's, finally's, and excepts. But the reality is that
 python's stack is lost the moment the process segfaults, power goes out
 on that PDU, or the admin rolls out a new kernel.

 We're not saying asyncio vs. taskflow. I've seen that mistake twice
 already in this thread. Josh and I are suggesting that if there is a
 movement to think about coroutines, there should also be some time spent
 thinking at a high level: how do we resume tasks, revert side effects,
 and control flow?

 If we embed taskflow deep in the code, we get those things, and we can
 treat tasks as coroutines and let taskflow's event loop be asyncio just
 the same. If we embed asyncio deep into the code, we don't get any of
 the high level functions and we get just as much code churn.

  There's no limit to coroutine usage. The only problem is the library that
  would bind everything together.
  In my example run_task will have to be really smart, keeping track of all
  started tasks, results of all finished ones, skipping all tasks that have
  already been done (and substituting already generated results).
  But all of this is doable. And I find this way of declaring workflows way
  more understandable than whatever would it look like with Flow.add's
 

 The way the flow is declared is important, as it leads to more isolated
 code. The single place where the flow is declared in Josh's example means
 that the flow can be imported, the state deserialized and inspected,
 and resumed by any piece of code: an API call, a daemon start up, an
 admin command, etc.

 I may be wrong, but it appears to me that the context that you built in
 your code example is hard, maybe impossible, to resume after a process
 restart unless _every_ task is entirely idempotent and thus can just be
 repeated over and over.


I must not have stressed this enough in the last paragraph. The point is to
make the run_task method very smart. It should do smth like this (yes, I'm
better 

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-11 Thread Yuriy Taraday
On Thu, Jul 10, 2014 at 11:51 PM, Outlook harlo...@outlook.com wrote:

 On Jul 10, 2014, at 3:48 AM, Yuriy Taraday yorik@gmail.com wrote:

 On Wed, Jul 9, 2014 at 7:39 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Yuriy Taraday's message of 2014-07-09 03:36:00 -0700:
  On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow harlo...@yahoo-inc.com
  wrote:
 
   I think clints response was likely better than what I can write here,
 but
   I'll add-on a few things,
  
  
   How do you write such code using taskflow?
   
 @asyncio.coroutine
 def foo(self):
 result = yield from some_async_op(...)
 return do_stuff(result)
  
   The idea (at a very high level) is that users don't write this;
  
   What users do write is a workflow, maybe the following (pseudocode):
  
   # Define the pieces of your workflow.
  
   TaskA():
 def execute():
 # Do whatever some_async_op did here.
  
 def revert():
 # If execute had any side-effects undo them here.
  
   TaskFoo():
  ...
  
   # Compose them together
  
   flow = linear_flow.Flow(my-stuff).add(TaskA(my-task-a),
   TaskFoo(my-foo))
  
 
  I wouldn't consider this composition very user-friendly.
 


 So just to make this understandable, the above is a declarative structure
 of the work to be done. I'm pretty sure it's general agreed[1] in the
 programming world that when declarative structures can be used they should
 be (imho openstack should also follow the same pattern more than it
 currently does). The above is a declaration of the work to be done and the
 ordering constraints that must be followed. Its just one of X ways to do
 this (feel free to contribute other variations of these 'patterns' @
 https://github.com/openstack/taskflow/tree/master/taskflow/patterns).

 [1] http://latentflip.com/imperative-vs-declarative/ (and many many
 others).


I totally agree that a declarative approach is better for workflow
declarations. I'm just saying that we can do it in Python with coroutines
instead. Note that a declarative approach can lead to the reinvention of an
entirely new language, and these flow.add()s can be the first step down that
road.

  I find it extremely user friendly when I consider that it gives you
 clear lines of delineation between the way it should work and what
 to do when it breaks.


 So does plain Python. But for plain Python you don't have to explicitly
 use graph terminology to describe the process.



 I'm not sure where in the above you saw graph terminology. All I see there
 is a declaration of a pattern that explicitly says run things 1 after the
 other (linearly).


As long as the workflow is linear there's no difference whether it's
declared with .add() or with yield from. I'm talking about more complex
workflows like the one I described in my example.


# Submit the workflow to an engine, let the engine do the work to
 execute
   it (and transfer any state between tasks as needed).
  
   The idea here is that when things like this are declaratively
 specified
   the only thing that matters is that the engine respects that
 declaration;
   not whether it uses asyncio, eventlet, pigeons, threads, remote
   workers[1]. It also adds some things that are not (imho) possible with
   co-routines (in part since they are at such a low level) like
 stopping the
   engine after 'my-task-a' runs and shutting off the software,
 upgrading it,
   restarting it and then picking back up at 'my-foo'.
  
 
  It's absolutely possible with coroutines and might provide even clearer
  view of what's going on. Like this:
 
  @asyncio.coroutine
  def my_workflow(ctx, ...):
  project = yield from ctx.run_task(create_project())
  # Hey, we don't want to be linear. How about parallel tasks?
  volume, network = yield from asyncio.gather(
  ctx.run_task(create_volume(project)),
  ctx.run_task(create_network(project)),
  )
  # We can put anything here - why not branch a bit?
  if create_one_vm:
  yield from ctx.run_task(create_vm(project, network))
  else:
  # Or even loops - why not?
  for i in range(network.num_ips()):
  yield from ctx.run_task(create_vm(project, network))
 


 Sorry but the code above is nothing like the code that Josh shared. When
 create_network(project) fails, how do we revert its side effects? If we
 want to resume this flow after reboot, how does that work?

 I understand that there is a desire to write everything in beautiful
 python yields, try's, finally's, and excepts. But the reality is that
 python's stack is lost the moment the process segfaults, power goes out
 on that PDU, or the admin rolls out a new kernel.

 We're not saying asyncio vs. taskflow. I've seen that mistake twice
 already in this thread. Josh and I are suggesting that if there is a
 movement to think about coroutines, there should also be some time spent
 thinking at a high level: how do we resume tasks, revert side effects,
 and control flow?

 If we

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-12 Thread Yuriy Taraday
On Fri, Jul 11, 2014 at 10:34 PM, Joshua Harlow harlo...@outlook.com
wrote:

 S, how about we can continue this in #openstack-state-management (or
 #openstack-oslo).

 Since I think we've all made the point and different viewpoints visible
 (which was the main intention).

 Overall, I'd like to see asyncio more directly connected into taskflow so
 we can have the best of both worlds.

 We just have to be careful in letting people blow their feet off, vs.
 being to safe; but that discussion I think we can have outside this thread.


That's what I was about to reply to Clint: let the user shoot their own
feet; one can always be creative in doing that anyway.

Sound good?


Sure. TBH I didn't think this thread was the right place for this discussion,
but "coroutines can't do that" kind of set me off :)

-Josh

 On Jul 11, 2014, at 9:04 AM, Clint Byrum cl...@fewbar.com wrote:

  Excerpts from Yuriy Taraday's message of 2014-07-11 03:08:14 -0700:
  On Thu, Jul 10, 2014 at 11:51 PM, Josh Harlow harlo...@outlook.com
 wrote:
  2. Introspection, I hope this one is more obvious. When the coroutine
  call-graph is the workflow there is no easy way to examine it before it
  executes (and change parts of it for example before it executes). This
 is a
  nice feature imho when it's declaratively and explicitly defined, you
 get
  the ability to do this. This part is key to handling upgrades that
  typically happen (for example the a workflow with the 5th task was
 upgraded
  to a newer version, we need to stop the service, shut it off, do the
 code
  upgrade, restart the service and change 5th task from v1 to v1.1).
 
 
  I don't really understand why would one want to examine or change
 workflow
  before running. Shouldn't workflow provide just enough info about which
  tasks should be run in what order?
  In case with coroutines when you do your upgrade and rerun workflow,
 it'll
  just skip all steps that has already been run and run your new version
 of
  5th task.
 
 
  I'm kind of with you on this one. Changing the workflow feels like self
  modifying code.
 
  3. Dataflow: tasks in taskflow can not just declare workflow
 dependencies
  but also dataflow dependencies (this is how tasks transfer things from
 one
  to another). I suppose the dataflow dependency would mirror to
 coroutine
  variables  arguments (except the variables/arguments would need to be
  persisted somewhere so that it can be passed back in on failure of the
  service running that coroutine). How is that possible without an
  abstraction over those variables/arguments (a coroutine can't store
 these
  things in local variables since those will be lost)?It would seem like
 this
  would need to recreate the persistence  storage layer[5] that taskflow
  already uses for this purpose to accomplish this.
 
 
  You don't need to persist local variables. You just need to persist
 results
  of all tasks (and you have to do it if you want to support workflow
  interruption and restart). All dataflow dependencies are declared in the
  coroutine in plain Python which is what developers are used to.
 
 
  That is actually the problem that using declarative systems avoids.
 
 
  @asyncio.coroutine
  def add_ports(ctx, server_def):
      port, volume = yield from asyncio.gather(
          ctx.run_task(create_port(server_def)),
          ctx.run_task(create_volume(server_def)))
      if server_def.wants_drbd:
          setup_drbd(volume, server_def)

      yield from ctx.run_task(boot_server(volume_az, server_def))
 
 
  Now we have a side effect which is not in a task. If booting fails, and
  we want to revert, we won't revert the drbd. This is easy to miss
  because we're just using plain old python, and heck it already even has
  a test case.
 
  I see this type of thing a lot.. we're not arguing about capabilities,
  but about psychological differences. There are pros and cons to both
  approaches.
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-21 Thread Yuriy Taraday
Hello, Kyle.

As I can see, my spec got left behind. Should I give up hope and move
it to the Kilo dir?


On Mon, Jul 14, 2014 at 3:24 PM, Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

 The oslo-rootwrap spec counterpart of this
 spec has been approved:

 https://review.openstack.org/#/c/94613/

 Cheers :-)

 - Original Message -
  Yuriy, thanks for your spec and code! I'll sync with Carl tomorrow on this
  and see how we can proceed for Juno around this.
 
 
  On Sat, Jul 12, 2014 at 10:00 AM, Carl Baldwin  c...@ecbaldwin.net 
 wrote:
 
 
 
 
  +1 This spec had already been proposed quite some time ago. I'd like to
 see
  this work get in to juno.
 
  Carl
  On Jul 12, 2014 9:53 AM, Yuriy Taraday  yorik@gmail.com  wrote:
 
 
 
  Hello, Kyle.
 
  On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery 
 mest...@noironetworks.com 
  wrote:
 
 
  Just a note that yesterday we passed SPD for Neutron. We have a
  healthy backlog of specs, and I'm working to go through this list and
  make some final approvals for Juno-3 over the next week. If you've
  submitted a spec which is in review, please hang tight while myself
  and the rest of the neutron cores review these. It's likely a good
  portion of the proposed specs may end up as deferred until K
  release, given where we're at in the Juno cycle now.
 
  Thanks!
  Kyle
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Please don't skip my spec on rootwrap daemon support:
  https://review.openstack.org/#/c/93889/
  It got -2'd by Mark McClain when my spec in oslo wasn't approved, but now
  that's fixed; it's just not easy to get hold of Mark.
  Code for that spec (also -2'd by Mark) is close to being finished and
  requires some discussion to get merged by Juno-3.
 
  --
 
  Kind regards, Yuriy.
 
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Yuriy Taraday
Hello, Ben.

On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 The oslo.serialization and oslo.concurrency graduation specs are both
 approved, but unfortunately I haven't made as much progress on them as I
 would like.  The serialization repo has been created and has enough acks
 to continue the process, and concurrency still needs to be started.

 Also unfortunately, I am unlikely to make progress on either over the
 next two weeks due to the tripleo meetup and vacation.  As discussed in
 the Oslo meeting last week
 (
 http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html
 )
 we would like to continue work on them during that time, so Doug asked
 me to look for volunteers to pick up the work and run with it.

 The current status and next steps for oslo.serialization can be found in
 the bp:
 https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

 As mentioned, oslo.concurrency isn't started and has a few more pending
 tasks, which are enumerated in the spec:

 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

 Any help would be appreciated.  I'm happy to pick this back up in a
 couple of weeks, but if someone could shepherd it along in the meantime
 that would be great!


I would be happy to work on graduating oslo.concurrency as well as
improving it after that. I like fiddling with OSes, threads and races :)
I can also help finish the work on oslo.serialization (it looks like some
steps are already finished there).

What would be needed to start working on that? I haven't been following
the development processes within Oslo, so I would need someone to answer
questions as they arise.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-07-23 Thread Yuriy Taraday
Hello.

I'd like to propose making a spec freeze exception for rootwrap-daemon-mode
spec [1].

Its goal is to save agents' execution time by using a daemon mode for
rootwrap, thus avoiding the Python interpreter startup time as well as the
sudo overhead for each call. A preliminary benchmark shows a 10x+ speedup of
the rootwrap interaction itself.
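To get a feel for the overhead being removed, here is a rough,
machine-dependent measurement of the bare interpreter startup cost paid on
every non-daemon rootwrap call (an illustration only; it is not the spec's
benchmark, and it ignores the extra sudo cost):

```python
import subprocess
import time

N = 10
start = time.monotonic()
for _ in range(N):
    # Each rootwrap invocation without daemon mode pays roughly this:
    # a fork/exec plus a full Python interpreter startup.
    subprocess.run(["python3", "-c", "pass"], check=True)
per_call = (time.monotonic() - start) / N
print("interpreter startup overhead: ~%.0f ms per call" % (per_call * 1000))
```

A daemon keeps one interpreter resident, so that cost is paid once rather
than per call.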

This spec has a number of supporters from the Neutron team (Carl and Miguel
gave it their +2 and +1), and all of its code is waiting for review [2], [3],
[4]. The only thing that has been blocking its progress is Mark's -2, left
when the oslo.rootwrap spec hadn't been merged yet. Now that's no longer the
case, and the code in oslo.rootwrap is steadily getting approved [5].

[1] https://review.openstack.org/93889
[2] https://review.openstack.org/82787
[3] https://review.openstack.org/84667
[4] https://review.openstack.org/107386
[5]
https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load known
 options/groups from the configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

    Then I want to automatically load the group1.key1 and group1.key2,
 without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in the
 parsed configuration for the resulting options?


I can imagine something like this:
1. iterate over undefined groups in config;
2. select groups of interest (e.g. by prefix or some regular expression);
3. register options in them;
4. use those options.

Registered group can be passed to a plugin/library that would register its
options in it.
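A sketch of those four steps using the stdlib configparser (oslo.cfg did not
expose step 1, which is the gap being discussed; the section and option
names here are invented):

```python
import configparser

conf_text = """
[plugin:backend1]
key1 = value1

[plugin:backend2]
key1 = value2

[unrelated]
foo = bar
"""
parser = configparser.ConfigParser()
parser.read_string(conf_text)

# 1-2. iterate over the groups in the config and pick those of interest
plugin_sections = [s for s in parser.sections() if s.startswith("plugin:")]

# 3-4. "register" the options of each selected group and use them
plugins = {s: dict(parser.items(s)) for s in plugin_sections}
assert sorted(plugins) == ["plugin:backend1", "plugin:backend2"]
assert plugins["plugin:backend2"]["key1"] == "value2"
```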

So the only thing that oslo.config lacks in its interface here is some way
to allow the first step. The rest can be overcome with some sugar.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1. key1 and group2.
 key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in the
 parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register its
 options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


A plugin would have to register its options under a fixed group. But what if
we want a number of plugin instances?



 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


I don't know exactly what the original author's intention is, but I don't
generally like the fact that all libraries and plugins wanting to use the
config have to influence the global CONF instance.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1. key1 and group2.
 key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in
 the parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register
 its options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


 Plugin would have to register its options under a fixed group. But what if
 we want a number of plugin instances?


 Presumably something would know a name associated with each instance and
 could pass it to the plugin to use when registering its options.




 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


 I don't exactly know what the original author's intention is but I don't
 generally like the fact that all libraries and plugins wanting to use
 config have to influence global CONF instance.


 That is a common misconception. The use of a global configuration option
 is an application developer choice. The config library does not require it.
 Some of the other modules in the oslo incubator expect a global config
 object because they started life in applications with that pattern, but as
 we move them to libraries we are updating the APIs to take a ConfigObj as
 argument (see oslo.messaging and oslo.db for examples).


What I mean is that instead of passing a ConfigObj and a section name as
arguments to some plugin/library, it would be cleaner for it to receive an
object that represents one section of the config, not the whole config at
once.
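A toy illustration of that idea (plain dicts stand in for the real config
object; this is not an oslo.config API):

```python
class SectionView:
    # Hands a plugin an object representing just one config section,
    # instead of the whole config plus a group name.
    def __init__(self, conf, group):
        self._conf = conf
        self._group = group

    def __getattr__(self, name):
        try:
            return self._conf[self._group][name]
        except KeyError:
            raise AttributeError(name)

conf = {"plugin1": {"timeout": 30}, "plugin2": {"timeout": 60}}
view = SectionView(conf, "plugin1")
assert view.timeout == 30  # the plugin never sees other sections
```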

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Fri, Jul 25, 2014 at 2:35 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 5:43 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name
 known options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

   Then I want to automatically load the group1.key1 and group2.key2,
 without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in
 the parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register
 its options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


 Plugin would have to register its options under a fixed group. But what
 if we want a number of plugin instances?


 Presumably something would know a name associated with each instance and
 could pass it to the plugin to use when registering its options.




 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


 I don't exactly know what the original author's intention is but I don't
 generally like the fact that all libraries and plugins wanting to use
 config have to influence global CONF instance.


 That is a common misconception. The use of a global configuration option
 is an application developer choice. The config library does not require it.
 Some of the other modules in the oslo incubator expect a global config
 object because they started life in applications with that pattern, but as
 we move them to libraries we are updating the APIs to take a ConfigObj as
 argument (see oslo.messaging and oslo.db for examples).


 What I mean is that instead of passing ConfigObj and a section name in
 arguments for some plugin/lib it would be cleaner to receive an object that
 represents one section of config, not the whole config at once.


 The new ConfigFilter class lets you do something like what you want [1].
 The options are visible only in the filtered view created by the plugin, so
 the application can’t see them. That provides better data separation, and
 prevents the options used by the plugin or library from becoming part of
 its API.

 Doug

 [1] http://docs.openstack.org/developer/oslo.config/cfgfilter.html


Yes, it looks like it. I didn't know about that, thanks!
I wonder who should wrap the CONF object into a ConfigFilter: the core or
the plugin.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [all][gerrit] any way to see my votes?

2014-07-31 Thread Yuriy Taraday
On Thu, Jul 31, 2014 at 2:23 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Hi all,

 in Gerrit UI, I would like to be able to see a separate column with my
 votes, so that I have a clear view of what was missed from my eye.
 I've looked in settings, and failed to find an option for this.

 Is there a way to achieve this?


You can use search for this: "label:Code-Review=0,self" will get you all
changes that don't have your -2, -1, +1 or +2. The same goes for other
labels.
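
A few related operators in the same spirit (operator names from Gerrit's
2.x search syntax; worth double-checking against your server's
documentation):

```
status:open owner:self                 # your own open changes
status:open reviewer:self              # open changes you are reviewing
status:open label:Code-Review=0,self   # open changes you haven't voted on
status:open is:starred                 # open changes you starred
```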

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-31 Thread Yuriy Taraday
On Wed, Jul 30, 2014 at 11:52 AM, Kyle Mestery mest...@mestery.com wrote:
 and even less
 possibly rootwrap [3] if the security implications can be worked out.

Can you please provide some input on those security implications that are
not worked out yet?
I'm really surprised to see such comments in some ML thread not directly
related to the BP. Why is my spec blocked? Neither spec [1] nor code (which
is available for a really long time now [2] [3]) can get enough reviewers'
attention because of those groundless -2's. Should I abandon these change
requests and file new ones to get some eyes on my code and proposals? It's
just getting ridiculous. Let's take a look at the timeline, shall we?

Mar, 25 - first version of the first part of Neutron code is published at
[2]
Mar, 28 - first reviewers arrive and it gets -1'd by Mark for lack of a BP
(thankfully it wasn't a -2 yet, so reviews continued);
Apr, 1 - both Oslo [5] and Neutron [6] BPs are created;
Apr, 2 - first version of the second part of Neutron code is published at
[3];
May, 16 - first version of Neutron spec is published at [1];
May, 19 - Neutron spec gets frozen by Mark's -2 (because Oslo BP is not
approved yet);
May, 21 - first part of Neutron code [2] is found generally OK by reviewers;
May, 21 - first version of Oslo spec is published at [4];
May, 29 - a version of the second part of Neutron code [3] is published
that later raises only minor comments by reviewers;
Jun, 5 - both parts of Neutron code [2] [3] get frozen by -2 from Mark
because BP isn't approved yet;
Jun, 23 - Oslo spec [4] is mostly ironed out;
Jul, 8 - Oslo spec [4] is merged, Neutron spec immediately gets +1 and +2;
Jul, 20 - SAD kicks in, no comments from Mark or anyone on blocked change
requests;
Jul, 24 - in response to Kyle's suggestion I'm filing SAD exception;
Jul, 31 - I get the final decision: "Your BP will extremely unlikely get
to Juno."

Do you see what I see? The code and spec were mostly finished in May (!),
where the "mostly" part comes down to the lack of reviewers caused by
Mark's -2. And a month later, when all the bureaucratic reasons fell away,
nothing happened. Don't think I didn't try to approach Mark. Don't think I
didn't approach Kyle on this issue. Because I did. But nothing happens,
another month passes by, and I get a general "you know, maybe later"
response. No one (except those who knew about it originally) has even
looked at my code for two months now, because Mark doesn't think (I hope
he did think) he should lift the -2, and I'm getting "why not wait another
3 months?"

What the hell is that? You don't want to land features that don't have
code two weeks before Juno-3, and I get that. But my code was almost
finished 3.5 months before that! And now you're considering throwing it to
Kilo because of some mystical issues that must have been covered in the
Oslo spec [4]; if you like, they can be covered in the Neutron spec [1] as
well, but you have to let reviewers see it!

I don't think that Mark's actions (or lack of them, actually) are what's
expected from a core reviewer. No reaction to requests from the developer
whose code got frozen by his -2. No reaction (at least no visible one) to
the PTL's requests (and Kyle assured me he made those). Should we consider
Mark uncontrollable and unreachable? Why does he have the -2 right in the
first place, then? So how should I overcome his inaction? I can recreate
the change requests and hope he won't just -2 them with no comment at all.
But that would just be a sign of total failure of our shiny bureaucracy.

[1] https://review.openstack.org/93889 - Neutron spec
[2] https://review.openstack.org/82787 - first part of Neutron code
[3] https://review.openstack.org/84667 - second part of Neutron code
[4] https://review.openstack.org/94613 - Oslo spec
[5] https://blueprints.launchpad.net/oslo/+spec/rootwrap-daemon-mode
[6] https://blueprints.launchpad.net/neutron/+spec/rootwrap-daemon-mode

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-31 Thread Yuriy Taraday
On Thu, Jul 31, 2014 at 12:30 PM, Thierry Carrez thie...@openstack.org
wrote:

 Carl Baldwin wrote:
  Let me know if I can help resolve the concerns around rootwrap.  I
  think in this case, the return on investment could be high with a
  relatively low investment.

 I agree the daemon work around oslo.rootwrap is very promising, but this
 is a bit sensitive so we can't rush it. I'm pretty confident
 oslo.rootwrap 1.3 (or 2.0) will be available by the Juno release, but
 realistically I expect most projects to switch to using it during the
 kilo cycle, rather than in the very last weeks of Juno...


Neutron has always been considered the first adopter of daemon mode.
Given that the code on the Neutron side is mostly finished, I think we can
safely switch Neutron first in Juno and wait until Kilo to switch the
other projects.


-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 5:15 AM, Angus Salkeld angus.salk...@rackspace.com
wrote:

 On Tue, 2014-08-05 at 03:18 +0400, Yuriy Taraday wrote:
  Hello, git-review users!
 
 
  I'd like to gather feedback on a feature I want to implement that
  might turn out useful for you.
 
 
  I like using Git for development. It allows me to keep track of
  current development process, it remembers everything I ever did with
  the code (and more).
  I also really like using Gerrit for code review. It provides clean
  interfaces, forces clean histories (who needs to know that I changed
  one line of code in 3am on Monday?) and allows productive
  collaboration.
  What I really hate is having to throw away my (local, precious for me)
  history for all change requests because I need to upload a change to
  Gerrit.

 I just create a short-term branch to record this.


I tend to use branches that are squashed down to one commit after the first
upload and that's it. I'd love to keep all history during feature
development, not just the tip of it.


 
  That's why I want to propose making git-review to support the workflow
  that will make me happy. Imagine you could do smth like this:
 
 
  0. create new local branch;
 
 
  master: M--
           \
  feature:   *
 
 
  1. start hacking, doing small local meaningful (to you) commits;
 
 
  master: M--
           \
  feature:   A-B-...-C
 
 
  2. since hacking takes tremendous amount of time (you're doing a Cool
  Feature (tm), nothing less) you need to update some code from master,
  so you're just merging master in to your branch (i.e. using Git as
  you'd use it normally);
 
  master: M---N-O-...
           \   \  \
  feature:   A-B-...-C-D-...
 
 
  3. and now you get the first version that deserves to be seen by
  community, so you run 'git review', it asks you for desired commit
  message, and poof, magic-magic all changes from your branch is
  uploaded to Gerrit as _one_ change request;
 
  master: M---N-O-...
           \   \  \       E* = uploaded
  feature:   A-B-...-C-D-...-E
 
 
  4. you repeat steps 1 and 2 as much as you like;
  5. and all consecutive calls to 'git review' will show you last commit
  message you used for upload and use it to upload new state of your
  local branch to Gerrit, as one change request.
 
 
  Note that during this process git-review will never run rebase or
  merge operations. All such operations are done by user in local branch
  instead.
 
 
  Now, to the dirty implementations details.
 
 
  - Since suggested feature changes default behavior of git-review,
  it'll have to be explicitly turned on in config
  (review.shadow_branches? review.local_branches?). It should also be
  implicitly disabled on master branch (or whatever is in .gitreview
  config).
  - Last uploaded commit for branch branch-name will be kept in
  refs/review-branches/branch-name.
  - For every call of 'git review' it will find latest commit in
  gerrit/master (or remote and branch from .gitreview), create a new one
  that will have that commit as its parent and a tree of current commit
  from local branch as its tree.
  - While creating new commit, it'll open an editor to fix commit
  message for that new commit taking it's initial contents from
  refs/review-branches/branch-name if it exists.
  - Creating this new commit might involve generating a temporary bare
  repo (maybe even with shared objects dir) to prevent changes to
  current index and HEAD while using bare 'git commit' to do most of the
  work instead of loads of plumbing commands.
 
 
  Note that such approach won't work for uploading multiple change
  request without some complex tweaks, but I imagine later we can
  improve it and support uploading several interdependent change
  requests from several local branches. We can resolve dependencies
  between them by tracking latest merges (if branch myfeature-a has been
  merged to myfeature-b then change request from myfeature-b will depend
  on change request from myfeature-a):
 
  master:      M---N-O-...
                \   \  \   E*
  myfeature-a:   A-B-...-C-D-...-E   \
                  \       \           J* = uploaded
  myfeature-b:     F-...-G-I-J
 
 
  This improvement would be implemented later if needed.
 
 
  I hope such a feature seems useful not just to me, and I'm looking
  forward to comments on it.

 Hi Yuriy,

 I like my local history matching what is up for review and
 don't value the interim messy commits (I make a short term branch to
 save the history so I can go back to it - if I mess up a merge).


You'll still get this history in those special refs. But in your branch
you'll have your own history.



 Tho' others might love this idea.

 -Angus



-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 3:06 PM, Ryan Brown rybr...@redhat.com wrote:

  On 08/04/2014 07:18 PM, Yuriy Taraday wrote:
  snip

 +1, this is definitely a feature I'd want to see.

 Currently I run two branches bug/LPBUG#-local and bug/LPBUG# where
 the local is my full history of the change and the other branch is the
 squashed version I send out to Gerrit.


And I'm too lazy to keep switching between these branches :)
Great, you're the first to support this feature!

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 5:27 PM, Sylvain Bauza sba...@redhat.com wrote:

 -1 to this as git-review default behaviour.


I'm not suggesting making it the default behavior. As I wrote, there will
definitely be a config option to turn it on.


 Ideally, branches should be identical in between Gerrit and local Git.


The thing is that there are no feature branches in Gerrit, just a number
of independent commits (patchsets). And you'll even get a log of those
locally in special refs!


 I can understand some exceptions where developers want to work on
 intermediate commits and squash them before updating Gerrit, but in that
 case, I can't see why it needs to be kept locally. If a new patchset has to
 be done on patch A, then the local branch can be rebased interactively on
 last master, edit patch A by doing an intermediate patch, then squash the
 change, and pick the later patches (B to E)


And that works up to the point when your change request has evolved for
several months and there's no easy way to dig up why you changed that
default or how this algorithm ended up in its current shape. You can't
simply run bisect to find what you broke 10 patchsets ago. Git was
designed to make keeping branches, most of them local, super easy. And we
can't properly use them.
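
For illustration, here is what that bisect looks like when the local
history is kept (a toy repo; the inline `test` predicate stands in for a
real test suite, and the commit layout is invented):

```shell
#!/bin/sh
# Toy repo: five "iterations" of a feature branch; iterations where
# val > 3 are considered broken. git bisect finds the first bad one.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email dev@example.com
git config user.name Dev

for i in 1 2 3 4 5; do
  echo "$i" > val
  git add val
  git commit -qm "iteration $i"
done

# bad = current tip, good = the first iteration;
# the run command exits 0 for good commits and non-zero for bad ones
git bisect start HEAD HEAD~4 > /dev/null 2>&1
git bisect run sh -c 'test "$(cat val)" -le 3' > /dev/null 2>&1
first_bad=$(git rev-parse refs/bisect/bad)
git bisect reset > /dev/null 2>&1
git show -s --format='first bad: %s' "$first_bad"
```

With a rebase-and-squash workflow those intermediate commits no longer
exist, so there is nothing for bisect to walk over.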


 That said, I can also understand that developers work their way, and so
 could dislike squashing commits, hence my proposal to have a --no-squash
 option when uploading, but use with caution (for a single branch, how many
 dependencies are outdated in Gerrit because developers work on separate
 branches for each single commit while they could work locally on a single
 branch ? I can't iimagine how often errors could happen if we don't force
 by default to squash commits before sending them to Gerrit)


I don't quite get the reason for a --no-squash option. With current
git-review there's no squashing at all: you either upload all outstanding
commits or you go and change something yourself. With my suggested
approach you don't squash (in terms of rebasing) anything; you just
create a new commit with the very same contents as your branch.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 6:49 PM, Ryan Brown rybr...@redhat.com wrote:

 On 08/05/2014 09:27 AM, Sylvain Bauza wrote:
 
  On 05/08/2014 13:06, Ryan Brown wrote:
  -1 to this as git-review default behaviour. Ideally, branches should be
  identical in between Gerrit and local Git.

 Probably not as default behaviour (people who don't want that workflow
 would be driven mad!), but I think enough folks would want it that it
 should be available as an option.


This would definitely be a feature that only some users would turn on in
their config files.


 I am well aware this may be straying into feature creep territory, and
 it wouldn't be terrible if this weren't implemented.


I'm not sure I understand what you mean by this...

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 7:51 PM, ZZelle zze...@gmail.com wrote:

 Hi,


 I like the idea  ... with complex change, it could useful for the
 understanding to split it into smaller changes during development.


 Do we need to expose such feature under git review? we could define a new
 subcommand? git reviewflow?


Yes, I think we should definitely make it an enhancement to the 'git
review' command, because it's essentially the same 'git review' control
flow with an extra preparation step and a slightly shifted upload source.
git-review is a magic command that does whatever is needed to finish with
a change request upload. And this is exactly what I want here.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [OpenStack-Infra] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 8:20 PM, Varnau, Steve (Trafodion) 
steve.var...@hp.com wrote:

  Yuriy,



 It looks like this would automate a standard workflow that my group often
 uses: multiple commits, create “delivery” branch, git merge --squash, git
 review.  That looks really useful.



 Having it be repeatable is a bonus.


That's great! I'm glad to hear that there are more and more supporters for
it.
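
For the record, the manual version of that workflow is easy to script
today; a rough sketch of the two-branch dance (branch names and messages
invented, the final 'git review' left out):

```shell
#!/bin/sh
# Two-branch workflow: full history stays on "feature", while
# "feature-delivery" carries one squashed commit for Gerrit.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git checkout -qb master
git config user.email dev@example.com
git config user.name Dev

echo base > f && git add f && git commit -qm "base"

git checkout -qb feature
echo one >> f && git commit -qam "wip: first try"
echo two >> f && git commit -qam "wip: better approach"

# delivery branch: exactly one commit on top of master, same final tree
git checkout -qb feature-delivery master
git merge --squash -q feature > /dev/null
git commit -qm "Cool Feature (for review)"
# at this point one would run: git review

git rev-list --count master..feature-delivery
```

The proposal essentially automates these last four steps and remembers
the squashed commit between uploads.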


  Per last bullet of the implementation, I would not require not modifying
 current index/HEAD. A checkout back to working branch can be done at the
 end, right?


To make this magic commit we'd have to point HEAD back to the latest
commit in master, load the tree of the latest commit in the feature
branch into the index, and then do the commit. To do this properly,
without hurting the worktree, messing up the index, or losing HEAD, I
think it'd be safer to create a very small clone. As a bonus, you won't
have to stash your local changes or current index to run 'git review'.
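
Incidentally, the "new commit with master's tip as parent and the feature
branch's tree" can also be built with plumbing alone: `git commit-tree`
touches neither HEAD, nor the index, nor the worktree, so the tiny clone
may not even be needed. A rough sketch of the mechanism (repo layout,
branch names, and messages invented):

```shell
#!/bin/sh
# Build the "upload" commit without disturbing the working branch:
# parent = tip of master, tree = feature's tree, via git commit-tree.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git checkout -qb master
git config user.email dev@example.com
git config user.name Dev

echo base > f && git add f && git commit -qm "M: initial"
git checkout -qb feature
echo a >> f && git commit -qam "A: first attempt"
echo b >> f && git commit -qam "B: scratch that"

git checkout -q master
echo other > g && git add g && git commit -qm "N: master moved on"

git checkout -q feature
git merge -q -m "C: merge master in" master   # step 2 of the proposal

echo dirty > g   # uncommitted local edit; must survive untouched

# the upload commit, plus a ref to remember it by (name from the proposal)
squashed=$(git commit-tree 'feature^{tree}' -p master -m "Cool Feature")
git update-ref refs/review-branches/feature "$squashed"

cat g   # the worktree is untouched
```

Since commit-tree is pure plumbing, the only porcelain-level reason for
the clone is reusing 'git commit' (hooks, message editing) safely.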

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.

 I don't understand this.  If it's a complex change that you need
 multiple commits to keep track of locally, why wouldn't reviewers want
 the same thing?  Squashing a bunch of commits together solely so you
 have one review for Gerrit isn't a good thing.  Is it just the warning
 message that git-review prints when you try to push multiple commits
 that is the problem here?


When you're developing some big change you'll end up trying dozens of
different approaches and making thousands of mistakes. For reviewers this
is just unnecessary noise (commit title: "Scratch my last CR, that was
bullshit") while for you it's precious history that can provide a basis
for future research or bug-hunting.

Merges are one of the strong sides of Git itself (and keeping them very
easy is one of the founding principles behind it). With the current
workflow we don't use them at all. Has master moved too far forward? You
have to rebase, screwing up all your local history, and most likely
squash everything anyway because you don't want to fix up commits with
known bugs in them. With the proposed feature you can just merge once and
let 'git review' add some magic without ever hurting your code.

And speaking of breaking down change requests, don't forget the support
for chains of change requests that this feature would lead to. How do you
deal with 5 consecutive change requests that are up for review for half a
year? The only thing I could suggest to my colleague at the time was
"Erm... learn Git and dance with rebases, detached heads and reflogs!" My
proposal might take care of that too.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 1:17 AM, Ben Nemec openst...@nemebean.com wrote:

 On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
  On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.
 
  I don't understand this.  If it's a complex change that you need
  multiple commits to keep track of locally, why wouldn't reviewers want
  the same thing?  Squashing a bunch of commits together solely so you
  have one review for Gerrit isn't a good thing.  Is it just the warning
  message that git-review prints when you try to push multiple commits
  that is the problem here?
 
 
  When you're developing some big change you'll end up with trying dozens
 of
  different approaches and make thousands of mistakes. For reviewers this
 is
  just unnecessary noise (commit title Scratch my last CR, that was
  bullshit) while for you it's a precious history that can provide basis
 for
  future research or bug-hunting.

 So basically keeping a record of how not to do it?


Well, yes, you can call a version control system a history of failures,
because if there were no failures there would've been one omnipotent
commit that does everything you want it to.


  I get that, but I
 think I'm more onboard with the suggestion of sticking those dead end
 changes into a separate branch.  There's no particular reason to keep
 them on your working branch anyway since they'll never merge to master.


The commits themselves are never going to merge to master, but that's not
the only meaning of their life. With current tooling the working branch
ends up as a patch series that is constantly rewritten, with no proper
history of when that happened or why. As I said, you can't find the roots
of bugs in your code, and you can't dig into old versions of it (what if
you need a method that you'd already written but removed because of some
wrong suggestion?).

 They're basically unnecessary conflicts waiting to happen.


No. They are your local history. They don't need to be rebased on top of
master - you can just merge master into your branch and resolve conflicts
once. After that your autosquashed commit will merge cleanly back to
master.


  Merges are one of the strong sides of Git itself (and keeping them very
  easy is one of the founding principles behind it). With current workflow
 we
  don't use them at all. master went too far forward? You have to do rebase
  and screw all your local history and most likely squash everything anyway
  because you don't want to fix commits with known bugs in them. With
  proposed feature you can just do merge once and let 'git review' add some
  magic without ever hurting your code.

 How do rebases screw up your local history?  All your commits are still
 there after a rebase, they just have a different parent.  I also don't
 see how rebases are all that much worse than merges.  If there are no
 conflicts, rebases are trivial.  If there are conflicts, you'd have to
 resolve them either way.


A merge is a new commit, a new recorded point in history. A rebase
rewrites your commit, replacing it with a new one, without any record in
history (of course there will be a record in the reflog, but there's not
much tooling to work with it). Yes, you just apply your patch to a
different version of the master branch. And then fix some conflicts. And
then fix some tests. And then you end up with a totally different commit.
I totally agree that life's very easy when there are no conflicts and
you've written your whole feature in one go. But that's almost never true.


 I also reiterate my point about not keeping broken commits on your
 working branch.  You know at some point they're going to get
 accidentally submitted. :-)


Well... As long as you use 'git review' to upload CRs, you're safe. If you
do 'git push gerrit HEAD:refs/for/master' you're screwed. But why would you
do that?


 As far as letting git review do magic, how is that better than git
 rebase once and no magic required?  You deal with the conflicts and
 you're good to go.


The number of manual steps is the same: if your patch cannot be merged
into master, you merge master into your local branch and you're good to
go. But as I said, a merge will be remembered, a rebase won't. And after
that rebase/merge your tests might end up failing, and you'll have to
rewrite your commit again with --amend, with no record in history.


 And if someone asks you to split a commit, you can
 do it.  With this proposal you can't, because anything but squashing
 into one commit is going to be a nightmare (which might be my biggest
 argument against this).


You can do it with the new approach as well; see the end of the proposal.
You split your current branch into a number of branches and let
git-review detect which depends on which.

 And speaking about breaking down of change

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
I'd like to stress this to everyone: I DO NOT propose squashing together
commits that should belong to separate change requests. I DO NOT propose
uploading all your changes at once. I DO propose letting developers keep
the local history of all the iterations they go through with a change
request. History that matters to absolutely no one but that developer.

On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net wrote:

 Ben Nemec openst...@nemebean.com writes:

  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title Scratch my
  last CR, that was bullshit) while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
   They're basically unnecessary conflicts waiting to happen.

 Yeah, I would never keep broken or unfinished commits around like this.
 In my opinion (as a core Mercurial developer), the best workflow is to
 work on a feature and make small and large commits as you go along. When
 the feature works, you begin squashing/splitting the commits to make
 them into logical pieces, if they aren't already in good shape. You then
 submit the branch for review and iterate on it until it is accepted.


Absolutely true. And it's mostly the same workflow that happens in
OpenStack: you build your cool feature, you carve meaningful, small,
self-contained pieces out of it, and you submit a series of change
requests. Nothing in my proposal conflicts with that. It just provides a
way to make the developer's side of this simpler (which is the intent of
git-review, isn't it?) without changing the external artifacts of one's
work: the same change requests, with the same granularity.


 As a reviewer, it cannot be stressed enough how much small, atomic,
 commits help. Squashing things together into large commits make reviews
 very tricky and removes the possibility of me accepting a later commit
 while still discussing or rejecting earlier commits (cherry-picking).


That's true, too. But please don't think I'm proposing to squash everything
together and push 10k-loc patches. I hate that, too. I'm proposing to let
developer use one's tools (Git) in a simpler way.
And the simpler way (for some of us) would be to have one local branch for
every change request, not one branch for the whole series. Switching
between branches is very well supported by Git and doesn't require extra
thinking. Jumping around in detached HEAD state and editing commits during
rebase requires remembering all those small details.

 FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than conflicts,
  of course, which are going to be an issue with any long-running change
  no matter how it's submitted. There isn't a ton of git magic involved.

 I agree. The conflicts you talk about are intrinsic to the parallel
 development. Doing a rebase is equivalent to doing a series of merges,
 so if rebase gives you conflicts, you can be near certain that a plain
 merge would give you conflicts too. The same applies other way around.


You're disregarding other issues that can happen with a patch series. You
might need something more than a rebase. You might need to fix something.
You might need to focus on one commit in the middle and make a huge bunch
of changes to it alone. And I propose simply letting the developer keep
track of what they've been doing instead of forcing them to remember all
of it.

 So as you may have guessed by now, I'm opposed to adding this to
  git-review. I think it's going to encourage bad committer behavior
  (monolithic commits) and doesn't address a use case I find compelling
  enough to offset that concern.

 I don't understand why this would even be in the domain of git-review. A
 submitter can do the puff magic stuff himself using basic Git commands
 before he submits the collapsed commit.


Isn't "puff magic" exactly the domain of git-review? You can upload your
changes with 'git push gerrit HEAD:refs/for/master' and do all your
rebasing by yourself, but somehow we ended up with this tool that
simplifies common tasks related to uploading changes to Gerrit.
And (at least for some) this change would simplify their day-to-day
workflow of uploading changes to Gerrit in the same way.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 12:55 PM, Sylvain Bauza sba...@redhat.com wrote:


 Le 06/08/2014 10:35, Yuriy Taraday a écrit :

  I'd like to stress this to everyone: I DO NOT propose squashing together
 commits that should belong to separate change requests. I DO NOT propose to
 upload all your changes at once. I DO propose letting developers to keep
 local history of all iterations they have with a change request. The
 history that absolutely doesn't matter to anyone but this developer.


 Well, I can understand that for ease, we could propose it as an option in
 git-review, but I'm just thinking that if you consider your local Git repo
 as your single source of truth (and not Gerrit), then you just have to make
 another branch and squash your intermediate commits for Gerrit upload only.


That's my proposal: generate such branches automatically. And
from this thread it looks like some people already create them by hand.


 If you need modifying (because of another iteration), you just need to
 amend the commit message on each top-squasher commit by adding the
 Change-Id on your local branch, and redo the process (make a branch,
 squash, upload) each time you need it.


I don't quite understand the "top-squasher commit" part, but what I'm
suggesting is to automate this process to make users, including myself,
happier.


 Gerrit is cool, it doesn't care about SHA-1s but only Change-Id, so
 cherry-picking and rebasing still works (hurrah)


Yes, and that's the only stable part of those extra branches.


 tl;dr: do as many as intermediate commits you want, but just generate a
 Change-ID on the commit you consider as patch, so you just squash the
 intermediate commits on a separate branch copy for Gerrit use only
 (one-way).

 Again, I can understand the above as hacky, so I'm not against your
 change, just emphasizing it as non-necessary (but anyway, everything can be
 done without git-review, even the magical -m option :-) )


I'd even prefer to leave it to the git config file so that it won't get
accidentally enabled unless the user knows what they're doing.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
  I'd like to stress this to everyone: I DO NOT propose squashing together
  commits that should belong to separate change requests. I DO NOT propose
 to
  upload all your changes at once. I DO propose letting developers to keep
  local history of all iterations they have with a change request. The
  history that absolutely doesn't matter to anyone but this developer.

 Right, I understand that may not be the intent, but it's almost
 certainly going to be the end result.  You can't control how people are
 going to use this feature, and history suggests if it can be abused, it
 will be.


Can you please outline an abuse scenario that isn't present nowadays?
People upload huge changes and are encouraged to split them during review.
The same will happen with the proposed workflow. More experienced developers
split their change into a set of change requests. The very same will happen
with the proposed workflow.


  On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net
 wrote:
 
  Ben Nemec openst...@nemebean.com writes:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title Scratch my
  last CR, that was bullshit) while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
   They're basically unnecessary conflicts waiting to happen.
 
  Yeah, I would never keep broken or unfinished commits around like this.
  In my opinion (as a core Mercurial developer), the best workflow is to
  work on a feature and make small and large commits as you go along. When
  the feature works, you begin squashing/splitting the commits to make
  them into logical pieces, if they aren't already in good shape. You then
  submit the branch for review and iterate on it until it is accepted.
 
 
  Absolutely true. And it's mostly the same workflow that happens in
  OpenStack: you do your cool feature, you carve meaningful small
  self-contained pieces out of it, you submit series of change requests.
  And nothing in my proposal conflicts with it. It just provides a way to
  make developer's side of this simpler (which is the intent of git-review,
  isn't it?) while not changing external artifacts of one's work: the same
  change requests, with the same granularity.
 
 
  As a reviewer, it cannot be stressed enough how much small, atomic,
  commits help. Squashing things together into large commits make reviews
  very tricky and removes the possibility of me accepting a later commit
  while still discussing or rejecting earlier commits (cherry-picking).
 
 
  That's true, too. But please don't think I'm proposing to squash
 everything
  together and push 10k-loc patches. I hate that, too. I'm proposing to let
  developer use one's tools (Git) in a simpler way.
  And the simpler way (for some of us) would be to have one local branch
 for
  every change request, not one branch for the whole series. Switching
  between branches is very well supported by Git and doesn't require extra
  thinking. Jumping around in detached HEAD state and editing commits
 during
  rebase requires remembering all those small details.
 
  FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than conflicts,
  of course, which are going to be an issue with any long-running change
  no matter how it's submitted. There isn't a ton of git magic involved.
 
  I agree. The conflicts you talk about are intrinsic to the parallel
  development. Doing a rebase is equivalent to doing a series of merges,
  so if rebase gives you conflicts, you can be near certain that a plain
  merge would give you conflicts too. The same applies other way around.
 
 
  You disregard other issues that can happen with patch series. You might
  need something more that rebase. You might need to fix something. You
 might
  need to focus on the one commit in the middle and do huge bunch of
 changes
  in it alone. And I propose to just allow developer to keep track of
 what's
  one been doing instead of forcing one to remember all of this.

 This is a separate issue though.  Editing a commit in the middle of a
 series doesn't have to be done at the same time as a rebase to master.


No, this will be done with a separate interactive rebase or that detached
HEAD and reflog dance. I don't see that as any clearer than making proper
commits in separate branches.

In fact, not having

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 12:41 AM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 1:17 AM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
  On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com
  wrote:
 
  On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.
 
  I don't understand this.  If it's a complex change that you need
  multiple commits to keep track of locally, why wouldn't reviewers want
  the same thing?  Squashing a bunch of commits together solely so you
  have one review for Gerrit isn't a good thing.  Is it just the warning
  message that git-review prints when you try to push multiple commits
  that is the problem here?
 
 
  When you're developing some big change you'll end up with trying dozens
  of
  different approaches and make thousands of mistakes. For reviewers this
  is
  just unnecessary noise (commit title Scratch my last CR, that was
  bullshit) while for you it's a precious history that can provide basis
  for
  future research or bug-hunting.
 
  So basically keeping a record of how not to do it?
 
 
  Well, yes, you can call version control system a history of failures.
  Because if there were no failures there would've been one omnipotent
 commit
  that does everything you want it to.

 Ideally, no.  In a perfect world every commit would work, so the version
 history would be a number of small changes that add up to this great
 application.  In reality it's a combination of new features, oopses, and
 fixes for those oopses.  I certainly wouldn't describe it as a history
 of failures though.  I would hope the majority of commits to our
 projects are _not_ failures. :-)


Well, new features are merged just to be fixed and refactored later - how
is that not a failure? And we basically do keep a record of "how not to do
it" in our repositories. Why prevent developers from doing the same on a
smaller scale?

  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
 
 
  The commits themselves are never going to merge to master but that's not
  the only meaning of their life. With current tooling working branch
 ends
  up a patch series that is constantly rewritten with no proper history of
  when did that happen and why. As I said, you can't find roots of bugs in
  your code, you can't dig into old versions of your code (what if you
 need a
  method that you've already created but removed because of some wrong
  suggestion?).

 You're not going to find the root of a bug in your code by looking at an
 old commit that was replaced by some other implementation.  If anything,
 I see that as more confusing.  And if you want to keep old versions of
 your code, either push it to Gerrit or create a new branch before
 changing it further.


So you propose two options:
- store the history of your work within Gerrit's patchsets for each change
request, which doesn't fit the "commit often" approach (who'd want to see how
I struggle with fixing some bug or writing a working test?);
- store the history of your work in new branches instead of commits in the
same branch, which... is not how Git is supposed to be used.
And neither option provides any proper way of searching through this history.

Have you ever used bisect? Sometimes I find myself wanting to use it
instead of manually digging through patchsets in Gerrit to find out which
change of mine broke some use case I hadn't yet covered in unit tests.

  They're basically unnecessary conflicts waiting to happen.
 
 
  No. They are your local history. They don't need to be rebased on top of
  master - you can just merge master into your branch and resolve conflicts
  once. After that your autosquashed commit will merge clearly back to
  master.

 Then don't rebase them.  git checkout -b dead-end and move on. :-)


I never proposed to rebase anything. I want to use merge instead of rebase.

  Merges are one of the strong sides of Git itself (and keeping them very
  easy is one of the founding principles behind it). With current
 workflow
  we
  don't use them at all. master went too far forward? You have to do
 rebase
  and screw all your local history and most likely squash everything
 anyway
  because you don't want to fix commits with known bugs in them. With
  proposed feature you can just do merge once and let 'git review' add
 some
  magic without ever hurting your code.
 
  How do rebases screw up your local history?  All your commits are still
  there after a rebase, they just have a different parent.  I also don't
  see how rebases are all that much worse than merges.  If there are no
  conflicts, rebases are trivial

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
I'll start using pictures now, so let's assume M is the latest commit on
master.

On Wed, Aug 6, 2014 at 9:31 PM, Zane Bitter zbit...@redhat.com wrote:

 On 04/08/14 19:18, Yuriy Taraday wrote:

 Hello, git-review users!

 I'd like to gather feedback on a feature I want to implement that might
 turn out useful for you.

 I like using Git for development. It allows me to keep track of current
 development process, it remembers everything I ever did with the code
 (and more).


 _CVS_ allowed you to remember everything you ever did; Git is _much_ more
 useful.


  I also really like using Gerrit for code review. It provides clean
 interfaces, forces clean histories (who needs to know that I changed one
 line of code in 3am on Monday?) and allows productive collaboration.


 +1


  What I really hate is having to throw away my (local, precious for me)
 history for all change requests because I need to upload a change to
 Gerrit.


 IMO Ben is 100% correct and, to be blunt, the problem here is your
 workflow.


Well... that's the workflow that was born with Git: keeping track of all
changes, doing extremely cheap merges, and all that.

Don't get me wrong, I sympathise - really, I do. Nobody likes to change
 their workflow. I *hate* it when I have to change mine. However what you're
 proposing is to modify the tools to make it easy for other people - perhaps
 new developers - to use a bad workflow instead of to learn a good one from
 the beginning, and that would be a colossal mistake. All of the things you
 want to be made easy are currently hard because doing them makes the world
 a worse place.


And when OpenStack switched to Gerrit I was really glad. Instead of ugly

master: ...-M-.-o-o-...
 \   /
  a1-b1-a2-a3-b2-c1-b3-c2

where a[1-3], b[1-3] and c[1-2] are iterations over the same piece of the
feature, we can have pretty

master: ...-M-.o-.-o-...
 \/   /
  A^-B^-C^

where A^, B^ and C^ are the perfect self-contained, independently
reviewable and mergeable pieces of the feature.

And this looked splendid, and my workflow seemed clear. Suppose I have
something like:

master: ...-M
 \
  A3-B2-C1

and I need to update B to B3 and C to C2. So I go:
$ git rebase -i M  # and add edit action to B commit
$ vim # do some changes, test them, etc
$ git rebase --continue
now I have

master: ...-M
 \
  A3-B2-C1
\
 B3-C1'

Then I fix C commit, amend it and get:

master: ...-M
 \
  A3-B2-C1
\
 B3-C1'
   \
С2

Now everything's cool, isn't it? But the world isn't fair. And C2 fails a test
that I didn't expect to fail. Or the test suite failed to fail earlier. I'd
like to see if I broke it just now or whether it was already broken after the
rebase. How do I do that? With your workflow, I don't. I play it smart and
guess where the problem was, or dig into the reflog to find C1' (or C1), etc.
Let's see what else I can't find. After a full iteration over this feature (as
in the first picture) I end up with a total history like this:

master: ...-M
|\
| A1-B1
|\
| A2-B1'
 \
  A3-B1''
   |\
   | B2-C1
\
 B3-C1'
   \
С2

With only A3, B3 and C2 available, the rest are practically unreachable.
Now you find out that something you were sure was working in B1 is
broken (you'll tell me "hey, you're supposed to have tests for
everything!" - I'll answer: what if you've found a problem in the test
suite that gave a false success?). You can do absolutely nothing to localize
the issue now. Just go and dig into your B code (which might've been
written months ago).
Or you slap your head, realizing that the function you thought was not
needed in B2 is actually needed. Well, you can hope you uploaded B2 to
Gerrit and you'll find it there. Or you didn't, because you decided to make
that change the minute after you committed C1, so B3 was created and B2 never
existed...

Now imagine you could somehow link together all As, Bs and Cs. Let's draw
vertical edges between them. And let's transpose the picture, shall we?

master: ...-M
 \
  A1-A2--A3
\  \   \\  \
 B1-B1'-B1''-B2-B3
   \  \   \
C1-C1'-C2

Note that all commits here are absolutely the same as in the previous picture.
They just have additional parents (and consequently different IDs). No
changes to any code in them happen. No harm done.

So now it looks way better. I can just do:
$ git checkout B3
$ git diff HEAD~
and find my lost function!

Now let's be honest and admit that As, Bs and Cs are essentially branches -
labels your commits have that shift with relevant

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
Oh, looks like we got a bit of a race condition in messages. I hope you
don't mind.


On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 01:42 PM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
  I'd like to stress this to everyone: I DO NOT propose squashing
 together
  commits that should belong to separate change requests. I DO NOT
 propose
  to
  upload all your changes at once. I DO propose letting developers to
 keep
  local history of all iterations they have with a change request. The
  history that absolutely doesn't matter to anyone but this developer.
 
  Right, I understand that may not be the intent, but it's almost
  certainly going to be the end result.  You can't control how people are
  going to use this feature, and history suggests if it can be abused, it
  will be.
 
 
  Can you please outline the abuse scenario that isn't present nowadays?
  People upload huge changes and are encouraged to split them during
 review.
  The same will happen within proposed workflow. More experienced
 developers
  split their change into a set of change requests. The very same will
 happen
  within proposed workflow.

 There will be a documented option in git-review that automatically
 squashes all commits.  People _will_ use that incorrectly because from a
 submitter perspective it's easier to deal with one review than multiple,
 but from a reviewer perspective it's exactly the opposite.


It won't be documented as such. It will come with "use with care" and "years
of Git experience: 3+" stickers. Autosquashing will never be mentioned
there - only a detailed explanation of how to work with it and (probably)
how it works. No rogue dev will get through that without gaining a true
understanding.

By the way, git-review currently offers to squash your outstanding
commits, but there is no overwhelming flow of overly huge change requests,
is there?

 On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net
  wrote:
 
  Ben Nemec openst...@nemebean.com writes:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title Scratch my
  last CR, that was bullshit) while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to
 master.
   They're basically unnecessary conflicts waiting to happen.
 
  Yeah, I would never keep broken or unfinished commits around like
 this.
  In my opinion (as a core Mercurial developer), the best workflow is to
  work on a feature and make small and large commits as you go along.
 When
  the feature works, you begin squashing/splitting the commits to make
  them into logical pieces, if they aren't already in good shape. You
 then
  submit the branch for review and iterate on it until it is accepted.
 
 
  Absolutely true. And it's mostly the same workflow that happens in
  OpenStack: you do your cool feature, you carve meaningful small
  self-contained pieces out of it, you submit series of change requests.
  And nothing in my proposal conflicts with it. It just provides a way to
  make developer's side of this simpler (which is the intent of
 git-review,
  isn't it?) while not changing external artifacts of one's work: the
 same
  change requests, with the same granularity.
 
 
  As a reviewer, it cannot be stressed enough how much small, atomic,
  commits help. Squashing things together into large commits make
 reviews
  very tricky and removes the possibility of me accepting a later commit
  while still discussing or rejecting earlier commits (cherry-picking).
 
 
  That's true, too. But please don't think I'm proposing to squash
  everything
  together and push 10k-loc patches. I hate that, too. I'm proposing to
 let
  developer use one's tools (Git) in a simpler way.
  And the simpler way (for some of us) would be to have one local branch
  for
  every change request, not one branch for the whole series. Switching
  between branches is very well supported by Git and doesn't require
 extra
  thinking. Jumping around in detached HEAD state and editing commits
  during
  rebase requires remembering all those small details.
 
  FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than
 conflicts,
  of course, which are going to be an issue with any long-running
 change
  no matter how it's submitted. There isn't a ton of git magic
 involved.
 
  I agree. The conflicts you talk

Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Yuriy Taraday
Hello.

On Wed, Apr 23, 2014 at 2:40 AM, James E. Blair jebl...@openstack.orgwrote:

 * The new Workflow label will have a -1 Work In Progress value which
   will replace the Work In Progress button and review state.  Core
   reviewers and change owners will have permission to set that value
   (which will be removed when a new patchset is uploaded).


Wouldn't it be better to make this label more persistent?
As I remember, there were some ML threads about keeping the WIP mark across
patch sets. There was even talk of changing git-review to support this.
How about we make it better with the new version of Gerrit?

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Yuriy Taraday
On Fri, Apr 25, 2014 at 8:10 PM, Zaro zaro0...@gmail.com wrote:

 Gerrit 2.8 allows setting label values on patch sets either thru the
 command line[1] or REST API[2].  Since we will setup WIP as a -1 score
 on a label this will just be a matter of updating git-review to set
 the label on new patchsets.  I'm no sure if there's a bug entered in
 our the issue tracker for this but you are welcome to create one.

 [1] https://review-dev.openstack.org/Documentation/cmd-review.html
 [2]
 https://review-dev.openstack.org/Documentation/rest-api-changes.html#set-review


Why do you object to making it the default behavior on the Gerrit side?
Is there any issue with carrying this label over to new patch sets?

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-26 Thread Yuriy Taraday
On Fri, Apr 25, 2014 at 11:41 PM, Zaro zaro0...@gmail.com wrote:

 Do you mean making it default to WIP on every patchset that gets
 uploaded?


No. I mean carrying WIP over to all new patch sets once it is set, just like
Code-Review -2 is handled by default.

Gerrit 2.8 does allow you to carry the same label score forward[1] if
 it's either a trivial rebase or no code has changed.  We plan to set
 these options for the 'Code-Review' label, but not the Workflow label.

 [1]
 https://gerrit-review.googlesource.com/Documentation/config-labels.html


It looks like the copyMinScore option for the Workflow label will do what I'm
talking about.
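
For reference, here's roughly how that might look in a project's label
configuration (a sketch based on Gerrit's project.config label syntax; the
exact values OpenStack uses should be checked against the infra config):

```ini
[label "Workflow"]
    value = -1 Work In Progress
    value =  0 Ready for reviews
    value = +1 Approved
    copyMinScore = true
```

With copyMinScore set, a -1 (WIP) vote would be carried forward to new patch
sets automatically instead of being dropped on every upload.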

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Yuriy Taraday
On Thu, May 1, 2014 at 8:17 PM, Salvatore Orlando sorla...@nicira.comwrote:

 The patch you've been looking at just changes the way in which SystemExit
 is used, it does not replace it with sys.exit.
 In my experience sys.exit was causing unit test threads to interrupt
 abruptly, whereas SystemExit was being caught by the test runner and
 handled.


According to https://docs.python.org/2.7/library/sys.html#sys.exit ,
sys.exit(n) is equivalent to raise SystemExit(n), as can be confirmed
in the source code here:
http://hg.python.org/cpython/file/2.7/Python/sysmodule.c#l206
If there's any difference in behavior, it seems to be a problem with the test
runner. For example, it might mock sys.exit somehow.
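
A quick sketch illustrating both points - the equivalence, and the fact that
sys.exit can be mocked precisely because it is an ordinary library call
(using unittest.mock, which on Python 2.7 is the standalone mock package):

```python
import sys
from unittest import mock

# sys.exit(n) is documented as equivalent to "raise SystemExit(n)".
try:
    sys.exit(3)
except SystemExit as e:
    caught_code = e.code

# Because sys.exit is an ordinary function, a test (or test runner)
# can replace it -- something a bare "raise SystemExit" cannot offer.
with mock.patch.object(sys, "exit") as fake_exit:
    sys.exit(1)  # swallowed by the mock; no exception is raised
exit_args = fake_exit.call_args[0]
```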

 I find therefore a bit strange that you're reporting what appears to be
 the opposite behaviour.

 Maybe if you could share the code you're working on we can have a look at
 it and see what's going on.


I'd suggest finding out what's the difference in both of your cases.

Coming back to topic, I'd prefer using standard library call because it can
be mocked for testing.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Yuriy Taraday
On Thu, May 1, 2014 at 10:41 PM, Paul Michali (pcm) p...@cisco.com wrote:

 ==
 FAIL: process-returncode
 tags: worker-1
 --
 *Binary content:*
 *  traceback (test/plain; charset=utf8)*
 ==
 FAIL: process-returncode
 tags: worker-0
 --
 *Binary content:*
 *  traceback (test/plain; charset=utf8)*


A process-returncode failure means that a child process (the subunit one)
exited with a nonzero code.


 It looks like there was some traceback, but it doesn’t show it. Any ideas
 how to get around this, as it makes it hard to troubleshoot these types of
 failures?


Somehow the traceback got MIME type test/plain. I guess testr doesn't print
this type of attachment to the screen. You can try to see what's in the
.testrepository dir, but I doubt there will be anything useful there.

I think this behavior is expected. The subunit process gets terminated because
of an uncaught SystemExit exception, and testr reports that as an error.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [qa] Debugging tox tests with pdb?

2014-05-07 Thread Yuriy Taraday
Hello, Eric.


On Wed, May 7, 2014 at 10:15 PM, Pendergrass, Eric
eric.pendergr...@hp.comwrote:

 Hi, I’ve read much of the documentation around Openstack tests, tox, and
 testr.  All I’ve found indicates debugging can be done, but only by running
 the entire test suite.



 I’d like the ability to run a single test module with pdb.set_trace()
 breakpoints inserted, then step through the test.  I’ve tried this but it
 causes test failures on a test that would otherewise succeed.  The command
 I use to run the test is similar to this:  tox -e py27 test_module_name



 Is there some way to debug single tests that I haven’t found?  If not, how
 is everyone doing test development without the ability to debug?


You can do it as easily as:
.tox/py27/bin/python -m testtools.run test_module_name
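
For illustration, here is a self-contained sketch using the stdlib unittest
runner (testtools.run drives tests the same way; a pdb.set_trace() dropped
into the test body gives a usable prompt when run like this, because nothing
captures stdin/stdout as testr does):

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_addition(self):
        # In a real debugging session the breakpoint would go here:
        # import pdb; pdb.set_trace()
        self.assertEqual(4, 2 + 2)

# Roughly what ".tox/py27/bin/python -m testtools.run test_module" does,
# but against the stdlib runner:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TestResult()
suite.run(result)
```

You can also pass a dotted path down to a single test case, e.g.
test_module_name.TestClass.test_case, to step through just one test.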

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Searching for docs reviews in Gerrit

2014-05-18 Thread Yuriy Taraday
Hello, Anne.


On Sat, May 17, 2014 at 7:03 AM, Anne Gentle a...@openstack.org wrote:

 file:section_networking-adv-config.xml
 project:openstack/openstack-manuals


As stated in the manual: "The regular expression pattern must start
with ^." That means it will only match files whose paths start with a string
matching this regex, not files whose paths merely contain it.


 nor does:
 file:docs/admin-guide-cloud/networking/section_networking-adv-config.xml
 project:openstack/openstack-manuals


You've misspelled the first dir name - it's "doc" - and with that fixed the
query works.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Divergence of *-specs style checking

2014-05-20 Thread Yuriy Taraday
Great idea!

On Mon, May 19, 2014 at 8:38 PM, Alexis Lee alex...@hp.com wrote:

 Potentially the TITLES structure could
 be read from a per-project YAML file and the test itself could be drawn
 from some common area?


I think you can get that data from the template.rst file by parsing it and
analyzing the tree.
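
Something along these lines could work as a first cut (a sketch that matches
underlined RST titles with a regex; the template content below is purely
illustrative, and a real implementation might prefer docutils for proper
tree analysis):

```python
import re

# Illustrative stand-in for a specs repo's template.rst.
TEMPLATE = """\
Problem description
=========================

Proposed change
=========================

Alternatives
-------------------
"""

def extract_titles(text):
    """Collect section titles by spotting their underline lines."""
    titles = []
    lines = text.splitlines()
    for i in range(len(lines) - 1):
        title, underline = lines[i], lines[i + 1]
        # An RST underline is a run of section punctuation at least as
        # long as the title above it.
        if (title.strip()
                and re.match(r'^[=\-~^"]{2,}$', underline)
                and len(underline) >= len(title.strip())):
            titles.append(title.strip())
    return titles
```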

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [all] Hide CI comments in Gerrit

2014-05-29 Thread Yuriy Taraday
On Tue, May 27, 2014 at 6:07 PM, James E. Blair jebl...@openstack.orgwrote:

 I wonder if it would
 be possible to detect them based on the presence of a Verified vote?


Not all CIs always add a vote. Only 3 or so of Neutron's over-9000 CIs put
their +/-1s on the change.

-- 

Kind regards, Yuriy.


Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-05-29 Thread Yuriy Taraday
On Wed, May 28, 2014 at 3:54 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, May 23, 2014 at 1:13 PM, Nachi Ueno na...@ntti3.com wrote:

 (2) Avoid duplication of works
 I have several experience of this.  Anyway, we should encourage people
 to check listed bug before
 writing patches.


 That's a very good point, but I don't think requiring a bug/bp for every
 patch is a good way to address this. Perhaps there is another way.


We can require developers to either link to a bp/bug or explicitly add a
"Minor-fix" line to the commit message.
I think that would force the commit author to at least think about whether
the commit is worth filing a bug/bp for.
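
For example, a trivial change's commit message might then look like this
(hypothetical; "Minor-fix" is the marker proposed here, not an existing
convention):

```
Fix typo in the quota exceeded error message

Minor-fix
```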

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Yuriy Taraday
This behavior of os.pipe() has changed in Python 3.x, so it won't be an
issue on newer Python (if only it were accessible to us).
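
For reference, the Python 3 change in question is PEP 446 (file descriptors
are non-inheritable by default since 3.4), which can be checked directly:

```python
import os

# Since Python 3.4 (PEP 446), descriptors returned by os.pipe() are
# created non-inheritable, so children spawned via exec() can't leak
# them the way Python 2's always-inheritable pipes could.
r, w = os.pipe()
flags = (os.get_inheritable(r), os.get_inheritable(w))
os.close(r)
os.close(w)
```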

From the looks of it, you can mitigate the problem by running libguestfs
requests in a separate process (multiprocessing.managers comes to mind).
This way the only descriptors the child process could theoretically inherit
would be the long-lived pipes to the main process, and even those won't leak
because they should be marked CLOEXEC before any libguestfs request is run.
The other benefit is that this separate process won't be busy opening and
closing tons of fds, so the inheritance problem is avoided.
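
A minimal sketch of that mitigation (function names are illustrative, not
nova's actual code; a real implementation would keep one long-lived helper
process, e.g. via multiprocessing.managers, instead of a pool per request):

```python
import multiprocessing

def guest_request(path):
    # Stand-in for a libguestfs call. In the suggested design this runs
    # in a dedicated worker process that holds only its long-lived pipe
    # back to the parent, so the parent's short-lived fds are never
    # visible to anything libguestfs exec()s.
    return {"path": path, "ok": True}

def run_in_worker(func, *args):
    # One-off single-process pool for the sketch; it forwards the call
    # and returns the result over the pool's internal pipe.
    pool = multiprocessing.Pool(processes=1)
    try:
        return pool.apply(func, args)
    finally:
        pool.close()
        pool.join()
```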


On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com wrote:

   Will this patch of Python fix your problem?
  http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

  Thank you for reading my diagram!  I need to clarify that this problem
 does not occur during data injection. Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses tpool
 to invoke libguestfs and one external command is executed in another green
 thread simultaneously. Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs routine
 in a green thread, rather than another native thread. But that will impact
 the performance very much, so I do not think it is an acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and the analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720
 ), if that's the case, concurrent multiple KVM instance spawns (*with
 both config drive and data injection enabled*) make the issue very likely
 to happen.
 As in the libvirt/driver.py _create_image method, right after making the
 ISO with cdb.make_drive, the driver will attempt data injection, which
 will call the libguestfs launch in another thread.

 It looks like there were also a couple of libguestfs hang issues on
 Launchpad, as below. I am not sure if libguestfs itself can have some
 mechanism to free/close the fds inherited from the parent process instead
 of requiring an explicit call to the tear down. Maybe open a defect against
 libguestfs to see what their thoughts are?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  From: Qin Zhao chaoc...@gmail.com
  Date: 2014-05-31 01:25
  To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

  When I run Icehouse code, I encountered a strange problem: the
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know its root cause. This
 bug should be a deadlock caused by a pipe fd leak. I drew a diagram to
 illustrate the problem:
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720

 However, I have not found a very good solution to prevent this deadlock.
 This problem is related to the Python runtime, libguestfs, and eventlet.
 The situation is a little complicated. Is there any expert who can help me
 look for a solution? I will appreciate your help!
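The failure mode can be sketched in a few lines (a deliberately simplified stand-in for the nova/libguestfs case, not the actual driver code): a pipe reader only sees EOF once *every* copy of the write end is closed, so a forked child that inherited the write end keeps the parent blocked.

```python
import os
import time

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: fork() always inherits open fds (CLOEXEC only takes effect
    # across exec()). The child holds `w` open for a while, the way an
    # unrelated long-running helper process would.
    os.close(r)
    time.sleep(2)
    os._exit(0)

os.close(w)               # parent closes its own write end...
# ...but this read still blocks until the child exits, because the
# child's inherited copy of `w` keeps the pipe open. With a child that
# never exits, this is the hang described in bug 1313477.
data = os.read(r, 1)      # unblocks with EOF only once the child exits
os.close(r)
os.waitpid(pid, 0)
```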

 --
 Qin Zhao






 --
 Qin Zhao








-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Yuriy Taraday
Please take a look at
https://docs.python.org/2.7/library/multiprocessing.html#managers -
everything is already implemented there.
All you need is to start one manager that would serve all your requests to
libguestfs. The implementation in the stdlib will provide you with all
exceptions and return values with minimal code changes on the Nova side.
Create a new Manager, register a libguestfs endpoint in it, and call
start(). It will spawn a separate process that will speak with the calling
process over a very simple RPC.
From the looks of it, all you need to do is replace the tpool.Proxy calls
in the VFSGuestFS.setup method with calls to this new Manager.
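A rough sketch of the shape this could take. FakeGuestFS is a hypothetical stand-in for the real guestfs.GuestFS handle (the actual change would register Nova's libguestfs wrapper instead):

```python
import multiprocessing
from multiprocessing.managers import BaseManager

class FakeGuestFS(object):
    # Hypothetical stand-in for guestfs.GuestFS, used here only to show
    # the Manager plumbing; it mimics the launch() call.
    def __init__(self):
        self._launched = False

    def launch(self):
        self._launched = True
        return "launched"

class GuestFSManager(BaseManager):
    """All registered endpoints live in one separate server process."""

GuestFSManager.register('GuestFS', FakeGuestFS)

# The fork context keeps this runnable as a plain script on any platform
# whose default start method is spawn.
mgr = GuestFSManager(ctx=multiprocessing.get_context('fork'))
mgr.start()           # spawns the server process; all libguestfs fds live there
g = mgr.GuestFS()     # proxy object: method calls cross the process boundary
result = g.launch()   # return values and exceptions propagate back
mgr.shutdown()
```

Since every libguestfs fd is opened in the server process, the main nova-compute process never has descriptors to leak into exec'd children in the first place.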


On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 Thanks for reading my bug! You are right: Python 3.3 or 3.4 should not
 have this issue, since they can secure the file descriptors. Before
 OpenStack moves to Python 3, we may still need a solution. Calling
 libguestfs in a separate process seems to be a way. That way, Nova code can
 close those fds by itself, not depending upon CLOEXEC. However, it will be
 an expensive solution, since it requires a lot of code changes. At least we
 need to write code to pass the return values and exceptions between these
 two processes. That will make the solution very complex. Do you agree?


 On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running libguestfs
 requests in a separate process (multiprocessing.managers comes to mind).
 This way the only descriptors child process could theoretically inherit
 would be long-lived pipes to main process although they won't leak because
 they should be marked with CLOEXEC before any libguestfs request is run.
 The other benefit is that this separate process won't be busy opening and
 closing tons of fds so the problem with inheriting will be avoided.


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

    Will this patch of Python fix your problem?
  http://bugs.python.org/issue7213


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

  Thank you for reading my diagram!  I need to clarify that this problem
 does not occur during data injection. Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code uses
 tpool to invoke libguestfs and one external command is executed in another
 green thread simultaneously. Please correct me if I am wrong.

 I think one simple solution for this issue is to call the libguestfs
 routine in a green thread, rather than another native thread. But that
 will impact the performance very much, so I do not think it is an
 acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and the analysis. According to the issue
 description and the scenario in which it happens (
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720
 ), if that's the case, concurrent multiple KVM instance spawns (*with
 both config drive and data injection enabled*) make the issue very
 likely to happen.
 As in the libvirt/driver.py _create_image method, right after making the
 ISO with cdb.make_drive, the driver will attempt data injection, which
 will call the libguestfs launch in another thread.

 It looks like there were also a couple of libguestfs hang issues on
 Launchpad, as below. I am not sure if libguestfs itself can have some
 mechanism to free/close the fds inherited from the parent process instead
 of requiring an explicit call to the tear down. Maybe open a defect
 against libguestfs to see what their thoughts are?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  From: Qin Zhao chaoc...@gmail.com
  Date: 2014-05-31 01:25
  To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

  When I run Icehouse code, I encountered a strange problem: the
 nova-compute service becomes stuck when I boot instances. I reported this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking about it for several days, I feel I know its root cause.
 This bug should be a deadlock caused by a pipe fd leak. I drew a diagram
 to illustrate the problem:
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720

 However, I have not found a very good solution to prevent this
 deadlock. This problem is related to the Python runtime, libguestfs, and
 eventlet. The situation is a little complicated. Is there any

  1   2   >