Re: [openstack-dev] [fuel-octane] Nominate Sergey Abramov to fuel-octane core

2016-07-21 Thread Yuriy Taraday
+1 We need more cores anyway ;)

On Thu, Jul 21, 2016 at 11:56 AM Oleg Gelbukh  wrote:

> +1 here
>
> Sergey's performance and quality of the code he submitted are impressive.
> Please, keep going.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Thu, Jul 21, 2016 at 10:21 AM, Artur Svechnikov <
> asvechni...@mirantis.com> wrote:
>
>> +1
>>
>> Best regards,
>> Svechnikov Artur
>>
>> On Thu, Jul 21, 2016 at 12:10 AM, Ilya Kharin 
>> wrote:
>>
>>> Hello,
>>>
>>> I would like to nominate Sergey Abramov to fuel-octane core due to his
>>> significant contribution to the project [1] and [2].
>>>
>>> Best regards,
>>> Ilya Kharin.
>>>
>>> [1] http://stackalytics.com/report/contribution/fuel-octane/90
>>> [2]
>>> http://stackalytics.com/?release=all=fuel-octane=marks_id=sabramov
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Gate broken by openstack/requirements

2016-05-16 Thread Yuriy Taraday
From the IRC discussion I gathered that the Telemetry team opted out of using
global-requirements and upper-constraints altogether a while ago, so I
understand why my proposal is not an option.

On Mon, May 16, 2016 at 3:33 PM Yuriy Taraday <yorik@gmail.com> wrote:

> Isn't it just a matter of updating upper-constraints? It seems the latest
> generate-constraints CR [0] that includes this update is stuck for some
> reason, so why not just update gnocchiclient in upper-constraints separately
> instead of dropping it from the global-requirements guard altogether? The
> latter seems like an overreaction, really.
>
> [0] https://review.openstack.org/316350
>
> On Mon, May 16, 2016 at 3:21 PM Davanum Srinivas <dava...@gmail.com>
> wrote:
>
>> Julien,
>>
>> Cleaned up g-r/u-c in:
>> https://review.openstack.org/#/c/316356/
>>
>> -- Dims
>>
>> On Mon, May 16, 2016 at 6:43 AM, Julien Danjou <jul...@danjou.info>
>> wrote:
>> > Hi folks,
>> >
>> > Just to let you know that one of our telemetry test jobs is broken
>> > because of openstack/requirements capping gnocchiclient to 2.3.0 (for no
>> > good reason obviously).
>> >
>> > Until this cap is moved to 2.3.1 (that fixes the gnocchiclient bug we're
>> > hitting) or gnocchiclient is removed from openstack/requirements, we're
>> > stuck.
>> >
>> > So either of these reviews is required:
>> >
>> >   https://review.openstack.org/#/c/316350/
>> >   https://review.openstack.org/#/c/316356/
>> >
>> > No need for recheck until then.
>> >
>> > Cheers,
>> > --
>> > Julien Danjou
>> > -- Free Software hacker
>> > -- https://julien.danjou.info
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Gate broken by openstack/requirements

2016-05-16 Thread Yuriy Taraday
Isn't it just a matter of updating upper-constraints? It seems the latest
generate-constraints CR [0] that includes this update is stuck for some
reason, so why not just update gnocchiclient in upper-constraints separately
instead of dropping it from the global-requirements guard altogether? The
latter seems like an overreaction, really.

[0] https://review.openstack.org/316350

On Mon, May 16, 2016 at 3:21 PM Davanum Srinivas  wrote:

> Julien,
>
> Cleaned up g-r/u-c in:
> https://review.openstack.org/#/c/316356/
>
> -- Dims
>
> On Mon, May 16, 2016 at 6:43 AM, Julien Danjou  wrote:
> > Hi folks,
> >
> > Just to let you know that one of our telemetry test jobs is broken
> > because of openstack/requirements capping gnocchiclient to 2.3.0 (for no
> > good reason obviously).
> >
> > Until this cap is moved to 2.3.1 (that fixes the gnocchiclient bug we're
> > hitting) or gnocchiclient is removed from openstack/requirements, we're
> > stuck.
> >
> > So either of these reviews is required:
> >
> >   https://review.openstack.org/#/c/316350/
> >   https://review.openstack.org/#/c/316356/
> >
> > No need for recheck until then.
> >
> > Cheers,
> > --
> > Julien Danjou
> > -- Free Software hacker
> > -- https://julien.danjou.info
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Liberty backward compatibility jobs are bound to fail

2016-02-10 Thread Yuriy Taraday
Hello.

I've noticed once again that the job
"gate-tempest-dsvm-neutron-src-oslo.concurrency-liberty" is always failing.
After looking at the failure I found that the core issue is a
ContextualVersionConflict [0]. We do have conflicting requirements for
oslo.utils here: the Liberty upper-constraints pin oslo.utils to version
3.2.0 [1], while in master oslo.concurrency requires at least 3.4.0, as
stated in global-requirements [2].

Other projects have similar issues too:
- oslo.utils fails [3] because of debtcollector 1.1.0 [4] while it requires
at least 1.2.0 in master [5];
- oslo.messaging fails the same way because of debtcollector [6];
- etc.

Looks like a lot of wasted cycles to me.

It seems we need to either bump stable/liberty upper-constraints to match
current requirements of modern oslo libraries or somehow adapt backward
compatibility jobs to ignore upper-constraints for these libraries. Of
course we could also stop running these jobs altogether for projects that
have conflicting dependencies, but I think the reason we have them in the
first place is that we want to see that we can use new oslo libraries with
older OpenStack releases.

[0]
http://logs.openstack.org/83/273083/5/check/gate-tempest-dsvm-neutron-src-oslo.concurrency-liberty/369f8b7/logs/apache/keystone.txt.gz#_2016-01-28_14_49_01_352371
[1]
https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L202
[2]
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L110
[3]
http://logs.openstack.org/10/276510/2/check/gate-tempest-dsvm-neutron-src-oslo.utils-liberty/717ce34/logs/apache/keystone.txt.gz#_2016-02-05_02_11_35_72
[4]
https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L90
[5]
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L28
[6]
http://logs.openstack.org/76/278276/2/check/gate-tempest-dsvm-neutron-src-oslo.messaging-liberty/91cb3e4/logs/apache/keystone.txt.gz#_2016-02-10_10_05_29_293781
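
For reference, here's a minimal sketch (my own illustration, not taken from
the gate logs) of how this kind of conflict surfaces when entry points are
loaded:

    # A minimal sketch, not from the gate logs: pkg_resources resolves
    # oslo.concurrency's declared requirements against the installed packages
    # and raises a (Contextual)VersionConflict when the pinned oslo.utils is
    # too old for what master oslo.concurrency requires.
    import pkg_resources

    try:
        pkg_resources.require("oslo.concurrency")
    except pkg_resources.VersionConflict as exc:
        # With oslo.utils pinned to 3.2.0 by upper-constraints and master
        # oslo.concurrency requiring >=3.4.0, we end up here.
        print("Conflicting requirements: %s" % exc)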
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [fuelclient] Pre-release versions of fuelclient for testing purposes

2016-01-21 Thread Yuriy Taraday
By the way, it would be very helpful for testing external tools if we had a
7.0.1 release on PyPI as well. It seems python-fuelclient somehow ended up
with a "stable/7.0.1" branch instead of a "7.0.1" tag.

On Wed, Jan 20, 2016 at 2:49 PM Roman Prykhodchenko  wrote:

> Releasing a beta version sounds like a good plan but does OpenStack Infra
> actually support this?
>
> > On 20 Jan 2016 at 12:05, Oleg Gelbukh wrote:
> >
> > Hi,
> >
> > Currently we're experiencing issues with Python dependencies of our
> package (fuel-octane), specifically between fuelclient's dependencies and
> keystoneclient dependencies.
> >
> > New keystoneclient is required to work with the new version of Nailgun
> due to introduction of SSL in the latter. On the other hand, fuelclient is
> released along with the main release of Fuel, and the latest version
> available from PyPI is 7.0.0, and it has very old dependencies (based on
> packages available in centos6/python26).
> >
> > The solution I'd like to propose is to release beta version of
> fuelclient (8.0.0b1) with updated requirements ASAP. With --pre flag to
> pip/tox, this will allow running unit tests against the proper set of
> requirements. On the other hand, it will not break the users consuming the
> latest stable (7.0.0) version with old requirements from PyPI.
> >
> > Please, share your thoughts and considerations. If no objections, I will
> create a corresponding bug/blueprint against fuelclient to be fixed in the
> current release cycle.
> >
> > --
> > Best regards,
> > Oleg Gelbukh
> > Mirantis
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Python 3.5 is now the default Py3 in Debian Sid

2016-01-14 Thread Yuriy Taraday
On Thu, Jan 14, 2016 at 5:48 PM Jeremy Stanley  wrote:

> On 2016-01-14 09:47:52 +0100 (+0100), Julien Danjou wrote:
> [...]
> > Is there any plan to add Python 3.5 to infra?
>
> I expect we'll end up with it shortly after Ubuntu 16.04 LTS
> releases in a few months (does anybody know for sure what its
> default Python 3 is slated to be?).
>

It's 3.5.1 already in Xenial: http://packages.ubuntu.com/xenial/python3
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zuul][infra] Synchronizing state of Zuul with Gerrit

2016-01-13 Thread Yuriy Taraday
Today we had a change [0] that somehow wasn't being picked up by Zuul into the
gate queue although it had Workflow+1 and Verified+1. Only after I added
another Workflow+1 did it get Zuul's attention. I don't know what exactly
happened, but it seems Zuul didn't notice (or lost) either the initial
Verified+1 or the Workflow+1 from dims, and so later rechecks had no effect.

I wonder if we need another step for synchronizing state between Zuul and
Gerrit to avoid such issues. I think we could benefit from Zuul
periodically querying Gerrit for changes that should be in its queues and
ensuring that they are where they are supposed to be. This could happen once
an hour or even less often to avoid any visible impact on the Gerrit side.

I think this can be implemented by adding a Gerrit query to every pipeline
in layout.yaml and running it with a cron trigger.

[0] https://review.openstack.org/265982
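
To make the idea a bit more concrete, here's a rough sketch of such a
reconciliation pass (my own illustration, not existing Zuul code;
enqueued_change_ids is a hypothetical stand-in for whatever Zuul already
tracks internally):

    # A rough sketch, not existing Zuul code: ask Gerrit for open changes that
    # are already approved and verified and compare them with what the
    # scheduler believes is enqueued.
    import json

    import requests

    GERRIT = "https://review.openstack.org"

    def approved_open_changes(project):
        query = ("project:%s status:open label:Workflow+1 label:Verified+1"
                 % project)
        resp = requests.get("%s/changes/" % GERRIT, params={"q": query})
        resp.raise_for_status()
        # Gerrit prepends ")]}'" to JSON responses, so skip the first line.
        return json.loads(resp.text.split("\n", 1)[1])

    def missing_from_gate(project, enqueued_change_ids):
        return [change for change in approved_open_changes(project)
                if change["id"] not in enqueued_change_ids]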
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Solar] SolarDB/ConfigDB place in Fuel

2015-12-22 Thread Yuriy Taraday
Hello, everybody.

It's a week-old thread and I should've jumped in earlier. Better late than
never.

On Wed, Dec 16, 2015 at 2:04 AM Dmitriy Shulyak 
wrote:

> Hello folks,
>
> This topic is about configuration storage which will connect data sources
> (nailgun/bareon/others) and orchestration. And right now we are developing
> two projects that will overlap a bit.
>
> I understand there is not enough context to dive into this thread right
> away, but i will appreciate if those people, who participated in design,
> will add their opinions/clarifications on this matter.
>

Let's try to add more context here. I see a lot of confusion around this
matter. I think most of it comes from not having a single complete source
of data about both approaches. I'll try to summarize the problem and
outline the proposed solutions as well as the state of implementation of each.

== The problems. ==

Currently we have 2 main problems in question:
1. How to store data in Fuel so that it can come from different sources and
be consumed by different pieces of software.
2. How to integrate Solar with Fuel and allow it to consume data provided
by Nailgun (currently) or whatever else (if we get #1 implemented).

I was assigned (driven, actually) to look at problem #1, so starting from a
number of ideas from Oleg and others on my team, and after some discussion
with other people involved in Fuel architecture, I finalized the scope and
outlined the architecture of the service we would need to solve it in [0]. I
didn't mean to step on anyone's toes, but later I was shown that a similar
issue is being solved by the SolarDB initiative that is being developed in
the scope of the integration between Solar and Fuel.

== The solutions. ==

= Config Service =

(because ConfigDB became an overused and ill-defined term)

Config Service is conceived as a way to link Nailgun (and other sources in
the future) to deployment tasks (and other consumers in the future) in a
consistent and verifiable way. Someone lets the service know about the
structure of the data that components provide and the structure of the data
that other components consume, thus declaring the internal data flow between
sources and consumers of data. Then, for every environment, Nailgun (for now -
it can be another service, and it can be changed to a pull model later) feeds
all necessary data into the service and triggers deployment tasks that consume
data from the service in whatever way suits them best. If we need to feed this
data into some external service (a Puppet master, for example), we're free to
do so as long as we define the data structure that the consumer expects.

= SolarDB =

(mainly based on [1] and presentations seen earlier, please correct me if
smth wrong)

SolarDB includes an active component: Data Processors (DPs). DPs fetch data
from wherever they're intended to (Nailgun for starters, any other source
in the future) and store it as Solar's Data Resources (DRs). DRs are then
translated to concrete data for other Solar Resources (Executable
Resources, ERs); this data is preprocessed by the Policy Engine and converted
to a set of calls to mutators that change ER data in Solar's internal
database in a way that lets Solar decide what should be done to change the
actual state of the environment.

== State of implementations ==

= Config Service =

I plan to show a PoC of Config Service integration before the end of this
year. Coding of the service itself is almost at the finish line at [2];
integration with Nailgun and Astute/Puppet will take most of the remaining
time.

= SolarDB =

A PoC with Data Processors and Data Resources has been done with a simple
cluster architecture (I don't know the date here). The Policy Engine is in the
early stages of development.

== Main differences ==

I'll try to list the main differences along with what seem to me to be pros
and cons for both sides (major points taken from Dmitriy's original email).

1. Config Service (CS) is initially planned as a passive element of
integration, while SolarDB (SD) has DPs that actively fetch data from the
sources.

CS+: simpler implementation, can get PoC done fast
CS+: easier to integrate with current Fuel architecture (actors remain
actors there)
CS+: can be easily modified to add active components to the service (pull
model)
CS-: brings in another component into the stack
CS-: requires other components to be changed to push data into the service
SD+: doesn't require any changes in other components
SD-: requires Solar to store fetched data
SD-: brings in another component into the stack

Overall I think that this point is not crucial here: either approach can be
converted into the other without much effort.

2. Config Service is designed as an independent service, while SolarDB is tied
to the Solar infrastructure.

CS+: defines clear interface between components
CS+: doesn't require Solar at all, so we can land and use it independently
CS-: duplicates data between Config Service and Solar
SD+: integrated 

Re: [openstack-dev] [nova] proposed new compute review dashboard

2015-12-18 Thread Yuriy Taraday
On Fri, Dec 18, 2015 at 5:52 PM Matt Riedemann 
wrote:

> This has come up before, but I think we should put something like this
> in the nova IRC channel topic. Swift has something like this in their
> IRC channel topic and it's very easy for someone new to the channel to
> go in and see what their review priorities are (I'd consider myself a
> new person to Swift). Even if the dashboard above isn't perfect for
> everyone, I think we should start with *something* and this is a good
> start. Plus it doesn't require reviewing it to add it to the channel
> topic, so (minimal) bikeshedding! :)
>

Even better, you can add it to the openstack/nova project in Gerrit under a
special ref, "refs/meta/dashboards/review-inbox" for example. Then it will be
discoverable from Gerrit and have a nice URL (like [0], although that one is
not project-specific). See [1] for details.

IIRC, this is a "new" feature in our Gerrit (I mean it has been in Gerrit
for ages, but not in the old version we had).

[0]
https://review.openstack.org/#/projects/openstack/nova,dashboards/important-changes:important-changes-dashboard
[1]
https://review.openstack.org/Documentation/intro-project-owner.html#dashboards
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-13 Thread Yuriy Taraday
On Sun, Dec 13, 2015 at 12:14 PM Shinobu Kinjo  wrote:

> What is the current status of this failure?
>
>  > 2015-12-13 08:55:04.863 | ValueError: need more than 1 value to unpack
>

It shouldn't reappear in the gate because the CI images have been reverted to
tox 2.2.1.
It can still be reproduced locally if one has tox 2.3.0 installed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-12 Thread Yuriy Taraday
Tempest jobs in all our projects seem to have become broken after the tox
2.3.0 release yesterday. It's a regression in tox itself:
https://bitbucket.org/hpk42/tox/issues/294

I suggest we add tox to upper-constraints to avoid this breakage now and in
the future: https://review.openstack.org/256947

Note that we install tox in the gate with no regard to global-requirements, so
only upper-constraints can save us from broken tox releases.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-12 Thread Yuriy Taraday
Hi, Jeremy.

On Sat, Dec 12, 2015 at 8:27 PM Jeremy Stanley  wrote:

> On 2015-12-12 16:51:09 + (+), Jeremy Stanley wrote:
> [...]
> > No, it won't, since upper-constraints is merely used to constrain
> > requirements lists.
>
> I take that back, the pip_install function in DevStack applies
> upper-constraints.txt on anything it installs, but regardless it's
> misleading to try to pin it in upper-constraints.txt because that
> won't help any of the numerous other jobs which may use constraints
> but rely on preinstalled tox.


I think it would be a good first step in the right direction. For example,
with today's issue it would break the gate only for tempest itself, since all
other jobs would have the preinstalled tox reverted to the one mentioned in
upper-constraints.


> Also I've confirmed that tempest jobs
> do still seem to be working fine in our CI, and don't seem to be
> unconditionally upgrading the preinstalled tox version.


Pip doesn't upgrade tox if it is already installed, but it will obey
constraints if they're provided. That's why it works with the current reverted
image.

> For the
> benefit of people running DevStack themselves downstream, Ryan's
> https://review.openstack.org/256620 looks like a more sensible
> temporary workaround.
>

Won't they use constraints too? I think we cover the DevStack issue with the
upper-constraints change as a more permanent solution.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-12 Thread Yuriy Taraday
On Sat, Dec 12, 2015 at 10:27 PM Jeremy Stanley <fu...@yuggoth.org> wrote:

> On 2015-12-12 19:00:23 + (+0000), Yuriy Taraday wrote:
> > I think it should be a good first step in right direction. For example,
> > with today's issue it would break gate for tempest itself only since all
> > other jobs would have preinstalled tox reverted to one mentioned in
> > upper-constraints.
> [...]
>
> Other way around. It would force DevStack to downgrade tox if the
> existing version on the worker were higher. Pretty much no other
> jobs install tox during the job, so they rely entirely on the one
> present on the system being correct and an entry for tox in
> upper-constraints.txt wouldn't help them at all, whether they're
> using that file to constrain their requirements lists or not (since
> tox is not present in any of our projects' requirements lists).
>

By "other" jobs I meant all jobs that use devstack to install tempest.
That seems to be all jobs in all projects except probably tempest itself.

As for jobs that don't use devstack but only run tox, I suggest we add a
step to adjust the tox version according to upper-constraints as well.

Also, the constraints list is built from pip installing everything
> in global-requirements.txt into a virtualenv, so if tox is not a
> direct or transitive requirement then it will end up dropped from
> upper-constraints.txt on the next automated proposal in review.
>

Ok, will fix that in my CR.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Configuration management for Fuel 7.0

2015-12-03 Thread Yuriy Taraday
Hi, Roman.

On Thu, Dec 3, 2015 at 5:36 PM Roman Sokolkov 
wrote:

> I've selected 13 real-world tasks from customer (i.e. update flag X in
> nova.conf):
> - 6/13 require fuel-library patching (or is #2 unusable)
> - 3/13 are OK and can be done with #2
> - 4/13 can be done with some limitations.
>
> If needed i'll provide details.
>
> Rough statistics is that *only ~20-25% of use cases can be done with #2*.
>
> Let me give a very popular use case that will fail with #2. Assume we're
> executing the whole task graph every two hours.
> We want to change nova.conf "DEFAULT/amqp_durable_queues" from False to
> True.
>
> There is no parameter in hiera for "amqp_durable_queues". We have two
> solutions here (both are bad):
> 1) Redefine "DEFAULT/amqp_durable_queues" = True in plugin task. What will
> happen on the node. amqp_durable_queues will continue changing value
> between True and False on every execution. We shouldn't do it this way.
> 2) Patch fuel-library. Value for amqp_durable_queues should be taken from
> hiera. This is also one way ticket.
>

You are describing one of the use cases we want to cover in the future with
the Config Service. If we store all configuration variables consumed by all
deployment tasks in the service, one will be able to change (override) a value
in that same service and let deployment tasks apply the config change on the
nodes.

This would require support from the deployment side (the source of all config
values becomes a service, not a static file) and from Nailgun (all data
should be stored in the service). In the future this approach will allow us
to clarify which value goes where and to define new values and override old
ones in a clearly manageable fashion.

The Config Service would also allow us to feed values defined outside of
Nailgun into deployment tasks, for example from external CM services (e.g. a
Puppet Master).
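
To make the override flow above more concrete, here's a rough sketch of how a
deployment task could consume such a value. The Config Service doesn't exist
yet, so the endpoint, client calls and key names below are purely
hypothetical:

    # Purely hypothetical sketch: none of these names exist yet. The point is
    # that a deployment task reads values from the service instead of a static
    # hiera file, so an operator override of amqp_durable_queues is picked up
    # on the next run of the task graph.
    import requests

    CONFIG_SERVICE = "http://config-service.example:8080"  # hypothetical

    def get_value(env_id, key):
        # The service would merge defaults pushed by Nailgun with overrides.
        resp = requests.get("%s/environments/%s/values/%s"
                            % (CONFIG_SERVICE, env_id, key))
        resp.raise_for_status()
        return resp.json()["value"]

    def set_override(env_id, key, value):
        # e.g. set_override(1, "nova.DEFAULT.amqp_durable_queues", True)
        resp = requests.put("%s/environments/%s/overrides/%s"
                            % (CONFIG_SERVICE, env_id, key),
                            json={"value": value})
        resp.raise_for_status()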
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Upgrade to Gerrit 2.11

2015-10-15 Thread Yuriy Taraday
On Wed, Oct 14, 2015 at 3:08 AM Zaro  wrote:

> Hello All,
>
> The openstack-infra team would like to upgrade from our Gerrit 2.8 to
> Gerrit 2.11.  We are proposing to do the upgrade shortly after the
> Mitaka summit.  The main motivation behind the upgrade is to allow us
> to take advantage of some of the new REST api, ssh commands, and
> stream events features.  Also we wanted to stay closer to upstream so
> it will be easier to pick up more recent features and fixes.
>
> We want to let everyone know that there is a big UI change in Gerrit
> 2.11.  The change screen (CS), which is the main view for a patchset,
> has been completely replaced with a new change screen (CS2).  While
> Gerrit 2.8 contains both old CS and CS2, I believe everyone in
> Openstack land is really just using the old CS.  CS2 really wasn't
> ready in 2.8 and really should never be used in that version.  The CS2
> has come a long way since then and many other big projects have moved
> to using Gerrit 2.11 so it's not a concern any longer.  If you would
> like a preview of Gerrit 2.11 and maybe help us test it, head over to
> http://review-dev.openstack.org.  If you are very opposed to CS2 then
> you may like Gertty (https://pypi.python.org/pypi/gertty) instead.  If
> neither option works for you then maybe you can help us create a new
> alternative :)
>
> We are soliciting feedback so please let us know what you think.
>

I think that's great news!
I've been using CS2 since it became an option and it (mostly) worked
fine for me, so I've been waiting for this upgrade for a long time.

Where should I direct issues I find on review-dev.openstack.org? I've found
two so far:
- the "Unified diff" button in the diff view (next to the navigation arrows)
always leads to an Internal Server Error;
- cgit links on the change screen have "%2F" in the URL instead of "/", which
leads to Apache's Not Found page instead of cgit's.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Yuriy Taraday
On Wed, Oct 7, 2015 at 12:51 AM Monty Taylor  wrote:

> On 10/06/2015 10:52 AM, Sebastian Kalinowski wrote:
> > I've already wrote in the review that caused this thread that I do not
> want
> > to blindly follow rules for using one or another. We should always
> consider
> > technical requirements. And I do not see a reason to leave py.test (and
> > nobody
> > show me such reason) and replace it with something else.
>
> Hi!
>
> The reason is that testrepository is what OpenStack uses and as I
> understand it, Fuel wants to join the Big Tent.
>

It saddens me that once again the choice of a library is being forced upon a
project based on what other projects use, not on technical merit. py.test
is more than just a (far better) test runner: it allows writing tests with
less boilerplate and more power. While its features are not used extensively
in the Fuel code, switching to testr would still require changing test
logic, which is generally bad (that's why mox is still in use in OpenStack).
Can we avoid that?

> The use of testr is documented in the Project Testing Interface:
>
>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/project-testing-interface.rst#n78
>
> There are many reasons for it, but in large part we are continually
> adding more and more tools to process subunit output across the board in
> the Gate. subunit2sql is an important one, as it will be feeding into
> expanded test result dashboards.
>
> We also have zuul features in the pipeline to be able to watch the
> subunit streams in real time to respond more quickly to issues in test
> runs.
>

> We also have standard job builders based around tox and testr. Having
> project divergence in this area is a non-starter when there are over 800
> repositories.
>

So it seems that all that's needed to keep py.test as an option is a py.test
plugin that generates a subunit stream, like Robert said - is that right?
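
To illustrate, here's a rough sketch (not an existing plugin) of what such a
py.test plugin could look like, assuming python-subunit's StreamResultToBytes
for the v2 stream format:

    # conftest.py - a rough sketch, not an existing plugin: emit a subunit v2
    # stream from py.test results so that subunit2sql and friends can consume
    # it. Error details/attachments are omitted for brevity.
    import sys

    import subunit

    _stream = subunit.StreamResultToBytes(getattr(sys.stdout, "buffer",
                                                  sys.stdout))

    def pytest_runtest_logstart(nodeid, location):
        _stream.status(test_id=nodeid, test_status="inprogress")

    def pytest_runtest_logreport(report):
        # Emit the final status once per test: after "call", or after "setup"
        # if the test never ran (setup failure or skip).
        if report.when == "call" or (report.when == "setup"
                                     and not report.passed):
            if report.passed:
                status = "success"
            elif report.skipped:
                status = "skip"
            else:
                status = "fail"
            _stream.status(test_id=report.nodeid, test_status=status)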

> In short, while I understand that this seems like an area where a
> project can do whatever it wants to, it really isn't. If it's causing
> you excessive pain, I recommend connecting with Robert on ways to make
> improvements to testrepository. Those improvements will also have the

effect of improving life for the rest of OpenStack, which is also a
> great reason why we all use the same tools rather than foster an
> environment of per-project snowflakes.
>

I wouldn't call py.test a snowflake. It's a very well-established testing
tool and OpenStack projects could benefit from using it if we integrate it
with testr well.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-07 Thread Yuriy Taraday
On Wed, Oct 7, 2015 at 3:14 AM Monty Taylor <mord...@inaugust.com> wrote:

> On 10/06/2015 06:01 PM, Thomas Goirand wrote:
> > On 10/06/2015 01:14 PM, Yuriy Taraday wrote:
> >> On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko <m...@romcheg.me
> >> <mailto:m...@romcheg.me>> wrote:
> >>
> >>  Atm I have the following pros. and cons. regarding testrepository:
> >>
> >>  pros.:
> >>
> >>  1. It’s ”standard" in OpenStack so using it gives Fuel more karma
> >>  and moves it more under big tent
> >>
> >>
> >> I don't think that big tent model aims at eliminating diversity of tools
> >> we use in our projects. A collection of web frameworks used in big tent
> >> is an example of that.
> >
> >  From the downstream distro point of view, I don't agree in general, and
> > with the web framework in particular. (though it's less a concern for
> > the testr vs pbr). We keep adding dependencies and duplicates, but never
> > remove them. For example, tablib and suds/sudsjurko need to be removed
> > because they are not maintainable, there's not much work to do so, but
> > nobody does the work...
>
> The Big Tent has absolutely no change in opinion about eliminating
> diversity of tools. OpenStack has ALWAYS striven to reduce diversity of
> tools. Big Tent applies OpenStack to more things that request to be part
> of OpenStack.
>
> Nothing has changed in the intent.
>
> Diversity of tools in a project this size is a bad idea. Always has
> been. Always will be.
>
> The amount of web frameworks in use is a bug.
>

I'm sorry, that was my mistake. I just can't remember any project that was
declined a place under the big tent (or in the integrated release) because of
a library it uses.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-06 Thread Yuriy Taraday
On Mon, Oct 5, 2015 at 5:40 PM Roman Prykhodchenko  wrote:

> Atm I have the following pros. and cons. regarding testrepository:
>
> pros.:
>
> 1. It’s ”standard" in OpenStack so using it gives Fuel more karma and
> moves it more under big tent
>

I don't think the big tent model aims at eliminating the diversity of tools we
use in our projects. The collection of web frameworks used under the big tent
is an example of that.

2. It’s in global requirements, so it doesn’t cause dependency hell
>

That can be solved by adding py.test to openstack/requirements.

cons.:
> 1. Debugging is really hard
>

I'd say that debugging here is not the right term. Every aspect of
developing with testr is harder than with py.test. py.test tends to just
work where testr needs additional tools and effort.

In general I don't see any benefit the project would get from using testr,
while its limitations will bite developers at every turn.
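
To make the boilerplate point concrete, here's a tiny example of the same
check written both ways (my own illustration, not Fuel code):

    # My own illustration, not Fuel code: the same checks written both ways.
    # With py.test a plain function plus parametrization is enough; the
    # unittest/testr style needs a class, inheritance and explicit asserts.
    import unittest

    import pytest

    def normalize(name):
        return name.strip().lower()

    @pytest.mark.parametrize("raw,expected", [
        ("  Fuel ", "fuel"),
        ("OSLO", "oslo"),
    ])
    def test_normalize(raw, expected):
        assert normalize(raw) == expected

    class TestNormalize(unittest.TestCase):
        def test_strips_and_lowers(self):
            self.assertEqual(normalize("  Fuel "), "fuel")

        def test_lowercases(self):
            self.assertEqual(normalize("OSLO"), "oslo")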
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [oslo.privsep] Any progress on privsep?

2015-09-20 Thread Yuriy Taraday
Hello, Li.

On Sat, Sep 19, 2015 at 6:15 AM Li Ma  wrote:

> Thanks for your reply, Gus. That's awesome. I'd like to have a look at
> it or test if possible.
>
> Any source code available in the upstream?
>

You can find the latest (almost approved, from the looks of it) version of the
blueprint here: https://review.openstack.org/204073
It links to the current implementation (not the API described in the
blueprint, though): https://review.openstack.org/155631
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Let's change the way we distribute Fuel (was: [Fuel] Remove MOS DEB repo from master node)

2015-09-10 Thread Yuriy Taraday
Hello, thread!

First let me address some of the very good points Alex raised in his email.

On Wed, Sep 9, 2015 at 10:33 PM Alex Schultz  wrote:

> Fair enough, I just wanted to raise the UX issues around these types of
> things as they should go into the decision making process.
>

UX issues are something we should definitely address, even for ourselves: the
number of things that need to happen to deploy master with just one small
change is enormous.


> Let me explain why I think having local MOS mirror by default is bad:
>> 1) I don't see any reason why we should treat MOS  repo other way than
>> all other online repos. A user sees on the settings tab the list of repos
>> one of which is local by default while others are online. It can make user
>> a little bit confused, can't it? A user can be also confused by the fact,
>> that some of the repos can be cloned locally by fuel-createmirror while
>> others can't. That is not straightforward, NOT fuel-createmirror UX.
>>
>
> I agree. The process should be the same and it should be just another
> repo. It doesn't mean we can't include a version on an ISO as part of a
> release.  Would it be better to provide the mirror on the ISO but not have
> it enabled by default for a release so that we can gather user feedback on
> this? This would include improved documentation and possibly allowing a
> user to choose their preference so we can collect metrics?
>

I think that instead of relying on the average user of a spherical Fuel, we
should let the user decide what goes onto the ISO.

2) Having local MOS mirror by default makes things much more convoluted. We
>> are forced to have several directories with predefined names and we are
>> forced to manage these directories in nailgun, in upgrade script, etc. Why?
>> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
>> to MOS, which is not true. It is possible to implement really flexible
>> delivery scheme, but we need to think of these things as they are
>> independent.
>>
>
> I'm not sure what you mean by this. Including a point in time copy on an
> ISO as a release is a common method of distributing software. Is this a
> messaging thing that needs to be addressed? Perhaps I'm not familiar with
> people referring to the ISO as being MOS.
>

It is so common that some people think it's very broken. But we can fix
that.

For large users it is easy to build custom ISO and put there what they need
>> but first we need to have simple working scheme clear for everyone. I think
>> dealing with all repos the same way is what is gonna makes things simpler.
>>
>
> Who is going to build a custom ISO? How does one request that? What
> resources are consumed by custom ISO creation process/request? Does this
> scale?
>

How about the user building the ISO on their own workstation?

This thread is not about internet connectivity, it is about aligning things.
>>
>
> You are correct in that this thread is not explicitly about internet
> connectivity, but they are related. Any changes to remove a local
> repository and only provide an internet based solution makes internet
> connectivity something that needs to be included in the discussion.  I just
> want to make sure that we properly evaluate this decision based on end user
> feedback not because we don't want to manage this from a developer
> standpoint.
>

We can use Internet connectivity not only in the target DC.

Now what do I mean by all that? Let's make a Fuel distribution that's easier
to develop and distribute while making it more comfortable to use in the
process.

As Alex pointed out, the common way to distribute an OS is to put some
number of packages from a snapshot of the golden repo on an ISO and let the
user install that. Let's call it the DVD way (although there was a time when
an OS could fit on a CD). The other, less common way of distributing an OS is
a small minimal ISO plus an online repo to install everything else. Let's call
it the MiniCD way.

Fuel is now using the DVD way: we put everything the user will ever need onto
an ISO and give it to the user. Vladimir's proposal was to use something
similar to the MiniCD way: put only Fuel on the ISO and keep an online repo
running.

Note that I'll speak of Fuel as the installer people put on the MiniCD. It's a
bit bigger, but it deploys clouds, not just separate machines. "Packages and
OS" then translates to everything needed to deploy OpenStack: packages and
deploy scripts (puppet manifests, which could be packaged as well). We could
apply the same logic to the distribution of Fuel itself, but let's not
get into that right now.

Let's compare these ways from the distributor (D) and user (U) points of view.

DVD way.
Pros:
- (D) a single piece to deliver to user;
- (D,U) a snapshot of repo put on ISO is easier to cover with QA and so
it's better tested;
- (U) one-time download for everything;
- (U) no need for Internet connectivity when you're installing OS;
- (U) you can store ISO and reuse it any number of times.
Cons:
- (D) you still have to maintain online repo for updates;
- (D,U) it's 

Re: [openstack-dev] [Fuel] Let's change the way we distribute Fuel (was: [Fuel] Remove MOS DEB repo from master node)

2015-09-10 Thread Yuriy Taraday
On Thu, Sep 10, 2015 at 4:43 PM Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> > Vladimir's proposal was to use smth similar to MiniCD
>
> Just to clarify. My proposal is to remove DEB MOS repo from the master
> node by default and thus from the ISO. That is it.
> My proposal does not assume having internet connection during installing
> the master node. Fuel RPM packages together with their dependencies are
> still there on ISO, thus the master node can be installed w/o internet
> connection. Cloud/OpenStack can not be deployed out of the box anyway. It
> is because we don't put Ubuntu upstream on ISO. Anyway a user is forced to
> make Ubuntu upstream mirror available on the master node (cloning it
> locally or via internet connection).
>
> IMO, Fuel in this case is like a browser or bittorrent client. Packages
> are available on Linux DVDs but it makes little sense to use them w/o
> internet connection.
>
>
> Vladimir Kozhukalov
>
> On Thu, Sep 10, 2015 at 2:53 PM, Yuriy Taraday <yorik@gmail.com>
> wrote:
>
>> Note that I'll speak of Fuel as an installer people put on MiniCD. It's a
>> bit bigger, but it deploys clouds, not just separate machines. Packages and
>> OS then translate to everything needed to deploy OpenStack: packages and
>> deploy scripts (puppet manifests, could be packaged as well). We could
>> apply the same logic to distribution of Fuel itself though, but let's not
>> get into it right now.
>>
>
As I've mentioned later in the initial mail (see above), I'm not talking
about using this approach to deploy Fuel itself (although it'd be great if we
did). I'm talking about using it to first install Fuel and then deploy MOS. We
can download some fixed part of the image that contains everything needed to
deploy Fuel and add all necessary repos and manifests to it, for example.

So to repeat the analogy, Fuel is like the deb-installer that is present on
any Debian-based MiniCD, and MOS (packages + manifests) is like the packages
that are present on the DVD (and downloaded in the MiniCD case). You don't
want to dig into the deb-installer, but you might want to install different
software from different sources. In the same way, you don't want to mess with
Fuel itself, while you might want to install a customized MOS from a local
repo (or from the resulting image).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] AttributeError: 'GitReviewException' object has no attribute 'EXIT_CODE'

2015-07-26 Thread Yuriy Taraday
Hello, Malhar.

It seems that the actual error is hidden behind the "traceback object at
0x7f337522e3f8"-like lines. Judging by the line numbers in the traceback that
is shown, you're using version 1.25.0, and there I can see how the error got
swallowed:
https://github.com/openstack-infra/git-review/blob/1.25.0/git_review/cmd.py#L768

Can you edit the file /usr/local/lib/python2.7/dist-packages/git_review/cmd.py,
adding something like "import traceback; traceback.print_exc()" before or
after that line? It should provide some insight into what error git-review
gets when it tries to contact Gerrit.
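
Something along these lines (a self-contained sketch; the real code around
that line in git_review/cmd.py obviously differs):

    # A self-contained sketch of the suggested edit; this only shows where the
    # temporary traceback.print_exc() call goes.
    import traceback

    def contact_gerrit():
        # Stand-in for the git-review call that currently fails silently.
        raise RuntimeError("permission denied (example)")

    try:
        contact_gerrit()
    except Exception:
        traceback.print_exc()  # temporary: print the real error before it is
                               # turned into a generic message
        raise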

On Sun, Jul 26, 2015 at 5:33 PM Malhar Vora mlvora.2...@gmail.com wrote:

 Hi Jeremy,

 I have signed agreement on https://review.openstack.org/. That is not a
 problem. http://about.me/malhar.vora

 On Sun, Jul 26, 2015 at 7:25 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-07-26 02:53:30 -0700 (-0700), Malhar Vora wrote:
  I have done everything from scratch and followed below step.
 
  Please check and tell me what is missing,
 [...]
  3. Created account in https://review.openstack.org and updated contact
  details
 [...]

 There's a related step you've skipped here. From our instructions:

 agree to the Individual Contributor License Agreement at
 https://review.openstack.org/#/settings/agreements

 I expect that's the problem, but don't have time to check the ICLA
 signers group membership right now to confirm.

  6. Installed git-review using apt-get install git-review
 [...]
  We don't know where your gerrit is. Please manually create a remote
  named gerrit and try again.
  Could not connect to gerrit at ssh://
  malhar_v...@review.openstack.org:29418/openstack/ironic.git
 [...]

 This is likely obscuring the actual error. Chances are the version
 of git-review in your distribution is misinterpreting the lack of a
 signed CLA during its test push as a failure to connect to Gerrit.
 This was a known issue in one or more releases of the utility. More
 recent versions (1.25.0 certainly, maybe 1.24 as well but I'm having
 trouble tracking down exactly which commit improved that situation)
 are clearer about this and you'll actually see the CLA or contact
 info errors rather than a misleading error about SSH credentials.


[openstack-dev] [oslo] Need new release of stable oslotest with capped mock

2015-07-15 Thread Yuriy Taraday
Hello, oslo team.

With the recent mock nightmare we should now release a new stable version of
oslotest so that projects that depend on oslotest but don't directly depend
on mock will be unblocked in the gate.

I found out about this from this review: [0]
It fails because stable oslotest 1.5.1 has an uncapped dependency on mock for
2.6. It remains so because Proposal Bot's review to update
requirements in oslotest [1] got stuck because of a problem with a new(er)
version of fixtures. That has been fixed in oslotest master 2 weeks ago [2],
but hasn't been backported to stable/kilo, so I've created a CR [3] (the
change touches only a test for oslotest, so it's doubly safe for stable).

So after CRs [3] and [1] are merged to oslotest we should release a new stable
version (1.5.2, I guess) for it, and then we can update the requirements in
oslo.concurrency [0].

All that said, it looks like we need to pay more attention to Proposal Bot's
failures. Such a failure should trigger a loud alarm and make Zuul blink all
red, since it most likely means that something got broken in our requirements
and no one would notice until it breaks something else.

[0] https://review.openstack.org/201862
[1] https://review.openstack.org/201196
[2] https://review.openstack.org/197900
[3] https://review.openstack.org/202091
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Need new release of stable oslotest with capped mock

2015-07-15 Thread Yuriy Taraday
On Wed, Jul 15, 2015 at 4:14 PM Yuriy Taraday yorik@gmail.com wrote:

 Hello, oslo team.

 With the recent mock nightmare we should now release a new stable version of
 oslotest so that projects that depend on oslotest but don't directly depend
 on mock will be unblocked in gate.

 I found out about this from this review: [0]
 It fails because stable oslotest 1.5.1 have uncapped dependency on mock
 for 2.6. It still remains so because Proposal Bot's review to update
 requirements in oslotest [1] got stuck because of a problem with new(er)
 version of fixtures. It has been fixed in oslotest master 2 weeks ago [2],
 but hasn't been backported to stable/kilo, so I've created a CR [3] (change
 touches only a test for oslotest, so it's double-safe for stable).

 So after CRs [3][1] are merged to oslotest we should release a new stable
 version (1.5.2, I guess) for it and then we can update requirements in
 oslo.concurrency [0].

 All that said it looks like we need to pay more attention to Proposal
 Bot's failures. It should trigger a loud alarm and make Zuul blink all red
 since it most likely means that something got broken in our requirements
 and noone would notice until it breaks something else.

 [0] https://review.openstack.org/201862
 [1] https://review.openstack.org/201196
 [2] https://review.openstack.org/197900
 [3] https://review.openstack.org/202091


Looks like there's another mock-related change that should be backported to
the stable branch: https://review.openstack.org/202111
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we need python-fasteners and not just oslo.concurrency?

2015-07-15 Thread Yuriy Taraday
On Wed, Jul 15, 2015 at 3:32 PM Thomas Goirand z...@debian.org wrote:

 I've seen that the latest version of taskflow needs fasteners, which
 handles lock stuff. Why can't this go into oslo.concurrency?


It already did (in a way): https://review.openstack.org/185291
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Looking for help getting git-review to work over https

2015-06-11 Thread Yuriy Taraday
On Thu, Jun 11, 2015, 18:09 KARR, DAVID dk0...@att.com wrote:

I could use some help with setting up git-review in a slightly unfriendly
firewall situation.

I'm trying to set up git-review on my CentOS7 VM, and our firewall blocks
the non-standard ssh port.  I'm following the instructions at
http://docs.openstack.org/infra/manual/developers.html#accessing-gerrit-over-https
, for configuring git-review to use https on port 443, but this still isn't
working (times out with Could not connect to gerrit).  I've confirmed
that I can reach other external sites on port 443.

Can someone give me a hand with this?



 Hello.


- Can you please post all output from "git review -vs"?

- Do you have the gerrit remote already configured?

- Do you have access to https://review.openstack.org/ from your browser?

- Can you access it from the command line (via "curl -I
https://review.openstack.org/", for example)?

- Does "git ls-remote https://review.openstack.org/openstack/nova >
/dev/null" produce an error?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][third-party][neutron-lbaas] git review - how to provide the URL to the test artifacts

2015-04-30 Thread Yuriy Taraday
Hello, Shane.

git-review doesn't support this. You can add a comment using existing
Gerrit APIs: either via SSH [0] or via HTTP [1].

[0] https://review.openstack.org/Documentation/cmd-review.html#_examples
[1]
https://review.openstack.org/Documentation/rest-api-changes.html#set-review
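
For example, a rough sketch of posting the log URL via the set-review REST
endpoint from [1] (the change number, revision, user and password are
placeholders, and the auth scheme depends on your Gerrit setup):

    # A rough sketch of posting the artifact URL back as a review comment.
    import requests

    GERRIT = "https://review.openstack.org"

    def post_log_link(change_id, revision, log_url, user, http_password):
        resp = requests.post(
            "%s/a/changes/%s/revisions/%s/review" % (GERRIT, change_id,
                                                     revision),
            json={"message": "Build logs: %s" % log_url},
            auth=(user, http_password))
        resp.raise_for_status()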

On Tue, Apr 28, 2015 at 8:05 PM Shane McGough smcgo...@kemptechnologies.com
wrote:

  Hi all


  I am running into trouble with how to post back the link to the log
 artefacts after running the CI.


  I can see how this is done in zuul using the url_pattern in zuul.conf,
 but as it stands now I am only using jenkins and the command line to
 monitor gerrit and build test environments.


  Is there a way to provide the URL back to gerrit with git review via ssh
 in the command line?


  Thanks


   Shane McGough
 Junior Software Developer
 *KEMP Technologies*
 National Technology Park, Limerick, Ireland.

  kemptechnologies.com | @KEMPtech https://twitter.com/KEMPtech |
 LinkedIn https://www.linkedin.com/company/kemp-technologies

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-22 Thread Yuriy Taraday
On Sun Feb 22 2015 at 6:27:16 AM Michael Bayer mba...@redhat.com wrote:




  On Feb 21, 2015, at 9:49 PM, Joshua Harlow harlo...@outlook.com wrote:
 
  Some comments/questions inline...
 
  Mike Bayer wrote:
 
  Yuriy Taraday yorik@gmail.com wrote:
 
  On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com
 wrote:
  This feels like something we could do in the service manager base
 class,
  maybe by adding a post fork hook or something.
  +1 to that.
 
  I think it'd be nice to have the service __init__() maybe be something
 like:
 
def __init__(self, threads=1000, prefork_callbacks=None,
 postfork_callbacks=None):
   self.postfork_callbacks = postfork_callbacks or []
   self.prefork_callbacks = prefork_callbacks or []
   # always ensure we are closing any left-open fds last...
   self.prefork_callbacks.append(self._close_descriptors)
   ...
 
  (you must've meant postfork_callbacks.append)
 
  Note that multiprocessing module already have 
  `multiprocessing.util.register_after_fork`
 method that allows to register callback that will be called every time a
 Process object is run. If we remove explicit use of `os.fork` in
 oslo.service (replace it with Process class) we'll be able to specify any
 after-fork callbacks in libraries that they need.
  For example, EngineFacade could register `pool.dispose()` callback
 there (it should have some proper finalization logic though).
 
  +1 to use Process and the callback system for required initialization
 steps
  and so forth, however I don’t know that an oslo lib should silently
 register
  global events on the assumption of how its constructs are to be used.
 
  I think whatever Oslo library is responsible for initiating the
 Process/fork
  should be where it ensures that resources from other Oslo libraries are
 set
  up correctly. So oslo.service might register its own event handler with
 
  Sounds like some kind of new entrypoint + discovery service that
 oslo.service (eck can we name it something else, something that makes it
 useable for others on pypi...) would need to plug-in to. It would seems
 like this is a general python problem (who is to say that only oslo
 libraries use resources that need to be fixed/closed after forking); are
 there any recommendations that the python community has in general for this
 (aka, a common entrypoint *all* libraries export that allows them to do
 things when a fork is about to occur)?
 
  oslo.db such that it gets notified of new database engines so that it
 can
  associate a disposal with it; it would do something similar for
  oslo.messaging and other systems that use file handles.   The end
  result might be that it uses register_after_fork(), but the point is
 that
  oslo.db.sqlalchemy.create_engine doesn’t do this; it lets oslo.service
  apply a hook so that oslo.service can do it on behalf of oslo.db.
 
  Sounds sort of like global state/a 'open resource' pool that each
 library needs to maintain internally to it that tracks how
 applications/other libraries are using it; that feels sorta odd IMHO.
 
  Wouldn't that mean libraries that provide back resource objects, or
 resource containing objects..., for others to use would now need to capture
 who is using what (weakref pools?) to retain what all the resources are
 being used and by whom (so that they can fix/close them on fork); not every
 library has a pool (like sqlalchemy afaik does) to track these kind(s) of
 things (for better or worse...). And what if those libraries use other
 libraries that use resources (who owns what?); seems like this just gets
 very messy/impractical pretty quickly once you start using any kind of 3rd
 party library that doesn't follow the same pattern... (which brings me back
 to the question of isn't there a common python way/entrypoint that deal
 with forks that works better than ^).
 
 
  So, instead of oslo.service cutting through and closing out the file
  descriptors from underneath other oslo libraries that opened them, we
 set up
  communication channels between oslo libs that maintain a consistent
 layer of
  abstraction, and instead of making all libraries responsible for the
 side
  effects that might be introduced from other oslo libraries, we make the
  side-effect-causing library the point at which those effects are
  ameliorated as a service to other oslo libraries.   This allows us to
 keep
  the knowledge of what it means to use “multiprocessing” in one
  place, rather than spreading out its effects.
 
  If only we didn't have all those other libraries[1] that people use to
 (that afaik highly likely also have resources they open); so even with
 getting oslo.db and oslo.messaging into this kind of pattern, we are still
 left with the other 200+ that aren't/haven't been following this pattern ;-)

 I'm only trying to solve well known points like this one between two Oslo
 libraries.   Obviously trying to multiply out this pattern times all
 libraries, 

Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-21 Thread Yuriy Taraday
On Fri Feb 20 2015 at 9:14:30 PM Joshua Harlow harlo...@outlook.com wrote:

  This feels like something we could do in the service manager base class,
  maybe by adding a post fork hook or something.

 +1 to that.

 I think it'd be nice to have the service __init__() maybe be something
 like:

   def __init__(self, threads=1000, prefork_callbacks=None,
postfork_callbacks=None):
  self.postfork_callbacks = postfork_callbacks or []
  self.prefork_callbacks = prefork_callbacks or []
  # always ensure we are closing any left-open fds last...
  self.prefork_callbacks.append(self._close_descriptors)
  ...


(you must've meant postfork_callbacks.append)

Note that the multiprocessing module already has a
`multiprocessing.util.register_after_fork` method that allows registering a
callback that will be called every time a Process object is run. If we
remove the explicit use of `os.fork` in oslo.service (replace it with the
Process class), we'll be able to specify whatever after-fork callbacks
libraries need. For example, EngineFacade could register a `pool.dispose()`
callback there (it should have some proper finalization logic, though).
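
A minimal sketch of what I mean (this is not oslo.service code; FakePool is
just a stand-in for a real connection pool, and it assumes the default fork
start method on Linux):

    import multiprocessing
    import multiprocessing.util

    class FakePool(object):
        """Stand-in for something like an SQLAlchemy connection pool."""
        def dispose(self):
            print("dropping inherited connections in %s"
                  % multiprocessing.current_process().name)

    def worker():
        print("child is doing its work")

    if __name__ == "__main__":
        pool = FakePool()
        # Runs in every child right after a Process object starts, which is
        # where something like EngineFacade could drop connections inherited
        # from the parent.
        multiprocessing.util.register_after_fork(pool, FakePool.dispose)
        p = multiprocessing.Process(target=worker)
        p.start()
        p.join()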

I'd also suggest avoiding closing any fds in a library that doesn't own them.
That would definitely cause headaches for developers who expect shared
descriptors to keep working.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-02-02 Thread Yuriy Taraday
On Mon Feb 02 2015 at 11:49:31 AM Julien Danjou jul...@danjou.info wrote:

 On Fri, Jan 30 2015, Yuriy Taraday wrote:

  That's a great research! Under its impression I've spent most of last
  evening reading PyMySQL sources. It looks like it not as much need C
  speedups currently as plain old Python optimizations. Protocol parsing
 code
  seems very inefficient (chained struct.unpack's interleaved with data
  copying and util method calls that do the same struct.unpack with
  unnecessary type check... wow...) That's a huge place for improvement.
  I think it worth spending time on coming vacation to fix these slowdowns.
  We'll see if they'll pay back those 10% slowdown people are talking
 about.

 With all my respect, you may be right, but I need to say it'd be better
 to profile and then optimize rather than spend time rewriting random
 parts of the code then hoping it's going to be faster. :-)


Don't worry, I do profile. Currently I use the mini-benchmark Mike provided
and am optimizing the hottest methods. I'm already getting 25% more speed in
this case, and that's not the limit. I will be posting pull requests to
PyMySQL soon.
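
For reference, the kind of loop I profile looks roughly like this (a sketch
only, not Mike's actual benchmark; the connection parameters are made up):

    import cProfile

    import pymysql

    conn = pymysql.connect(host="127.0.0.1", user="test",
                           password="test", db="test")

    def bench(n=10000):
        cur = conn.cursor()
        for _ in range(n):
            cur.execute("SELECT 1")
            cur.fetchall()

    # Sorting by cumulative time shows which protocol-parsing helpers dominate.
    cProfile.run("bench()", sort="cumulative")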
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-30 Thread Yuriy Taraday
On Thu Jan 29 2015 at 12:59:34 AM Mike Bayer mba...@redhat.com wrote:

 Hey list -


Hey, Mike.

While PyMySQL is lacking test coverage in some areas, has no external
 documentation, and has at least some areas where Python performance can be
 improved, the basic structure of the driver is perfectly fine and
 straightforward.  I can envision turning this driver into a total monster,
 adding C-speedups where needed but without getting in the way of async
 patching, adding new APIs for explicit async, and everything else.
  However, I’ve no idea what the developers have an appetite for.

 Please review the document at https://wiki.openstack.org/
 wiki/PyMySQL_evaluation.


That's great research! Under its impression I've spent most of last evening
reading the PyMySQL sources. It looks like it doesn't need C speedups right
now as much as plain old Python optimizations. The protocol parsing code
seems very inefficient (chained struct.unpack's interleaved with data
copying and util method calls that do the same struct.unpack with an
unnecessary type check... wow...). That's a huge area for improvement.
I think it's worth spending time during the coming vacation to fix these
slowdowns. We'll see if that pays back the 10% slowdown people are talking
about.
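
To give a rough idea of the kind of rewrite I mean (a simplified sketch; the
field layout here is made up and is not the real MySQL packet format):

    import struct

    # Chained parsing, roughly the current pattern: one unpack call (plus
    # slicing and helper methods) per field.
    def parse_header_slow(data):
        affected_rows = struct.unpack("<B", data[0:1])[0]
        insert_id = struct.unpack("<B", data[1:2])[0]
        server_status = struct.unpack("<H", data[2:4])[0]
        warning_count = struct.unpack("<H", data[4:6])[0]
        return affected_rows, insert_id, server_status, warning_count

    # One combined unpack: a single format string, no intermediate slices.
    def parse_header_fast(data):
        return struct.unpack_from("<BBHH", data)

    data = b"\x01\x02\x03\x00\x04\x00"
    assert parse_header_slow(data) == parse_header_fast(data)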
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Deprecation of LDAP Assignment (Only Affects Project/Tenant/Role/Assignment info in LDAP)

2015-01-29 Thread Yuriy Taraday
Hello.

On Wed Jan 28 2015 at 11:30:43 PM Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 LDAP is used in Keystone as a backend for both the Identity (Users and
 groups) and assignments (assigning roles to users) backend.

 Where did the LDAP Assignment backend come from? We originally had a
 single backend for Identity (users, groups, etc) and Assignment
 (Projects/Tenants, Domains, Roles, and everything else
 not-users-and-groups). When we did the split of Identity and Assignment we
 needed to support the organizations that deployed everything in the LDAP
 backend. This required both a driver for Identity and Assignment.

  We are planning on keeping support for identity while deprecating support
 for assignment.  There is only one known organization that this will impact
 (CERN) and they have a transition plan in place already.


I can name (well, actually I can't do it here) quite a few of our customers
who do use the LDAP assignment backend. The issue it solves is data
replication across data centers. What would be the proposed solution for
them? MySQL multi-master replication (Galera) is feared to perform badly
across DCs.

The Problem
 ——
 The SQL Assignment backend has become significantly more feature rich and
 due to the limitations of the basic LDAP schemas available (most LDAP
 admins wont let someone load custom schemas), the LDAP assignment backend
 has languished and fallen further and further behind. It turns out almost
 no deployments use LDAP to house projects/tenants, domains, roles, etc. A
 lot of deployments use LDAP for users and groups.

 We explored many options on this front and it boiled down to three:

 1. Try and figure out how to wedge all the new features into a sub-optimal
 data store (basic/standard LDAP schemas)
 2. Create a custom schema for LDAP Assignment. This would require
 convincing LDAP admins (or Active Directory admins) to load a custom
 schema. This also was a very large amount of work for a very small
 deployment base.
 3. Deprecate the LDAP Assignment backend and work with the community to
 support (if desired) an out-of-tree LDAP driver (supported by those who
 need it).


I'd like to note that it is in fact possible to make the LDAP backend work
even with the native AD schema, without modifications. The only schema issue
that has been hanging around since the very beginning of the LDAP driver is
the use of groupOfNames for projects and nesting other objects under it.
With some fixes we managed to make it work with the stock AD schema, with no
modifications, for Havana and ported that to Icehouse.

Based upon interest, workload, and general maintainability issues, we have
 opted to deprecate the LDAP Assignment backend. What does this mean?


 1. This means effective as of Kilo, the LDAP assignment backend is
 deprecated and Frozen.
 1.a. No new code/features will be added to the LDAP Assignment backend.
 1.b. Only exception to 1.a is security-related fixes.

 2.The LDAP Assignment backend ([assignment]/driver” config option set to
 “keystone.assignment.backends.ldap.Assignment” or a subclass) will remain
 in-tree with plans to be removed in the “M”-release.
 2.a. This is subject to support beyond the “M”-release based upon what the
 keystone development team and community require.


Is there a possibility that this decision will be amended if someone steps
up to properly maintain the LDAP backend? Developing such a driver out of
the main tree would be really hard; it would mostly be catching up with
mainline work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-11-07 Thread Yuriy Taraday
Hello, Miguel.

I switched departments recently and unfortunately don't have much free time
for community work. Feel free to pick up my change requests and push them
if you have time. I'll try to keep track of these changes and give some
feedback on them on occasion, but don't wait on me.
Thank you for keeping this feature in mind. I'd be glad to see it finally
used in Neutron (and any other project).

-- 

Kind regards, Yuriy.

On Fri, Nov 7, 2014 at 1:05 PM, Miguel Ángel Ajo majop...@redhat.com
wrote:


 Hi Yorik,

I was talking with Mark Mcclain a minute ago here at the summit about
 this. And he told me that now at the start of the cycle looks like a good
 moment to merge the spec & the root wrap daemon bits, so we have a lot of
 headroom for testing during the next months.

We need to upgrade the spec [1] to the new Kilo format.

Do you have some time to do it?, I can allocate some time and do it
 right away.

 [1] https://review.openstack.org/#/c/93889/
 --
 Miguel Ángel Ajo
 Sent with Sparrow http://www.sparrowmailapp.com/?sig

 On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:

 +1

 Sent from my Android phone using TouchDown (www.nitrodesk.com)


 -Original Message-
 From: Yuriy Taraday [yorik@gmail.com]
 Received: Thursday, 24 Jul 2014, 0:42
 To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]

 Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
mode support


 Hello.

 I'd like to propose making a spec freeze exception for
 rootwrap-daemon-mode spec [1].

 Its goal is to save agents' execution time by using daemon mode for
 rootwrap and thus avoiding python interpreter startup time as well as sudo
 overhead for each call. Preliminary benchmark shows 10x+ speedup of the
 rootwrap interaction itself.

 This spec have a number of supporters from Neutron team (Carl and Miguel
 gave it their +2 and +1) and have all code waiting for review [2], [3], [4].
 The only thing that has been blocking its progress is Mark's -2 left when
 oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
 in oslo.rootwrap is steadily getting approved [5].

 [1] https://review.openstack.org/93889
 [2] https://review.openstack.org/82787
 [3] https://review.openstack.org/84667
 [4] https://review.openstack.org/107386
 [5]
 https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Neutron] Rootwrap daemon mode support

2014-11-07 Thread Yuriy Taraday
It hasn't been started yet. AFAIR Nova people wanted to see it working in
Neutron first (and I asked them too late in the Juno cycle), so I tried to
push it to Neutron only first.
I don't know if anyone is interested in implementing this for Nova, but I'll
ask around.

On Fri, Nov 7, 2014 at 3:35 PM, Miguel Ángel Ajo majop...@redhat.com
wrote:

  Yuriy, what’s the status of the rootwrap-daemon implementation on the
 nova side?, was it merged?, otherwise do you think there could be anyone
 interested in picking it up?

 Best regards,

 --
 Miguel Ángel Ajo
 Sent with Sparrow http://www.sparrowmailapp.com/?sig

 On Friday, 7 de November de 2014 at 11:52, Miguel Ángel Ajo wrote:

  Ohh, sad to hear that Yuriy, you were doing an awesome work. I will take
 some time to re-review the final state of the code and specs, and move it
 forward. Thank you very much for your contribution.

 --
 Miguel Ángel Ajo
 Sent with Sparrow http://www.sparrowmailapp.com/?sig

 On Friday, 7 de November de 2014 at 11:44, Yuriy Taraday wrote:

 Hello, Miguel.

 I switched departments recently and unfortunately don't have much free
 time for community work. Feel free to pick up my change requests and push
 them if you have time. I'll try to keep track of these changes and give
 some feedback on them on occasion, but don't wait on me.
 Thank you for keeping this feature in mind. I'd be glad to see it finally
 used in Neutron (and any other project).

 --

 Kind regards, Yuriy.

 On Fri, Nov 7, 2014 at 1:05 PM, Miguel Ángel Ajo majop...@redhat.com
 wrote:


 Hi Yorik,

I was talking with Mark Mcclain a minute ago here at the summit about
 this. And he told me that now at the start of the cycle looks like a good
 moment to merge the spec & the root wrap daemon bits, so we have a lot of
 headroom for testing during the next months.

We need to upgrade the spec [1] to the new Kilo format.

Do you have some time to do it?, I can allocate some time and do it
 right away.

 [1] https://review.openstack.org/#/c/93889/
 --
 Miguel Ángel Ajo
 Sent with Sparrow http://www.sparrowmailapp.com/?sig

 On Thursday, 24 de July de 2014 at 01:42, Miguel Angel Ajo Pelayo wrote:

 +1

 Sent from my Android phone using TouchDown (www.nitrodesk.com)


 -Original Message-
 From: Yuriy Taraday [yorik@gmail.com]
 Received: Thursday, 24 Jul 2014, 0:42
 To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]

 Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
mode support


 Hello.

 I'd like to propose making a spec freeze exception for
 rootwrap-daemon-mode spec [1].

 Its goal is to save agents' execution time by using daemon mode for
 rootwrap and thus avoiding python interpreter startup time as well as sudo
 overhead for each call. Preliminary benchmark shows 10x+ speedup of the
 rootwrap interaction itself.

 This spec have a number of supporters from Neutron team (Carl and Miguel
 gave it their +2 and +1) and have all code waiting for review [2], [3], [4].
 The only thing that has been blocking its progress is Mark's -2 left when
 oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
 in oslo.rootwrap is steadily getting approved [5].

 [1] https://review.openstack.org/93889
 [2] https://review.openstack.org/82787
 [3] https://review.openstack.org/84667
 [4] https://review.openstack.org/107386
 [5]
 https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z



  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][multidomain] - Time to leave LDAP backend?

2014-09-10 Thread Yuriy Taraday
On Tue, Sep 9, 2014 at 8:25 AM, Nathan Kinder nkin...@redhat.com wrote:

 On 09/01/2014 01:43 AM, Marcos Fermin Lobo wrote:
  Hi all,
 
 
 
  I found two functionalities for keystone that could be against each
 other.
 
 
 
  Multi-domain feature (This functionality is new in Juno.)
 
  ---
 
  Link:
 
 http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers
 
 
  Keystone supports the option to specify identity driver configurations
  on a domain by domain basis, allowing, for example, a specific domain to
  have its own LDAP or SQL server. So, we can use different backends for
  different domains. But, as Henry Nash said “it has not been validated
  with multiple SQL drivers”
  https://bugs.launchpad.net/keystone/+bug/1362181/comments/2
 
 
 
  Hierarchical Multitenancy
 
  
 
  Link:
 
 https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
 
  This is nested projects feature but, only for SQL, not LDAP.
 
 
 
  So, if you are using LDAP and you want “nested projects” feature, you
  should to migrate from LDAP to SQL but, I you want to get multi-domain
  feature too you can’t use 2 SQL backends (you need at least one LDAP
  backend) because is not validated for multiple SQL drivers…
 
 
 
  Maybe I’m losing something, please, correct me if I’m wrong.
 
 
 
  Here my questions:
 
 
 
  -  If I want Multi-domain and Hierarchical Multitenancy
  features, which are my options? What should I do (migrate or not migrate
  to SQL)?
 
  -  Is LDAP going to deprecated soon?

 I think you need to keep in mind that there are two separate backends
 that support LDAP: identity and assignment.

 From everyone I have talked to on the Keystone team, SQL is preferred
 for the assignment backend.  Storing assignment information in LDAP
 seems to be a non-standard use case.

 For the identity backend, LDAP is preferred.  Many people have users and
 groups already in an LDAP server, and Keystone should be able to take
 advantage of those existing users and credentials for centralized
 authentication.  In addition, every LDAP server I know have has better
 security features than the SQL identity backend offers, such as password
 policies and account lockout.

 The multiple domain support for multiple LDAP servers was really
 designed to allow for separate groups of users from separate identity
 LDAP servers to be usable in a single Keystone instance.

 Given that the Keystone team considers SQL as the preferred assignment
 backend, the hierarchical project blueprint was targeted against it.
 The idea is that you would use LDAP server(s) for your users and have
 hierarchical projects in SQL.

 My personal feeling is that the LDAP assignment backend should
 ultimately be deprecated.  I don't think the LDAP assignment backend
 really offers any benefit of SQL, and you have to define some
 non-standard LDAP schema to represent projects, roles, etc., or you end
 up trying to shoehorn the data into standard LDAP schema that was really
 meant for something else.

 It would be interesting to create a poll like Morgan did for the
 Keystone token format to see how widely the LDAP assignments backend is.
  Even more interesting would be to know the reasons why people are using
 it over SQL.


Please don't consider the LDAP assignment backend an outcast. It is used,
and we have use cases where it's the only way to go.

Some enterprises with strict security policies require all security-related
tasks to be done through AD, and project/role assignment is one of them. The
LDAP assignment backend is the right fit here.
Storing such info in AD has the additional benefit of providing not only a
single management point but also enterprise-ready cross-datacenter
replication. (Galera and other MySQL replication schemes arguably don't
quite work for this.)
From what I see, the only obstruction here is the need for a custom LDAP
schema for AD (which doesn't fly with strict enterprise constraints). That
can be mitigated by using AD-native objectClasses for projects and groups
instead of 'groupOfNames' and 'organizationalRole': 'organizationalUnit' and
'group'. These objects can be managed with commonly used AD tools (not an
LDAP editor), but they require some changes in Keystone to work. We've
hacked together some patches to Keystone that should make it work and will
propose them in the Kilo cycle.
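
For illustration, the change is roughly from entries like the first one
below to entries like the second (a sketch only; DNs and values are made
up):

    # What the default driver schema expects (awkward in stock AD):
    dn: cn=project1,ou=Projects,dc=example,dc=com
    objectClass: groupOfNames
    cn: project1
    member: cn=dumb,dc=nonexistent

    # AD-native equivalent, manageable with standard AD tooling:
    dn: ou=project1,ou=Projects,dc=example,dc=com
    objectClass: organizationalUnit
    ou: project1
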
Another missing feature is domains/hierarchical projects. It's not
impossible to implement this in the LDAP backend, but we need someone to
step up here. With OUs it should be rather obvious how to store these in
LDAP, but we'll need some algorithmic support as well.

We shouldn't give up on the LDAP backend. It's used by a lot of private
clouds and some public ones. The problem is that its users usually aren't
ready to make the necessary changes to it, and so they have to bend their
rules to make the existing backend work. Some of them are already giving
back: connection

Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-10 Thread Yuriy Taraday
On Tue, Sep 9, 2014 at 9:58 PM, Doug Hellmann d...@doughellmann.com wrote:


 On Sep 9, 2014, at 10:51 AM, Sean Dague s...@dague.net wrote:

  On 09/09/2014 10:41 AM, Doug Hellmann wrote:
 
  On Sep 8, 2014, at 8:18 PM, James E. Blair cor...@inaugust.com wrote:
 
  Sean Dague s...@dague.net writes:
 
  The crux of the issue is that zookeeper python modules are C
 extensions.
  So you have to either install from packages (which we don't do in unit
  tests) or install from pip, which means forcing zookeeper dev packages
  locally. Realistically this is the same issue we end up with for mysql
  and pg, but given their wider usage we just forced that pain on
 developers.
  ...
  Which feels like we need some decoupling on our requirements vs. tox
  targets to get there. CC to Monty and Clark as our super awesome tox
  hackers to help figure out if there is a path forward here that makes
 sense.
 
  From a technical standpoint, all we need to do to make this work is to
  add the zookeeper python client bindings to (test-)requirements.txt.
  But as you point out, that makes it more difficult for developers who
  want to run unit tests locally without having the requisite libraries
  and header files installed.
 
  I don’t think I’ve ever tried to run any of our unit tests on a box
 where I hadn’t also previously run devstack to install all of those sorts
 of dependencies. Is that unusual?
 
  It is for Linux users, running local unit tests is the norm for me.

 To be clear, I run the tests on the same host where I ran devstack, not in
 a VM. I just use devstack as a way to bootstrap all of the libraries needed
 for the unit test dependencies. I guess I’m just being lazy. :-)


You can't run devstack everywhere you code (and want to run tests). I, for
example, can't run devstack on my work laptop because I use Gentoo there.
And I have Mac OS X on my home laptop, so no devstack there either. The
latter should be the more frequent case in the community.

That said, I've never had a problem emerging (on either of those systems)
the C libraries necessary for the tests to run. As long as they don't pull
in a lot of (or any) Linux-specific dependencies, it's fine.

For me this issue is a case for setuptools' extras. The only problem with
them is that we can't specify them in requirements.txt files currently, so
we'd have to add another hack to pbr to gather extra dependencies from files
like requirements-extra_name.txt or something like that.
Then we can provide different tox venvs for different extras sets.
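
To make it concrete, the end result for a library could look something like
this (a sketch; the package and extra names are made up, and pbr can't
generate this from requirements files today, which is the hack mentioned
above):

    # setup.py of a hypothetical library with an optional ZooKeeper backend
    import setuptools

    setuptools.setup(
        name="somelib",
        version="0.1.0",
        packages=["somelib"],
        install_requires=["six"],
        extras_require={
            # Only installed via "pip install somelib[zookeeper]", so only
            # people who need this backend pay for building the C extension.
            "zookeeper": ["some-zk-binding"],
        },
    )

A tox env for that flavour would then install somelib[zookeeper], while the
default unit test env skips the C dependency entirely.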

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-04 Thread Yuriy Taraday
On Wed, Sep 3, 2014 at 7:24 PM, Doug Hellmann d...@doughellmann.com wrote:

 On Sep 3, 2014, at 5:27 AM, Yuriy Taraday yorik@gmail.com wrote:

 On Tue, Sep 2, 2014 at 11:17 PM, Clark Boylan cboy...@sapwetik.org
 wrote:

 It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.


 We can make eventlet an optional dependency of oslo.messaging (through
 setuptools' extras). In fact I don't quite understand the need for eventlet
 as direct dependency there since we can just write code that uses threading
 library and it'll get monkeypatched if consumer app wants to use eventlet.


 There is code in the messaging library that makes calls directly into
 eventlet now, IIRC. It sounds like that could be changed, but that’s
 something to consider for a future version.


Yes, I hope to see a unified threading/eventlet executor there
(futures-based, I guess) someday.

The last time I looked at setuptools extras they were a documented but
 unimplemented specification. Has that changed?


According to the docs [1] it works in pip (and has been working in
setuptools for ages), and according to bug [2], it has been working for a
couple of years.

[1] http://pip.readthedocs.org/en/latest/reference/pip_install.html#examples
(#6)
[2] https://github.com/pypa/pip/issues/7

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-04 Thread Yuriy Taraday
On Wed, Sep 3, 2014 at 8:21 PM, Doug Hellmann d...@doughellmann.com wrote:

  On Sep 3, 2014, at 11:57 AM, Clark Boylan cboy...@sapwetik.org wrote:
  On Wed, Sep 3, 2014, at 08:22 AM, Doug Hellmann wrote:
 
  On Sep 2, 2014, at 3:17 PM, Clark Boylan cboy...@sapwetik.org wrote:
  The setup.cfg classifiers should be able to do that for us, though PBR
  may need updating? We will also need to learn to upload potentially more than one
 
  How do you see that working? We want all of the Oslo libraries to,
  eventually, support both python 2 and 3. How would we use the
 classifiers
  to tell when to build a universal wheel and when to build separate
  wheels?
 
  The classifiers provide info on the versions of python we support. By
  default we can build python2 wheel if only 2 is supported, build python3
  wheel if only 3 is supported, build a universal wheel if both are
  supported. Then we can add a setup.cfg flag to override the universal
  wheel default to build both a python2 and python3 wheel instead. Dstufft
  and mordred should probably comment on this idea before we implement
  anything.

 OK. I’m not aware of any python-3-only projects, and the flag to override
 the universal wheel is the piece I was missing. I think there’s already a
 setuptools flag related to whether or not we should build universal wheels,
 isn’t there?


I think we should rely on the wheel.universal flag from setup.cfg if it's
there. If it's set, we should always build universal wheels. If it's not
set, we should look at the classifiers and build wheels for the Python
versions that are mentioned there.
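
In other words, something like this (a sketch of the two setup.cfg cases;
the metadata is made up):

    # Case 1: one universal wheel is enough.
    [wheel]
    universal = 1

    # Case 2: no universal flag, so build both a py2 and a py3 wheel because
    # both interpreters show up in the classifiers.
    [metadata]
    classifier =
        Programming Language :: Python :: 2.7
        Programming Language :: Python :: 3.4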

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-04 Thread Yuriy Taraday
On Thu, Sep 4, 2014 at 4:47 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-09-03 13:27:55 +0400 (+0400), Yuriy Taraday wrote:
 [...]
  May be we should drop 3.3 already?

 It's in progress. Search review.openstack.org for open changes in
 all projects with the topic py34. Shortly I'll also have some
 infra config changes up to switch python33 jobs out for python34,
 ready to drop once the j-3 milestone has been tagged and is finally
 behind us.


Great! Looking forward to purging python 3.3 from my system.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-03 Thread Yuriy Taraday
On Tue, Sep 2, 2014 at 11:17 PM, Clark Boylan cboy...@sapwetik.org wrote:

 On Tue, Sep 2, 2014, at 11:30 AM, Yuriy Taraday wrote:
  Hello.
 
  Currently for alpha releases of oslo libraries we generate either
  universal
  or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
  releases in projects where Python 3.x is supported and verified in the
  gate. I've ran into this in change request [1] generated after
  global-requirements change [2]. There we have oslotest library that can't
  be built as a universal wheel because of different requirements (mox vs
  mox3 as I understand is the main difference). Because of that py33 job in
  [1] failed and we can't bump oslotest version in requirements.
 
  I propose to change infra scripts that generate and upload wheels to
  create
  py3 wheels as well as py2 wheels for projects that support Python 3.x (we
  can use setup.cfg classifiers to find that out) but don't support
  universal
  wheels. What do you think about that?
 
  [1] https://review.openstack.org/117940
  [2] https://review.openstack.org/115643
 
  --
 
  Kind regards, Yuriy.
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 We may find that we will need to have py3k wheels in addition to the
 existing wheels at some point, but I don't think this use case requires
 it. If oslo.test needs to support python2 and python3 it should use mox3
 in both cases which claims to support python2.6, 2.7 and 3.2. Then you
 can ship a universal wheel. This should solve the immediate problem.


Yes, I think that's the way to go for oslotest specifically. I've created a
change request for this: https://review.openstack.org/118551

It has been pointed out to me that one case where it won't be so easy is
 oslo.messaging and its use of eventlet under python2. Messaging will
 almost certainly need python 2 and python 3 wheels to be separate. I
 think we should continue to use universal wheels where possible and only
 build python2 and python3 wheels in the special cases where necessary.


We can make eventlet an optional dependency of oslo.messaging (through
setuptools' extras). In fact I don't quite understand the need for eventlet
as a direct dependency there, since we can just write code that uses the
threading library and it will get monkeypatched if the consumer app wants to
use eventlet.
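
Roughly what I mean (a toy sketch, not oslo.messaging code): the library
only ever imports threading, and the consumer decides whether it runs green:

    import eventlet
    eventlet.monkey_patch()   # done by the consumer app, as early as possible

    # Everything below is "library" code that knows nothing about eventlet:
    # it just uses the stdlib threading module, which is now green.
    import threading

    def start_poller(fn):
        t = threading.Thread(target=fn)
        t.start()
        return t

    done = threading.Event()
    start_poller(done.set)
    done.wait()
    print("poller finished")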

The setup.cfg classifiers should be able to do that for us, though PBR
 may need updating?


I don't think so: it loads all classifiers from setup.cfg, so they should
be available through some distutils machinery.

We will also need to learn to upload potentially more than one
 wheel in our wheel jobs. That bit is likely straightforward. The last
 thing that we need to make sure we do is that we have some testing in
 place for the special wheels. We currently have the requirements
 integration test which runs under python2 checking that we can actually
 install all the things together. This ends up exercising our wheels and
 checking that they actually work. We don't have a python3 equivalent for
 that job. It may be better to work out some explicit checking of the
 wheels we produce that applies to both versions of python. I am not
 quite sure how we should approach that yet.


I guess we can just repeat that check with Python 3.x. If I see it right,
all we need is to repeat the loop in pbr/tools/integration.sh with a
different Python version. One problem might be that we'd be running this
test with Python 3.4, which is the default on trusty, while all our unit
test jobs run on 3.3 instead. Maybe we should drop 3.3 already?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] [infra] Alpha wheels for Python 3.x

2014-09-02 Thread Yuriy Taraday
Hello.

Currently for alpha releases of oslo libraries we generate either universal
or Python 2.x-only wheels. This presents a problem: we can't adopt alpha
releases in projects where Python 3.x is supported and verified in the gate.
I ran into this in change request [1], generated after global-requirements
change [2]. There the oslotest library can't be built as a universal wheel
because of different requirements (mox vs mox3, as I understand, is the main
difference). Because of that, the py33 job in [1] failed and we can't bump
the oslotest version in requirements.

I propose to change the infra scripts that generate and upload wheels to
create py3 wheels as well as py2 wheels for projects that support Python 3.x
(we can use setup.cfg classifiers to find that out) but don't support
universal wheels. What do you think about that?

[1] https://review.openstack.org/117940
[2] https://review.openstack.org/115643

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-11 Thread Yuriy Taraday
On Mon, Aug 11, 2014 at 5:44 AM, Joshua Harlow harlo...@outlook.com wrote:

 One question from me:

 Will there be later fixes to remove oslo.config dependency/usage from
 oslo.concurrency?

 I still don't understand how oslo.concurrency can be used as a library
 with the configuration being set in a static manner via oslo.config (let's
 use the example of `lock_path` @ https://github.com/YorikSar/
 oslo.concurrency/blob/master/oslo/concurrency/lockutils.py#L41). For
 example:

 Library X inside application Z uses lockutils (via the nice
 oslo.concurrency library) and sets the configuration `lock_path` to its
 desired settings, then library Y (also a user of oslo.concurrency) inside
 same application Z sets the configuration for `lock_path` to its desired
 settings. Now both have some unknown set of configuration they have set and
 when library X (or Y) continues to use lockutils they will be using some
 mix of configuration (likely some mish mash of settings set by X and Y);
 perhaps to a `lock_path` that neither actually wants to be able to write
 to...

 This doesn't seem like it will end well; and will just cause headaches
 during debug sessions, testing, integration and more...

 The same question can be asked about the `set_defaults()` function, how is
 library Y or X expected to use this (are they?)??

 I hope one of the later changes is to remove/fix this??

 Thoughts?

 -Josh


I'd be happy to remove the lock_path config variable altogether. It's
basically never used. There are two basic branches in the code with respect
to lock_path:
- when you provide the lock_path argument to lock (and derivative
functions), a file-based lock is used and CONF.lock_path is ignored;
- when you don't provide lock_path in the arguments, a semaphore-based lock
is used and CONF.lock_path is just a prefix for its name (before hashing).

I wonder if users even set lock_path in their configs, as it has almost no
effect. So I'm all for removing it, but...
From what I understand, every major change in lockutils drags along a lot of
headaches for everybody (and the risk of bugs that would be discovered very
late). So is such a change really worth it? And if so, it will require very
thorough research into lockutils usage patterns.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
Hello, oslo cores.

I've finished polishing up the oslo.concurrency repo at [0] - please take a
look at it. I used my new version of graduate.sh [1] to generate it, so the
history looks a bit different from what you might be used to.

I've made as few changes as possible, so there are still some steps left
that should be done after the new repo is created:
- fix PEP8 errors H405 and E126;
- use strutils from oslo.utils;
- remove the eventlet dependency (along with random sleeps), but proper
testing with eventlet should remain;
- the fix for bug [2] should be applied from [3] (although it needs some
improvements);
- oh, there's really no limit for this...

I'll finalize and publish the relevant change request to
openstack-infra/config soon.

Looking forward to any feedback!

[0] https://github.com/YorikSar/oslo.concurrency
[1] https://review.openstack.org/109779
[2] https://bugs.launchpad.net/oslo/+bug/1327946
[3] https://review.openstack.org/108954

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.concurrency repo review

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:58 PM, Yuriy Taraday yorik@gmail.com wrote:

 Hello, oslo cores.

 I've finished polishing up oslo.concurrency repo at [0] - please take a
 look at it. I used my new version of graduate.sh [1] to generate it, so
 history looks a bit different from what you might be used to.

 I've made as little changes as possible, so there're still some steps left
 that should be done after new repo is created:
 - fix PEP8 errors H405 and E126;
 - use strutils from oslo.utils;
 - remove eventlet dependency (along with random sleeps), but proper
 testing with eventlet should remain;
 - fix for bug [2] should be applied from [3] (although it needs some
 improvements);
 - oh, there's really no limit for this...

 I'll finalize and publish relevant change request to
 openstack-infra/config soon.


Here it is: https://review.openstack.org/112666

Looking forward to any feedback!

 [0] https://github.com/YorikSar/oslo.concurrency
 [1] https://review.openstack.org/109779
 [2] https://bugs.launchpad.net/oslo/+bug/1327946
  [3] https://review.openstack.org/108954

 --

 Kind regards, Yuriy.




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 10:28 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 08/06/2014 05:41 PM, Zane Bitter wrote:

 On 06/08/14 18:12, Yuriy Taraday wrote:

 Well, as per Git author, that's how you should do with not-CVS. You have
 cheap merges - use them instead of erasing parts of history.


 This is just not true.

 http://www.mail-archive.com/dri-devel@lists.sourceforge.net/msg39091.html

 Choice quotes from the author of Git:

 * 'People can (and probably should) rebase their _private_ trees'
 * 'you can go wild on the git rebase thing'
 * 'we use git rebase etc while we work on our problems.'
 * 'git rebase is not wrong.'


 Also relevant:

 ...you must never pull into a branch that isn't already
 in good shape.

 Don't merge upstream code at random points.

 keep your own history clean


And in the very same thread he says "I don't like how you always rebased
patches", and none of these rules should be absolutely black-and-white.
But let's not get drawn into a discussion of what Linus said (or I'll have
to rewatch his ages-old talk at Google to get proper quotes).
In no way do I want to promote exposing private trees with all those
intermediate changes. And my proposal is not against rebasing (although we
could use the -R option of git-review more often to publish what we've
tested and to let reviewers see diffs between patchsets). It is about
letting people keep the history of their work towards giving you a
crystal-clean change request series.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Thu, Aug 7, 2014 at 7:36 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 05:35 PM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec openst...@nemebean.com
 wrote:
  You keep mentioning detached HEAD and reflog.  I have never had to deal
  with either when doing a rebase, so I think there's a disconnect here.
  The only time I see a detached HEAD is when I check out a change from
  Gerrit (and I immediately stick it in a local branch, so it's a
  transitive state), and the reflog is basically a safety net for when I
  horribly botch something, not a standard tool that I use on a daily
 basis.
 
 
  It usually takes some time for me to build trust in utility that does a
 lot
  of different things at once while I need only one small piece of that.
 So I
  usually do smth like:
  $ git checkout HEAD~2
  $ vim
  $ git commit
  $ git checkout mybranch
  $ git rebase --onto HEAD@{1} HEAD~2
  instead of almost the same workflow with interactive rebase.

 I'm sorry, but I don't trust the well-tested, widely used tool that Git
 provides to make this easier so I'm going to reimplement essentially the
 same thing in a messier way myself is a non-starter for me.  I'm not
 surprised you dislike rebases if you're doing this, but it's a solved
 problem.  Use git rebase -i.


I'm sorry, I must've misled you by using the word 'trust' in that sentence.
It's more like understanding. I like to understand how things work. I don't
like treating tools as black boxes. And I also don't like it when a tool
does a lot of things at once with no way back. So yes, I decompose 'rebase
-i' a bit and get a slightly (one command, really) longer workflow. But at
least I can stop at any point and think about whether I'm really finished
with this step. And sometimes interactive rebase works better for me than
this, sometimes it doesn't. It all depends on the situation.

I don't dislike rebases because I sometimes use a slightly longer version of
them. I would be glad to avoid them because they destroy history that could
help me later.

I think I've said all I'm going to say on this.


I hope you don't think that this thread was about rebases vs. merges. It's
about keeping track of your changes without impacting the review process.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-07 Thread Yuriy Taraday
On Fri, Aug 8, 2014 at 3:03 AM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 08/07/2014 04:52 PM, Yuriy Taraday wrote:

  I hope you don't think that this thread was about rebases vs merges.
 It's about keeping track of your changes without impact on review process.


 But if you rebase, what is stopping you from keeping whatever private
 history you want and then rebase the desired changes onto the version that
 the current review tools are using?


That's almost what my proposal is about: allowing a developer to keep
private history and store uploaded changes separately.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
I'd like to stress this to everyone: I DO NOT propose squashing together
commits that should belong to separate change requests. I DO NOT propose
uploading all your changes at once. I DO propose letting developers keep a
local history of all the iterations they go through with a change request:
history that absolutely doesn't matter to anyone but that developer.

On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net wrote:

 Ben Nemec openst...@nemebean.com writes:

  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title Scratch my
  last CR, that was bullshit) while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
   They're basically unnecessary conflicts waiting to happen.

 Yeah, I would never keep broken or unfinished commits around like this.
 In my opinion (as a core Mercurial developer), the best workflow is to
 work on a feature and make small and large commits as you go along. When
 the feature works, you begin squashing/splitting the commits to make
 them into logical pieces, if they aren't already in good shape. You then
 submit the branch for review and iterate on it until it is accepted.


Absolutely true. And it's mostly the same workflow that happens in
OpenStack: you do your cool feature, you carve meaningful, small,
self-contained pieces out of it, you submit a series of change requests.
And nothing in my proposal conflicts with that. It just provides a way to
make the developer's side of this simpler (which is the intent of
git-review, isn't it?) while not changing the external artifacts of one's
work: the same change requests, with the same granularity.


 As a reviewer, it cannot be stressed enough how much small, atomic,
 commits help. Squashing things together into large commits make reviews
 very tricky and removes the possibility of me accepting a later commit
 while still discussing or rejecting earlier commits (cherry-picking).


That's true, too. But please don't think I'm proposing to squash everything
together and push 10k-loc patches. I hate that, too. I'm proposing to let
developers use their tools (Git) in a simpler way.
And the simpler way (for some of us) would be to have one local branch for
every change request, not one branch for the whole series. Switching between
branches is very well supported by Git and doesn't require extra thinking.
Jumping around in a detached HEAD state and editing commits during a rebase
requires remembering all those small details.

 FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than conflicts,
  of course, which are going to be an issue with any long-running change
  no matter how it's submitted. There isn't a ton of git magic involved.

 I agree. The conflicts you talk about are intrinsic to the parallel
 development. Doing a rebase is equivalent to doing a series of merges,
 so if rebase gives you conflicts, you can be near certain that a plain
 merge would give you conflicts too. The same applies other way around.


You disregard other issues that can happen with a patch series. You might
need something more than a rebase. You might need to fix something. You
might need to focus on one commit in the middle and do a huge bunch of
changes in it alone. And I propose to just let a developer keep track of
what they have been doing instead of forcing them to remember all of this.

 So as you may have guessed by now, I'm opposed to adding this to
  git-review. I think it's going to encourage bad committer behavior
  (monolithic commits) and doesn't address a use case I find compelling
  enough to offset that concern.

 I don't understand why this would even be in the domain of git-review. A
 submitter can do the puff magic stuff himself using basic Git commands
 before he submits the collapsed commit.


Isn't that 'puff magic' exactly the domain of git-review? You can upload
your changes with 'git push HEAD:refs/for/master' and do all your rebasing
by yourself, but somehow we ended up with this tool that simplifies common
tasks related to uploading changes to Gerrit.
And (at least for some) such a change would simplify their day-to-day
workflow with regard to uploading changes to Gerrit.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 12:55 PM, Sylvain Bauza sba...@redhat.com wrote:


 Le 06/08/2014 10:35, Yuriy Taraday a écrit :

  I'd like to stress this to everyone: I DO NOT propose squashing together
 commits that should belong to separate change requests. I DO NOT propose to
 upload all your changes at once. I DO propose letting developers to keep
 local history of all iterations they have with a change request. The
 history that absolutely doesn't matter to anyone but this developer.


 Well, I can understand that for ease, we could propose it as an option in
 git-review, but I'm just thinking that if you consider your local Git repo
 as your single source of truth (and not Gerrit), then you just have to make
 another branch and squash your intermediate commits for Gerrit upload only.


That's my proposal: generate such branches automatically. And from this
thread it looks like some people already create them by hand.
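
Done by hand it looks roughly like this (a sketch; branch names are made
up), and that's exactly the dance I'd like git-review to do for me:

    $ git checkout -b bug-1234-wip            # messy local history lives here
    ... hack, commit, hack, commit ...
    $ git checkout -b bug-1234-review master
    $ git merge --squash bug-1234-wip         # stage one clean change
    $ git commit                              # keep the same Change-Id here
    $ git review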


 If you need modifying (because of another iteration), you just need to
 amend the commit message on each top-squasher commit by adding the
 Change-Id on your local branch, and redo the process (make a branch,
 squash, upload) each time you need it.


I don't quite understand the "top-squasher commit" part, but what I'm
suggesting is to automate this process to make users, including myself,
happier.


 Gerrit is cool, it doesn't care about SHA-1s but only Change-Id, so
 cherry-picking and rebasing still works (hurrah)


Yes, and that's the only stable part of those other branches.


 tl;dr: do as many as intermediate commits you want, but just generate a
 Change-ID on the commit you consider as patch, so you just squash the
 intermediate commits on a separate branch copy for Gerrit use only
 (one-way).

 Again, I can understand the above as hacky, so I'm not against your
 change, just emphasizing it as non-necessary (but anyway, everything can be
 done without git-review, even the magical -m option :-) )


I'd even prefer to leave it to the git config file so that it won't get
accidentally enabled unless the user knows what they're doing.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
  I'd like to stress this to everyone: I DO NOT propose squashing together
  commits that should belong to separate change requests. I DO NOT propose
 to
  upload all your changes at once. I DO propose letting developers to keep
  local history of all iterations they have with a change request. The
  history that absolutely doesn't matter to anyone but this developer.

 Right, I understand that may not be the intent, but it's almost
 certainly going to be the end result.  You can't control how people are
 going to use this feature, and history suggests if it can be abused, it
 will be.


Can you please outline the abuse scenario that isn't present nowadays?
People upload huge changes and are encouraged to split them during review.
The same will happen within the proposed workflow. More experienced
developers split their changes into a set of change requests. The very same
will happen within the proposed workflow.


  On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net
 wrote:
 
  Ben Nemec openst...@nemebean.com writes:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title Scratch my
  last CR, that was bullshit) while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
   They're basically unnecessary conflicts waiting to happen.
 
  Yeah, I would never keep broken or unfinished commits around like this.
  In my opinion (as a core Mercurial developer), the best workflow is to
  work on a feature and make small and large commits as you go along. When
  the feature works, you begin squashing/splitting the commits to make
  them into logical pieces, if they aren't already in good shape. You then
  submit the branch for review and iterate on it until it is accepted.
 
 
  Absolutely true. And it's mostly the same workflow that happens in
  OpenStack: you do your cool feature, you carve meaningful small
  self-contained pieces out of it, you submit series of change requests.
  And nothing in my proposal conflicts with it. It just provides a way to
  make developer's side of this simpler (which is the intent of git-review,
  isn't it?) while not changing external artifacts of one's work: the same
  change requests, with the same granularity.
 
 
  As a reviewer, it cannot be stressed enough how much small, atomic,
  commits help. Squashing things together into large commits make reviews
  very tricky and removes the possibility of me accepting a later commit
  while still discussing or rejecting earlier commits (cherry-picking).
 
 
  That's true, too. But please don't think I'm proposing to squash
 everything
  together and push 10k-loc patches. I hate that, too. I'm proposing to let
  developer use one's tools (Git) in a simpler way.
  And the simpler way (for some of us) would be to have one local branch
 for
  every change request, not one branch for the whole series. Switching
  between branches is very well supported by Git and doesn't require extra
  thinking. Jumping around in detached HEAD state and editing commits
 during
  rebase requires remembering all those small details.
 
  FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than conflicts,
  of course, which are going to be an issue with any long-running change
  no matter how it's submitted. There isn't a ton of git magic involved.
 
  I agree. The conflicts you talk about are intrinsic to the parallel
  development. Doing a rebase is equivalent to doing a series of merges,
  so if rebase gives you conflicts, you can be near certain that a plain
  merge would give you conflicts too. The same applies other way around.
 
 
  You disregard other issues that can happen with patch series. You might
  need something more that rebase. You might need to fix something. You
 might
  need to focus on the one commit in the middle and do huge bunch of
 changes
  in it alone. And I propose to just allow developer to keep track of
 what's
  one been doing instead of forcing one to remember all of this.

 This is a separate issue though.  Editing a commit in the middle of a
 series doesn't have to be done at the same time as a rebase to master.


No, this would be done with a separate interactive rebase or that detached
HEAD and reflog dance. I don't see this as anything clearer than doing
proper commits in separate branches.

In fact, not having

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 12:41 AM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 1:17 AM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
  On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com
  wrote:
 
  On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.
 
  I don't understand this.  If it's a complex change that you need
  multiple commits to keep track of locally, why wouldn't reviewers want
  the same thing?  Squashing a bunch of commits together solely so you
  have one review for Gerrit isn't a good thing.  Is it just the warning
  message that git-review prints when you try to push multiple commits
  that is the problem here?
 
 
  When you're developing some big change you'll end up with trying dozens
  of
  different approaches and make thousands of mistakes. For reviewers this
  is
  just unnecessary noise (commit title Scratch my last CR, that was
  bullshit) while for you it's a precious history that can provide basis
  for
  future research or bug-hunting.
 
  So basically keeping a record of how not to do it?
 
 
  Well, yes, you can call version control system a history of failures.
  Because if there were no failures there would've been one omnipotent
 commit
  that does everything you want it to.

 Ideally, no.  In a perfect world every commit would work, so the version
 history would be a number of small changes that add up to this great
 application.  In reality it's a combination of new features, oopses, and
 fixes for those oopses.  I certainly wouldn't describe it as a history
 of failures though.  I would hope the majority of commits to our
 projects are _not_ failures. :-)


Well, new features are merged just to be later fixed and refactored - how
is that not a failure? And we basically do keep a record of how not to do
it in our repositories. Why prevent developers from doing the same on a smaller
scale?

  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to master.
 
 
  The commits themselves are never going to merge to master but that's not
  the only meaning of their life. With current tooling working branch
 ends
  up a patch series that is constantly rewritten with no proper history of
  when did that happen and why. As I said, you can't find roots of bugs in
  your code, you can't dig into old versions of your code (what if you
 need a
  method that you've already created but removed because of some wrong
  suggestion?).

 You're not going to find the root of a bug in your code by looking at an
 old commit that was replaced by some other implementation.  If anything,
 I see that as more confusing.  And if you want to keep old versions of
 your code, either push it to Gerrit or create a new branch before
 changing it further.


So you propose two options:
- store history of your work within Gerrit's patchsets for each change
request, which doesn't fit the "commit often" approach (who'd want to see how I
struggle to fix some bug or write a working test?);
- store history of your work in new branches instead of commits in the same
branch, which... is not how Git is supposed to be used.
And neither of these options provides any proper way of searching through
this history.

Have you ever used bisect? Sometimes I find myself wanting to use it
instead of manually digging through patchsets in Gerrit to find out which
change I made broke some use case I hadn't put in unit tests yet.
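
For illustration, that kind of local bisect session would look roughly like
this (the "good" commit here is just a placeholder):

$ git bisect start
$ git bisect bad HEAD                 # the use case is broken now
$ git bisect good some-old-commit     # it still worked back then
# Git checks out commits in between; test and mark each one:
$ git bisect good                     # or: git bisect bad
$ git bisect reset                    # done, back to where we started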

  They're basically unnecessary conflicts waiting to happen.
 
 
  No. They are your local history. They don't need to be rebased on top of
  master - you can just merge master into your branch and resolve conflicts
  once. After that your autosquashed commit will merge clearly back to
  master.

 Then don't rebase them.  git checkout -b dead-end and move on. :-)


I never proposed to rebase anything. I want to use merge instead of rebase.
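
To be concrete, the flow I have in mind is roughly this (the branch name is
hypothetical, and 'git review' here means the proposed mode):

$ git checkout myfeature
$ git merge master        # sync with master once, resolving conflicts here
$ git review              # uploads one squashed change request; the merge
                          # and all local commits stay in the branch history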

  Merges are one of the strong sides of Git itself (and keeping them very
  easy is one of the founding principles behind it). With current
 workflow
  we
  don't use them at all. master went too far forward? You have to do
 rebase
  and screw all your local history and most likely squash everything
 anyway
  because you don't want to fix commits with known bugs in them. With
  proposed feature you can just do merge once and let 'git review' add
 some
  magic without ever hurting your code.
 
  How do rebases screw up your local history?  All your commits are still
  there after a rebase, they just have a different parent.  I also don't
  see how rebases are all that much worse than merges.  If there are no
  conflicts, rebases are trivial

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
I'll start using pictures now, so let's assume M is the latest commit on
the master.

On Wed, Aug 6, 2014 at 9:31 PM, Zane Bitter zbit...@redhat.com wrote:

 On 04/08/14 19:18, Yuriy Taraday wrote:

 Hello, git-review users!

 I'd like to gather feedback on a feature I want to implement that might
 turn out useful for you.

 I like using Git for development. It allows me to keep track of current
 development process, it remembers everything I ever did with the code
 (and more).


 _CVS_ allowed you to remember everything you ever did; Git is _much_ more
 useful.


  I also really like using Gerrit for code review. It provides clean
 interfaces, forces clean histories (who needs to know that I changed one
 line of code in 3am on Monday?) and allows productive collaboration.


 +1


  What I really hate is having to throw away my (local, precious for me)
 history for all change requests because I need to upload a change to
 Gerrit.


 IMO Ben is 100% correct and, to be blunt, the problem here is your
 workflow.


Well... That's the workflow that was born with Git. Keeping track of all
changes, doing extremely cheap merges, and all that.

Don't get me wrong, I sympathise - really, I do. Nobody likes to change
 their workflow. I *hate* it when I have to change mine. However what you're
 proposing is to modify the tools to make it easy for other people - perhaps
 new developers - to use a bad workflow instead of to learn a good one from
 the beginning, and that would be a colossal mistake. All of the things you
 want to be made easy are currently hard because doing them makes the world
 a worse place.


And when OpenStack switched to Gerrit I was really glad. Instead of ugly

master: ...-M-.-o-o-...
 \   /
  a1-b1-a2-a3-b2-c1-b3-c2

where a[1-3], b[1-3] and c[1-2] are iterations over the same piece of the
feature, we can have pretty

master: ...-M-.o-.-o-...
 \/   /
  A^-B^-C^

where A^, B^ and C^ are the perfect self-contained, independently
reviewable and mergeable pieces of the feature.

And this looked splendid and my workflow seemed clear. Suppose I have something
like:

master: ...-M
 \
  A3-B2-C1

and I need to update B to B3 and C to C2. So I go:
$ git rebase -i M  # and add edit action to B commit
$ vim # do some changes, test them, etc
$ git rebase --continue
now I have

master: ...-M
 \
  A3-B2-C1
\
 B3-C1'

Then I fix C commit, amend it and get:

master: ...-M
 \
  A3-B2-C1
\
 B3-C1'
   \
C2

Now everything's cool, isn't it? But the world isn't fair. And C2 fails a test
that I didn't expect to fail. Or the test suite failed to fail earlier. I'd
like to see whether I broke it just now or whether it was already broken after
the rebase. How do I do that? With your workflow - I don't. I play it smart and
guess where the problem was, or dig into the reflog to find C1' (or C1), etc.
Let's see what else I can't find. After a full iteration over this feature (as
in the first picture) I end up with a total history like this:

master: ...-M
|\
| A1-B1
|\
| A2-B1'
 \
  A3-B1''
   |\
   | B2-C1
\
 B3-C1'
   \
C2

With only A3, B3 and C2 available, the rest are practically unreachable.
Now you find out that something that you were sure was working in B1 is
broken (you'll tell me "hey, you're supposed to have tests for
everything!" - I'll answer: what if you've found a problem in the test
suite that gave a false success?). You can do absolutely nothing to localize
the issue now. Just go and dig into your B code (which might've been
written months ago).
Or you slap your head, realizing that the function you thought was not
needed in B2 is actually needed. Well, you can hope you did upload B2 to
Gerrit and you'll find it there. Or you didn't, because you decided to make
that change the minute after you committed C1, so B3 was created and B2 never
existed...

Now imagine you could somehow link together all As, Bs and Cs. Let's draw
vertical edges between them. And let's transpose the picture, shall we?

master: ...-M
 \
  A1-A2--A3
\  \   \\  \
 B1-B1'-B1''-B2-B3
   \  \   \
C1-C1'-C2

Note that all commits here are absolutely the same as in the previous picture.
They just have additional parents (and consequently different IDs). No
changes to any code in them. No harm done.

So now it looks way better. I can just do:
$ git checkout B3
$ git diff HEAD~
and find my lost function!

Now let's be honest and admit that As, Bs and Cs are essentially branches -
labels your commits have that shift with relevant

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-06 Thread Yuriy Taraday
Oh, looks like we got a bit of a race condition in messages. I hope you
don't mind.


On Wed, Aug 6, 2014 at 11:00 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/06/2014 01:42 PM, Yuriy Taraday wrote:
  On Wed, Aug 6, 2014 at 6:20 PM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/06/2014 03:35 AM, Yuriy Taraday wrote:
  I'd like to stress this to everyone: I DO NOT propose squashing
 together
  commits that should belong to separate change requests. I DO NOT
 propose
  to
  upload all your changes at once. I DO propose letting developers to
 keep
  local history of all iterations they have with a change request. The
  history that absolutely doesn't matter to anyone but this developer.
 
  Right, I understand that may not be the intent, but it's almost
  certainly going to be the end result.  You can't control how people are
  going to use this feature, and history suggests if it can be abused, it
  will be.
 
 
  Can you please outline the abuse scenario that isn't present nowadays?
  People upload huge changes and are encouraged to split them during
 review.
  The same will happen within proposed workflow. More experienced
 developers
  split their change into a set of change requests. The very same will
 happen
  within proposed workflow.

 There will be a documented option in git-review that automatically
 squashes all commits.  People _will_ use that incorrectly because from a
 submitter perspective it's easier to deal with one review than multiple,
 but from a reviewer perspective it's exactly the opposite.


It won't be documented as such. It will carry "use with care" and "years
of Git experience: 3+" stickers. Autosquashing will never be mentioned
there. Only a detailed explanation of how to work with it and (probably)
how it works. No rogue dev will get through it without gaining a true
understanding.

By the way, currently git-review suggests squashing your outstanding
commits, but there is no overwhelming flow of overly huge change requests,
is there?

 On Wed, Aug 6, 2014 at 12:03 PM, Martin Geisler mar...@geisler.net
  wrote:
 
  Ben Nemec openst...@nemebean.com writes:
 
  On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
 
  When you're developing some big change you'll end up with trying
  dozens of different approaches and make thousands of mistakes. For
  reviewers this is just unnecessary noise (commit title Scratch my
  last CR, that was bullshit) while for you it's a precious history
  that can provide basis for future research or bug-hunting.
 
  So basically keeping a record of how not to do it?  I get that, but I
  think I'm more onboard with the suggestion of sticking those dead end
  changes into a separate branch.  There's no particular reason to keep
  them on your working branch anyway since they'll never merge to
 master.
   They're basically unnecessary conflicts waiting to happen.
 
  Yeah, I would never keep broken or unfinished commits around like
 this.
  In my opinion (as a core Mercurial developer), the best workflow is to
  work on a feature and make small and large commits as you go along.
 When
  the feature works, you begin squashing/splitting the commits to make
  them into logical pieces, if they aren't already in good shape. You
 then
  submit the branch for review and iterate on it until it is accepted.
 
 
  Absolutely true. And it's mostly the same workflow that happens in
  OpenStack: you do your cool feature, you carve meaningful small
  self-contained pieces out of it, you submit series of change requests.
  And nothing in my proposal conflicts with it. It just provides a way to
  make developer's side of this simpler (which is the intent of
 git-review,
  isn't it?) while not changing external artifacts of one's work: the
 same
  change requests, with the same granularity.
 
 
  As a reviewer, it cannot be stressed enough how much small, atomic,
  commits help. Squashing things together into large commits make
 reviews
  very tricky and removes the possibility of me accepting a later commit
  while still discussing or rejecting earlier commits (cherry-picking).
 
 
  That's true, too. But please don't think I'm proposing to squash
  everything
  together and push 10k-loc patches. I hate that, too. I'm proposing to
 let
  developer use one's tools (Git) in a simpler way.
  And the simpler way (for some of us) would be to have one local branch
  for
  every change request, not one branch for the whole series. Switching
  between branches is very well supported by Git and doesn't require
 extra
  thinking. Jumping around in detached HEAD state and editing commits
  during
  rebase requires remembering all those small details.
 
  FWIW, I have had long-lived patch series, and I don't really see what
  is so difficult about running git rebase master. Other than
 conflicts,
  of course, which are going to be an issue with any long-running
 change
  no matter how it's submitted. There isn't a ton of git magic
 involved.
 
  I agree. The conflicts you talk

Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 5:15 AM, Angus Salkeld angus.salk...@rackspace.com
wrote:

 On Tue, 2014-08-05 at 03:18 +0400, Yuriy Taraday wrote:
  Hello, git-review users!
 
 
  I'd like to gather feedback on a feature I want to implement that
  might turn out useful for you.
 
 
  I like using Git for development. It allows me to keep track of
  current development process, it remembers everything I ever did with
  the code (and more).
  I also really like using Gerrit for code review. It provides clean
  interfaces, forces clean histories (who needs to know that I changed
  one line of code in 3am on Monday?) and allows productive
  collaboration.
  What I really hate is having to throw away my (local, precious for me)
  history for all change requests because I need to upload a change to
  Gerrit.

 I just create a short-term branch to record this.


I tend to use branches that are squashed down to one commit after the first
upload and that's it. I'd love to keep all history during feature
development, not just the tip of it.


 
  That's why I want to propose making git-review to support the workflow
  that will make me happy. Imagine you could do smth like this:
 
 
  0. create new local branch;
 
 
  master: M--
   \
  feature:  *
 
 
  1. start hacking, doing small local meaningful (to you) commits;
 
 
  master: M--
   \
  feature:  A-B-...-C
 
 
  2. since hacking takes tremendous amount of time (you're doing a Cool
  Feature (tm), nothing less) you need to update some code from master,
  so you're just merging master in to your branch (i.e. using Git as
  you'd use it normally);
 
  master: M---N-O-...
   \\\
  feature:  A-B-...-C-D-...
 
 
  3. and now you get the first version that deserves to be seen by
  community, so you run 'git review', it asks you for desired commit
  message, and poof, magic-magic all changes from your branch is
  uploaded to Gerrit as _one_ change request;
 
  master: M---N-O-...
   \\\E* = uploaded
  feature:  A-B-...-C-D-...-E
 
 
  4. you repeat steps 1 and 2 as much as you like;
  5. and all consecutive calls to 'git review' will show you last commit
  message you used for upload and use it to upload new state of your
  local branch to Gerrit, as one change request.
 
 
  Note that during this process git-review will never run rebase or
  merge operations. All such operations are done by user in local branch
  instead.
 
 
  Now, to the dirty implementations details.
 
 
  - Since suggested feature changes default behavior of git-review,
  it'll have to be explicitly turned on in config
  (review.shadow_branches? review.local_branches?). It should also be
  implicitly disabled on master branch (or whatever is in .gitreview
  config).
  - Last uploaded commit for branch branch-name will be kept in
  refs/review-branches/branch-name.
  - For every call of 'git review' it will find latest commit in
  gerrit/master (or remote and branch from .gitreview), create a new one
  that will have that commit as its parent and a tree of current commit
  from local branch as its tree.
  - While creating new commit, it'll open an editor to fix commit
  message for that new commit taking it's initial contents from
  refs/review-branches/branch-name if it exists.
  - Creating this new commit might involve generating a temporary bare
  repo (maybe even with shared objects dir) to prevent changes to
  current index and HEAD while using bare 'git commit' to do most of the
  work instead of loads of plumbing commands.
 
 
  Note that such approach won't work for uploading multiple change
  request without some complex tweaks, but I imagine later we can
  improve it and support uploading several interdependent change
  requests from several local branches. We can resolve dependencies
  between them by tracking latest merges (if branch myfeature-a has been
  merged to myfeature-b then change request from myfeature-b will depend
  on change request from myfeature-a):
 
  master:M---N-O-...
  \\\-E*
  myfeature-a: A-B-...-C-D-...-E   \
\   \   J* = uploaded
  myfeature-b:   F-...-G-I-J
 
 
  This improvement would be implemented later if needed.
 
 
  I hope such feature seams to be useful not just for me and I'm looking
  forward to some comments on it.

 Hi Yuriy,

 I like my local history matching what is up for review and
 don't value the interim messy commits (I make a short term branch to
 save the history so I can go back to it - if I mess up a merge).


You'll still get this history in those special refs. But in your branch
you'll have your own history.
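
With the ref layout from the proposal, that upload history stays easy to
inspect, e.g. (branch name hypothetical):

$ git show refs/review-branches/myfeature             # state last sent to Gerrit
$ git diff refs/review-branches/myfeature myfeature   # what changed since then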



 Tho' others might love this idea.

 -Angus



-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 3:06 PM, Ryan Brown rybr...@redhat.com wrote:

  On 08/04/2014 07:18 PM, Yuriy Taraday wrote:
  snip

 +1, this is definitely a feature I'd want to see.

 Currently I run two branches bug/LPBUG#-local and bug/LPBUG# where
 the local is my full history of the change and the other branch is the
 squashed version I send out to Gerrit.


And I'm too lazy to keep switching between these branches :)
Great, you're the first to support this feature!

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 5:27 PM, Sylvain Bauza sba...@redhat.com wrote:

 -1 to this as git-review default behaviour.


I'm not suggesting making it the default behavior. As I wrote, there will
definitely be a config option that would turn it on.


 Ideally, branches should be identical in between Gerrit and local Git.


The thing is that there are no feature branches in Gerrit, just a number
of independent commits (patchsets). And you'll even get a log of those
locally in special refs!


 I can understand some exceptions where developers want to work on
 intermediate commits and squash them before updating Gerrit, but in that
 case, I can't see why it needs to be kept locally. If a new patchset has to
 be done on patch A, then the local branch can be rebased interactively on
 last master, edit patch A by doing an intermediate patch, then squash the
 change, and pick the later patches (B to E)


And that works up to the point when your change request evolves for
several months and there's no easy way to dig up why you changed that
default or how this algorithm ended up in such a shape. You can't simply
run bisect to find what you broke 10 patchsets ago. Git was designed to
make keeping branches (most of them local) super easy, and we can't
properly use them.


 That said, I can also understand that developers work their way, and so
 could dislike squashing commits, hence my proposal to have a --no-squash
 option when uploading, but use with caution (for a single branch, how many
 dependencies are outdated in Gerrit because developers work on separate
 branches for each single commit while they could work locally on a single
 branch ? I can't iimagine how often errors could happen if we don't force
 by default to squash commits before sending them to Gerrit)


I don't quite get the reason for a --no-squash option. With current
git-review there's no squashing at all. You either upload all outstanding
commits or you go and change something yourself. With my suggested approach you
don't squash (in terms of rebasing) anything, you just create a new commit
with the very same contents as in your branch.
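
Roughly what that boils down to in plumbing terms (branch and remote names are
hypothetical, and this assumes master has already been merged into the branch
so its tree is up to date):

$ tree=$(git rev-parse "myfeature^{tree}")
$ new=$(git commit-tree "$tree" -p gerrit/master -m "My change request")
$ git push gerrit "$new":refs/for/master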

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 6:49 PM, Ryan Brown rybr...@redhat.com wrote:

 On 08/05/2014 09:27 AM, Sylvain Bauza wrote:
 
  Le 05/08/2014 13:06, Ryan Brown a écrit :
  -1 to this as git-review default behaviour. Ideally, branches should be
  identical in between Gerrit and local Git.

 Probably not as default behaviour (people who don't want that workflow
 would be driven mad!), but I think enough folks would want it that it
 should be available as an option.


This would definitely be a feature that only some users would turn on in
their config files.


 I am well aware this may be straying into feature creep territory, and
 it wouldn't be terrible if this weren't implemented.


I'm not sure I understand what you mean by this...

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 7:51 PM, ZZelle zze...@gmail.com wrote:

 Hi,


 I like the idea  ... with complex change, it could useful for the
 understanding to split it into smaller changes during development.


 Do we need to expose such feature under git review? we could define a new
 subcommand? git reviewflow?


Yes. I think we should definitely make it an enhancement to the 'git review'
command because it's essentially the same 'git review' control flow
with an extra preparation step and a slightly shifted upload source. git-review
is a magic command that does what you need, finishing with a change request
upload. And this is exactly what I want here.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 8:20 PM, Varnau, Steve (Trafodion) 
steve.var...@hp.com wrote:

  Yuriy,



 It looks like this would automate a standard workflow that my group often
 uses: multiple commits, create “delivery” branch, git merge --squash, git
 review.  That looks really useful.



 Having it be repeatable is a bonus.


That's great! I'm glad to hear that there are more and more supporters for
it.


  Per last bullet of the implementation, I would not require not modifying
 current index/HEAD. A checkout back to working branch can be done at the
 end, right?


To make this magic commit we'll have to backtrack HEAD to the latest commit
in master, then load the tree from the latest commit in the feature branch into
the index, and then do the commit. To do this properly without hurting the
worktree, messing up the index or losing HEAD, I think it'd be safer to create
a very small clone. As a bonus, you won't have to stash your local changes or
current index to run 'git review'.
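
For the record, the same effect can be sketched with plumbing and a throwaway
index file, without touching the real worktree, index or HEAD (paths and
branch names here are hypothetical):

$ export GIT_INDEX_FILE=/tmp/review-index
$ git read-tree "myfeature^{tree}"       # load the branch's tree into the temp index
$ git commit-tree $(git write-tree) -p gerrit/master -m "Change request message"
$ unset GIT_INDEX_FILE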

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com wrote:

 On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.

 I don't understand this.  If it's a complex change that you need
 multiple commits to keep track of locally, why wouldn't reviewers want
 the same thing?  Squashing a bunch of commits together solely so you
 have one review for Gerrit isn't a good thing.  Is it just the warning
 message that git-review prints when you try to push multiple commits
 that is the problem here?


When you're developing some big change you'll end up trying dozens of
different approaches and making thousands of mistakes. For reviewers this is
just unnecessary noise (commit title "Scratch my last CR, that was
bullshit") while for you it's a precious history that can provide a basis for
future research or bug-hunting.

Merges are one of the strong sides of Git itself (and keeping them very
easy is one of the founding principles behind it). With the current workflow we
don't use them at all. Master went too far forward? You have to do a rebase
and screw up all your local history, and most likely squash everything anyway
because you don't want to fix commits with known bugs in them. With the
proposed feature you can just do the merge once and let 'git review' add some
magic without ever hurting your code.

And speaking about breaking down change requests, don't forget the support
for change request chains that this feature would lead to. How do you deal
with 5 consecutive change requests that are up for review for half a year?
The only way I could suggest to my colleague at the time was "Erm... Learn
Git and dance with rebases, detached heads and reflogs!" My proposal might
take care of that too.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [git-review] Supporting development in local branches

2014-08-05 Thread Yuriy Taraday
On Wed, Aug 6, 2014 at 1:17 AM, Ben Nemec openst...@nemebean.com wrote:

 On 08/05/2014 03:14 PM, Yuriy Taraday wrote:
  On Tue, Aug 5, 2014 at 10:48 PM, Ben Nemec openst...@nemebean.com
 wrote:
 
  On 08/05/2014 10:51 AM, ZZelle wrote:
  Hi,
 
 
  I like the idea  ... with complex change, it could useful for the
  understanding to split it into smaller changes during development.
 
  I don't understand this.  If it's a complex change that you need
  multiple commits to keep track of locally, why wouldn't reviewers want
  the same thing?  Squashing a bunch of commits together solely so you
  have one review for Gerrit isn't a good thing.  Is it just the warning
  message that git-review prints when you try to push multiple commits
  that is the problem here?
 
 
  When you're developing some big change you'll end up with trying dozens
 of
  different approaches and make thousands of mistakes. For reviewers this
 is
  just unnecessary noise (commit title Scratch my last CR, that was
  bullshit) while for you it's a precious history that can provide basis
 for
  future research or bug-hunting.

 So basically keeping a record of how not to do it?


Well, yes, you can call a version control system a history of failures.
Because if there were no failures there would've been one omnipotent commit
that does everything you want it to.


  I get that, but I
 think I'm more onboard with the suggestion of sticking those dead end
 changes into a separate branch.  There's no particular reason to keep
 them on your working branch anyway since they'll never merge to master.


The commits themselves are never going to merge to master, but that's not
the only meaning of their life. With the current tooling the working branch
ends up as a patch series that is constantly rewritten with no proper history
of when that happened and why. As I said, you can't find the roots of bugs in
your code, you can't dig into old versions of your code (what if you need a
method that you've already created but removed because of some wrong
suggestion?).

 They're basically unnecessary conflicts waiting to happen.


No. They are your local history. They don't need to be rebased on top of
master - you can just merge master into your branch and resolve conflicts
once. After that your autosquashed commit will merge cleanly back to
master.


  Merges are one of the strong sides of Git itself (and keeping them very
  easy is one of the founding principles behind it). With current workflow
 we
  don't use them at all. master went too far forward? You have to do rebase
  and screw all your local history and most likely squash everything anyway
  because you don't want to fix commits with known bugs in them. With
  proposed feature you can just do merge once and let 'git review' add some
  magic without ever hurting your code.

 How do rebases screw up your local history?  All your commits are still
 there after a rebase, they just have a different parent.  I also don't
 see how rebases are all that much worse than merges.  If there are no
 conflicts, rebases are trivial.  If there are conflicts, you'd have to
 resolve them either way.


A merge is a new commit, a new recorded point in history. A rebase rewrites
your commit, replacing it with a new one, without any record in history (of
course there will be a record in the reflog, but there's not much tooling to
work with it). Yes, you just apply your patch to a different version of the
master branch. And then fix some conflicts. And then fix some tests. And
then you end up with a totally different commit.
I totally agree that life's very easy when there are no conflicts and you've
written your whole feature in one go. But that's almost never true.


 I also reiterate my point about not keeping broken commits on your
 working branch.  You know at some point they're going to get
 accidentally submitted. :-)


Well... As long as you use 'git review' to upload CRs, you're safe. If you
do 'git push gerrit HEAD:refs/for/master' you're screwed. But why would you
do that?


 As far as letting git review do magic, how is that better than git
 rebase once and no magic required?  You deal with the conflicts and
 you're good to go.


In terms of the number of manual steps it's the same. If your patch cannot be
merged into master, you merge master into your local branch and you're good to
go. But as I said, a merge will be remembered, a rebase won't. And after that
rebase/merge you might end up with failing tests, and you'll have to
rewrite your commit again with --amend, with no record in history.


 And if someone asks you to split a commit, you can
 do it.  With this proposal you can't, because anything but squashing
 into one commit is going to be a nightmare (which might be my biggest
 argument against this).


You can do it with the new approach as well. See the end of the
proposal. You split your current branch into a number of branches and let
git-review detect which of them depends on which.

 And speaking about breaking down of change

Re: [openstack-dev] [all][gerrit] any way to see my votes?

2014-07-31 Thread Yuriy Taraday
On Thu, Jul 31, 2014 at 2:23 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Hi all,

 in Gerrit UI, I would like to be able to see a separate column with my
 votes, so that I have a clear view of what was missed from my eye.
 I've looked in settings, and failed to find an option for this.

 Is there a way to achieve this?


You can use search for this. label:Code-Review=0,self will get you all
changes that don't have your -2,-1,+1 or +2. The same goes for other labels.
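
For example, a couple of queries of that kind (typed into the Gerrit search
box; the exact filters depend on what you want to track):

status:open reviewer:self label:Code-Review=0,self
status:open is:watched label:Code-Review=0,self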

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-31 Thread Yuriy Taraday
On Wed, Jul 30, 2014 at 11:52 AM, Kyle Mestery mest...@mestery.com wrote:
 and even less
 possibly rootwrap [3] if the security implications can be worked out.

Can you please provide some input on those security implications that are
not worked out yet?
I'm really surprised to see such comments in some ML thread not directly
related to the BP. Why is my spec blocked? Neither spec [1] nor code (which
is available for a really long time now [2] [3]) can get enough reviewers'
attention because of those groundless -2's. Should I abandon these change
requests and file new ones to get some eyes on my code and proposals? It's
just getting ridiculous. Let's take a look at the timeline, shall we?

Mar, 25 - first version of the first part of Neutron code is published at
[2]
Mar, 28 - first reviewers come and it gets -1'd by Mark because of the lack of a
BP (thankfully it wasn't a -2 yet, so reviews continued);
Apr, 1 - Both Oslo [5] and Neturon [6] BPs are created;
Apr, 2 - first version of the second part of Neutron code is published at
[3];
May, 16 - first version of Neutron spec is published at [1];
May, 19 - Neutron spec gets frozen by Mark's -2 (because Oslo BP is not
approved yet);
May, 21 - first part of Neutron code [2] is found generally OK by reviewers;
May, 21 - first version of Oslo spec is published at [4];
May, 29 - a version of the second part of Neutron code [3] is published
that later raises only minor comments by reviewers;
Jun, 5 - both parts of Neutron code [2] [3] get frozen by -2 from Mark
because BP isn't approved yet;
Jun, 23 - Oslo spec [4] is mostly ironed out;
Jul, 8 - Oslo spec [4] is merged, Neutron spec immediately gets +1 and +2;
Jul, 20 - SAD kicks in, no comments from Mark or anyone on blocked change
requests;
Jul, 24 - in response to Kyle's suggestion I'm filing SAD exception;
Jul, 31 - I'm getting final decision as follows: Your BP will extremely
unlikely get to Juno.

Do you see what I see? Code and spec were mostly finished in May (!), where
the "mostly" part is due to the lack of reviewers because of that -2 from Mark.
And 1 month later, when all bureaucratic reasons fall away, nothing happens.
Don't think I didn't try to approach Mark. Don't think I didn't approach Kyle
on this issue. Because I did. But nothing happens, another month passes by,
and I get a general "you know, maybe later" response. No one (but those who
knew about it originally) has even looked at my code for 2 months
because Mark doesn't think (I hope he did think) he should lift the -2, and I'm
getting "why not wait another 3 months?"

What the hell is that? You don't want to land features that don't have
code 2 weeks before Juno-3, I get that. My code was almost finished
3.5 months before that! And you're considering throwing it to Kilo because
of some mystical issues that must've been covered in the Oslo spec [4] (and if
you like, they can be covered in the Neutron spec [1] too), but you have to let
reviewers see it!

I don't think that Mark's actions (lack of them, actually) are what's
expected from a core reviewer. No reaction to requests from the developer whose
code got frozen by his -2. No reaction (at least no visible one) to the PTL's
requests (and Kyle assured me he made those). Should we consider Mark
uncontrollable and unreachable? Why does he have the -2 right in the first
place then? So how should I overcome his inaction? I can recreate the
change requests and hope he won't just -2 them with no comment at all. But
that would just be a sign of a total failure of our shiny bureaucracy.

[1] https://review.openstack.org/93889 - Neutron spec
[2] https://review.openstack.org/82787 - first part of Neutron code
[3] https://review.openstack.org/84667 - second part of Neutron code
[4] https://review.openstack.org/94613 - Oslo spec
[5] https://blueprints.launchpad.net/oslo/+spec/rootwrap-daemon-mode
[6] https://blueprints.launchpad.net/neutron/+spec/rootwrap-daemon-mode

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec exceptions are closed, FPF is August 21

2014-07-31 Thread Yuriy Taraday
On Thu, Jul 31, 2014 at 12:30 PM, Thierry Carrez thie...@openstack.org
wrote:

 Carl Baldwin wrote:
  Let me know if I can help resolve the concerns around rootwrap.  I
  think in this case, the return on investment could be high with a
  relatively low investment.

 I agree the daemon work around oslo.rootwrap is very promising, but this
 is a bit sensitive so we can't rush it. I'm pretty confident
 oslo.rootwrap 1.3 (or 2.0) will be available by the Juno release, but
 realistically I expect most projects to switch to using it during the
 kilo cycle, rather than in the very last weeks of Juno...


Neutron has always been considered to be the first adopter of daemon mode.
Given that all the code on the Neutron side is mostly finished, I think we can
safely switch Neutron first in Juno and wait for Kilo to switch other
projects.


-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
 The current oslo.cfg module provides an easy way to load known
 options/groups from the configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

   Then I want to automatically load group1.key1 and group1.key2,
 without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in the
 parsed configuration for the resulting options?


I can imagine something like this:
1. iterate over undefined groups in config;
2. select groups of interest (e.g. by prefix or some regular expression);
3. register options in them;
4. use those options.

A registered group can be passed to a plugin/library that would register its
options in it.

So the only thing that oslo.config lacks in its interface here is some way
to do the first step. The rest can be overcome with some sugar.
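
A rough sketch of what I mean, with step 1 emulated by a plain ConfigParser
pass over the file (since oslo.config doesn't expose undefined groups); the
file name, group prefix and option names here are all made up:

import ConfigParser

from oslo.config import cfg

CONF = cfg.CONF

parser = ConfigParser.SafeConfigParser()
parser.read(['/etc/myservice/myservice.conf'])
for section in parser.sections():
    if not section.startswith('plugin_'):       # 2. pick groups of interest
        continue
    group = cfg.OptGroup(section)
    CONF.register_group(group)                   # 3. register options in them
    CONF.register_opts([cfg.StrOpt('key1'),
                        cfg.StrOpt('key2')], group=group)

CONF(['--config-file', '/etc/myservice/myservice.conf'])
print(CONF['plugin_foo'].key1)                   # 4. use those options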

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1. key1 and group2.
 key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in the
 parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register its
 options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


The plugin would have to register its options under a fixed group. But what if
we want a number of plugin instances?



 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


I don't exactly know what the original author's intention is, but I don't
generally like the fact that all libraries and plugins wanting to use the
config have to influence the global CONF instance.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1. key1 and group2.
 key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in
 the parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register
 its options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


 Plugin would have to register its options under a fixed group. But what if
 we want a number of plugin instances?


 Presumably something would know a name associated with each instance and
 could pass it to the plugin to use when registering its options.




 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


 I don't exactly know what the original author's intention is but I don't
 generally like the fact that all libraries and plugins wanting to use
 config have to influence global CONF instance.


 That is a common misconception. The use of a global configuration option
 is an application developer choice. The config library does not require it.
 Some of the other modules in the oslo incubator expect a global config
 object because they started life in applications with that pattern, but as
 we move them to libraries we are updating the APIs to take a ConfigObj as
 argument (see oslo.messaging and oslo.db for examples).


What I mean is that instead of passing a ConfigObj and a section name as
arguments to some plugin/lib, it would be cleaner to receive an object that
represents one section of the config, not the whole config at once.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Fri, Jul 25, 2014 at 2:35 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 5:43 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name
 known options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1. key1 and group2.
 key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in
 the parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register
 its options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


 Plugin would have to register its options under a fixed group. But what
 if we want a number of plugin instances?


 Presumably something would know a name associated with each instance and
 could pass it to the plugin to use when registering its options.




 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


 I don't exactly know what the original author's intention is but I don't
 generally like the fact that all libraries and plugins wanting to use
 config have to influence global CONF instance.


 That is a common misconception. The use of a global configuration option
 is an application developer choice. The config library does not require it.
 Some of the other modules in the oslo incubator expect a global config
 object because they started life in applications with that pattern, but as
 we move them to libraries we are updating the APIs to take a ConfigObj as
 argument (see oslo.messaging and oslo.db for examples).


 What I mean is that instead of passing ConfigObj and a section name in
 arguments for some plugin/lib it would be cleaner to receive an object that
 represents one section of config, not the whole config at once.


 The new ConfigFilter class lets you do something like what you want [1].
 The options are visible only in the filtered view created by the plugin, so
 the application can’t see them. That provides better data separation, and
 prevents the options used by the plugin or library from becoming part of
 its API.

 Doug

 [1] http://docs.openstack.org/developer/oslo.config/cfgfilter.html


Yes, it looks like it. I didn't know about that, thanks!
I wonder who should wrap the CONF object into ConfigFilter - the core or the
plugin.
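
For instance, if the plugin does the wrapping, it could look roughly like this
(a sketch based on the linked docs; the class, option and group names are
made up):

from oslo.config import cfg
from oslo.config import cfgfilter


class MyPlugin(object):
    def __init__(self, conf, group):
        # Wrap the conf we were given so that our options stay invisible
        # to the rest of the application.
        self._conf = cfgfilter.ConfigFilter(conf)
        self._conf.register_opt(cfg.StrOpt('endpoint'), group=group)
        self.endpoint = self._conf[group].endpoint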

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-07-23 Thread Yuriy Taraday
Hello.

I'd like to propose making a spec freeze exception for rootwrap-daemon-mode
spec [1].

Its goal is to save agents' execution time by using daemon mode for
rootwrap and thus avoiding python interpreter startup time as well as sudo
overhead for each call. Preliminary benchmark shows 10x+ speedup of the
rootwrap interaction itself.

This spec has a number of supporters from the Neutron team (Carl and Miguel
gave it their +2 and +1) and has all code waiting for review [2], [3], [4].
The only thing that has been blocking its progress is Mark's -2, left when the
oslo.rootwrap spec hadn't been merged yet. Now that's not the case and the code
in oslo.rootwrap is steadily getting approved [5].

[1] https://review.openstack.org/93889
[2] https://review.openstack.org/82787
[3] https://review.openstack.org/84667
[4] https://review.openstack.org/107386
[5]
https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.serialization and oslo.concurrency graduation call for help

2014-07-22 Thread Yuriy Taraday
Hello, Ben.

On Mon, Jul 21, 2014 at 7:23 PM, Ben Nemec openst...@nemebean.com wrote:

 Hi all,

 The oslo.serialization and oslo.concurrency graduation specs are both
 approved, but unfortunately I haven't made as much progress on them as I
 would like.  The serialization repo has been created and has enough acks
 to continue the process, and concurrency still needs to be started.

 Also unfortunately, I am unlikely to make progress on either over the
 next two weeks due to the tripleo meetup and vacation.  As discussed in
 the Oslo meeting last week
 (
 http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-07-18-16.00.log.html
 )
 we would like to continue work on them during that time, so Doug asked
 me to look for volunteers to pick up the work and run with it.

 The current status and next steps for oslo.serialization can be found in
 the bp:
 https://blueprints.launchpad.net/oslo/+spec/graduate-oslo-serialization

 As mentioned, oslo.concurrency isn't started and has a few more pending
 tasks, which are enumerated in the spec:

 http://git.openstack.org/cgit/openstack/oslo-specs/plain/specs/juno/graduate-oslo-concurrency.rst

 Any help would be appreciated.  I'm happy to pick this back up in a
 couple of weeks, but if someone could shepherd it along in the meantime
 that would be great!


I would be happy to work on graduating oslo.concurrency as well as
improving it after that. I like fiddling with OS's, threads and races :)
I can also help finish the work on oslo.serialization (it looks like some
steps are already finished there).

What would be needed to start working on that? I haven't been following the
development processes within Oslo, so I would need someone to answer
questions as they arise.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Proposal Deadline has passed, a note on Spec Approval Deadline

2014-07-21 Thread Yuriy Taraday
Hello, Kyle.

As far as I can see, my spec got left behind. Should I give up hope and move
it to the Kilo dir?


On Mon, Jul 14, 2014 at 3:24 PM, Miguel Angel Ajo Pelayo 
mangel...@redhat.com wrote:

 The oslo-rootwrap spec counterpart of this
 spec has been approved:

 https://review.openstack.org/#/c/94613/

 Cheers :-)

 - Original Message -
  Yurly, thanks for your spec and code! I'll sync with Carl tomorrow on
 this
  and see how we can proceed for Juno around this.
 
 
  On Sat, Jul 12, 2014 at 10:00 AM, Carl Baldwin  c...@ecbaldwin.net 
 wrote:
 
 
 
 
  +1 This spec had already been proposed quite some time ago. I'd like to
 see
  this work get in to juno.
 
  Carl
  On Jul 12, 2014 9:53 AM, Yuriy Taraday  yorik@gmail.com  wrote:
 
 
 
  Hello, Kyle.
 
  On Fri, Jul 11, 2014 at 6:18 PM, Kyle Mestery 
 mest...@noironetworks.com 
  wrote:
 
 
  Just a note that yesterday we passed SPD for Neutron. We have a
  healthy backlog of specs, and I'm working to go through this list and
  make some final approvals for Juno-3 over the next week. If you've
  submitted a spec which is in review, please hang tight while myself
  and the rest of the neutron cores review these. It's likely a good
  portion of the proposed specs may end up as deferred until K
  release, given where we're at in the Juno cycle now.
 
  Thanks!
  Kyle
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Please don't skip my spec on rootwrap daemon support:
  https://review.openstack.org/#/c/93889/
  It got -2'd my Mark McClain when my spec in oslo wasn't approved but now
  that's fixed but it's not easy to get hold of Mark.
  Code for that spec (also -2'd by Mark) is close to be finished and
 requires
  some discussion to get merged by Juno-3.
 
  --
 
  Kind regards, Yuriy.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-12 Thread Yuriy Taraday
On Fri, Jul 11, 2014 at 10:34 PM, Joshua Harlow harlo...@outlook.com
wrote:

 S, how about we can continue this in #openstack-state-management (or
 #openstack-oslo).

 Since I think we've all made the point and different viewpoints visible
 (which was the main intention).

 Overall, I'd like to see asyncio more directly connected into taskflow so
 we can have the best of both worlds.

 We just have to be careful in letting people blow their feet off, vs.
 being to safe; but that discussion I think we can have outside this thread.


That's what I was about to reply to Clint: let the user shoot their own feet,
one can always be creative in doing that anyway.

Sound good?


Sure. TBH I didn't think this thread was the right place for this discussion,
but "coroutines can't do that" kind of set me off :)

-Josh

 On Jul 11, 2014, at 9:04 AM, Clint Byrum cl...@fewbar.com wrote:

  Excerpts from Yuriy Taraday's message of 2014-07-11 03:08:14 -0700:
  On Thu, Jul 10, 2014 at 11:51 PM, Josh Harlow harlo...@outlook.com
 wrote:
  2. Introspection, I hope this one is more obvious. When the coroutine
  call-graph is the workflow there is no easy way to examine it before it
  executes (and change parts of it for example before it executes). This
 is a
  nice feature imho when it's declaratively and explicitly defined, you
 get
  the ability to do this. This part is key to handling upgrades that
  typically happen (for example the a workflow with the 5th task was
 upgraded
  to a newer version, we need to stop the service, shut it off, do the
 code
  upgrade, restart the service and change 5th task from v1 to v1.1).
 
 
  I don't really understand why would one want to examine or change
 workflow
  before running. Shouldn't workflow provide just enough info about which
  tasks should be run in what order?
  In case with coroutines when you do your upgrade and rerun workflow,
 it'll
  just skip all steps that has already been run and run your new version
 of
  5th task.
 
 
  I'm kind of with you on this one. Changing the workflow feels like self
  modifying code.
 
  3. Dataflow: tasks in taskflow can not just declare workflow
 dependencies
  but also dataflow dependencies (this is how tasks transfer things from
 one
  to another). I suppose the dataflow dependency would mirror to
 coroutine
  variables  arguments (except the variables/arguments would need to be
  persisted somewhere so that it can be passed back in on failure of the
  service running that coroutine). How is that possible without an
  abstraction over those variables/arguments (a coroutine can't store
 these
  things in local variables since those will be lost)?It would seem like
 this
  would need to recreate the persistence  storage layer[5] that taskflow
  already uses for this purpose to accomplish this.
 
 
  You don't need to persist local variables. You just need to persist
 results
  of all tasks (and you have to do it if you want to support workflow
  interruption and restart). All dataflow dependencies are declared in the
  coroutine in plain Python which is what developers are used to.
 
 
  That is actually the problem that using declarative systems avoids.
 
 
 @asyncio.couroutine
 def add_ports(ctx, server_def):
 port, volume = yield from
 asyncio.gather(ctx.run_task(create_port(server_def)),
 
 ctx.run_task(create_volume(server_def))
 if server_def.wants_drbd:
 setup_drbd(volume, server_def)
 
 yield from ctx.run_task(boot_server(volume_az, server_def))
 
 
  Now we have a side effect which is not in a task. If booting fails, and
  we want to revert, we won't revert the drbd. This is easy to miss
  because we're just using plain old python, and heck it already even has
  a test case.
 
  I see this type of thing a lot.. we're not arguing about capabilities,
  but about psychological differences. There are pros and cons to both
  approaches.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-11 Thread Yuriy Taraday
On Thu, Jul 10, 2014 at 11:51 PM, Outlook harlo...@outlook.com wrote:

 On Jul 10, 2014, at 3:48 AM, Yuriy Taraday yorik@gmail.com wrote:

 On Wed, Jul 9, 2014 at 7:39 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Yuriy Taraday's message of 2014-07-09 03:36:00 -0700:
  On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow harlo...@yahoo-inc.com
  wrote:
 
   I think clints response was likely better than what I can write here,
 but
   I'll add-on a few things,
  
  
   How do you write such code using taskflow?
   
 @asyncio.coroutine
 def foo(self):
 result = yield from some_async_op(...)
 return do_stuff(result)
  
   The idea (at a very high level) is that users don't write this;
  
   What users do write is a workflow, maybe the following (pseudocode):
  
   # Define the pieces of your workflow.
  
   TaskA():
 def execute():
 # Do whatever some_async_op did here.
  
 def revert():
 # If execute had any side-effects undo them here.
  
   TaskFoo():
  ...
  
   # Compose them together
  
   flow = linear_flow.Flow(my-stuff).add(TaskA(my-task-a),
   TaskFoo(my-foo))
  
 
  I wouldn't consider this composition very user-friendly.
 


 So just to make this understandable, the above is a declarative structure
 of the work to be done. I'm pretty sure it's general agreed[1] in the
 programming world that when declarative structures can be used they should
 be (imho openstack should also follow the same pattern more than it
 currently does). The above is a declaration of the work to be done and the
 ordering constraints that must be followed. Its just one of X ways to do
 this (feel free to contribute other variations of these 'patterns' @
 https://github.com/openstack/taskflow/tree/master/taskflow/patterns).

 [1] http://latentflip.com/imperative-vs-declarative/ (and many many
 others).


I totally agree that the declarative approach is better for workflow
declarations. I'm just saying that we can do it in Python with coroutines
instead. Note that the declarative approach can lead to the reinvention of an
entirely new language, and these flow.add calls can be the first step down
that road.

  I find it extremely user friendly when I consider that it gives you
 clear lines of delineation between the way it should work and what
 to do when it breaks.


 So does plain Python. But for plain Python you don't have to explicitly
 use graph terminology to describe the process.



 I'm not sure where in the above you saw graph terminology. All I see there
 is a declaration of a pattern that explicitly says run things 1 after the
 other (linearly).


As long as the workflow is linear there's no difference whether it's declared
with .add() or with yield from. I'm talking about more complex workflows like
the one I described in my example.


# Submit the workflow to an engine, let the engine do the work to
 execute
   it (and transfer any state between tasks as needed).
  
   The idea here is that when things like this are declaratively
 specified
   the only thing that matters is that the engine respects that
 declaration;
   not whether it uses asyncio, eventlet, pigeons, threads, remote
   workers[1]. It also adds some things that are not (imho) possible with
   co-routines (in part since they are at such a low level) like
 stopping the
   engine after 'my-task-a' runs and shutting off the software,
 upgrading it,
   restarting it and then picking back up at 'my-foo'.
  
 
  It's absolutely possible with coroutines and might provide even clearer
  view of what's going on. Like this:
 
  @asyncio.coroutine
  def my_workflow(ctx, ...):
  project = yield from ctx.run_task(create_project())
  # Hey, we don't want to be linear. How about parallel tasks?
  volume, network = yield from asyncio.gather(
  ctx.run_task(create_volume(project)),
  ctx.run_task(create_network(project)),
  )
  # We can put anything here - why not branch a bit?
  if create_one_vm:
  yield from ctx.run_task(create_vm(project, network))
  else:
  # Or even loops - why not?
  for i in range(network.num_ips()):
  yield from ctx.run_task(create_vm(project, network))
 


 Sorry but the code above is nothing like the code that Josh shared. When
 create_network(project) fails, how do we revert its side effects? If we
 want to resume this flow after reboot, how does that work?

 I understand that there is a desire to write everything in beautiful
 python yields, try's, finally's, and excepts. But the reality is that
 python's stack is lost the moment the process segfaults, power goes out
 on that PDU, or the admin rolls out a new kernel.

 We're not saying asyncio vs. taskflow. I've seen that mistake twice
 already in this thread. Josh and I are suggesting that if there is a
 movement to think about coroutines, there should also be some time spent
 thinking at a high level: how do we resume tasks, revert side effects,
 and control flow?

 If we

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-10 Thread Yuriy Taraday
On Wed, Jul 9, 2014 at 7:39 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Yuriy Taraday's message of 2014-07-09 03:36:00 -0700:
  On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow harlo...@yahoo-inc.com
  wrote:
 
   I think clints response was likely better than what I can write here,
 but
   I'll add-on a few things,
  
  
   How do you write such code using taskflow?
   
 @asyncio.coroutine
 def foo(self):
 result = yield from some_async_op(...)
 return do_stuff(result)
  
   The idea (at a very high level) is that users don't write this;
  
   What users do write is a workflow, maybe the following (pseudocode):
  
   # Define the pieces of your workflow.
  
   TaskA():
 def execute():
 # Do whatever some_async_op did here.
  
 def revert():
 # If execute had any side-effects undo them here.
  
   TaskFoo():
  ...
  
   # Compose them together
  
   flow = linear_flow.Flow(my-stuff).add(TaskA(my-task-a),
   TaskFoo(my-foo))
  
 
  I wouldn't consider this composition very user-friendly.
 

 I find it extremely user friendly when I consider that it gives you
 clear lines of delineation between the way it should work and what
 to do when it breaks.


So does plain Python. But for plain Python you don't have to explicitly use
graph terminology to describe the process.


# Submit the workflow to an engine, let the engine do the work to
 execute
   it (and transfer any state between tasks as needed).
  
   The idea here is that when things like this are declaratively specified
   the only thing that matters is that the engine respects that
 declaration;
   not whether it uses asyncio, eventlet, pigeons, threads, remote
   workers[1]. It also adds some things that are not (imho) possible with
   co-routines (in part since they are at such a low level) like stopping
 the
   engine after 'my-task-a' runs and shutting off the software, upgrading
 it,
   restarting it and then picking back up at 'my-foo'.
  
 
  It's absolutely possible with coroutines and might provide even clearer
  view of what's going on. Like this:
 
  @asyncio.coroutine
  def my_workflow(ctx, ...):
  project = yield from ctx.run_task(create_project())
  # Hey, we don't want to be linear. How about parallel tasks?
  volume, network = yield from asyncio.gather(
  ctx.run_task(create_volume(project)),
  ctx.run_task(create_network(project)),
  )
  # We can put anything here - why not branch a bit?
  if create_one_vm:
  yield from ctx.run_task(create_vm(project, network))
  else:
  # Or even loops - why not?
  for i in range(network.num_ips()):
  yield from ctx.run_task(create_vm(project, network))
 

 Sorry but the code above is nothing like the code that Josh shared. When
 create_network(project) fails, how do we revert its side effects? If we
 want to resume this flow after reboot, how does that work?

 I understand that there is a desire to write everything in beautiful
 python yields, try's, finally's, and excepts. But the reality is that
 python's stack is lost the moment the process segfaults, power goes out
 on that PDU, or the admin rolls out a new kernel.

 We're not saying asyncio vs. taskflow. I've seen that mistake twice
 already in this thread. Josh and I are suggesting that if there is a
 movement to think about coroutines, there should also be some time spent
 thinking at a high level: how do we resume tasks, revert side effects,
 and control flow?

 If we embed taskflow deep in the code, we get those things, and we can
 treat tasks as coroutines and let taskflow's event loop be asyncio just
 the same. If we embed asyncio deep into the code, we don't get any of
 the high level functions and we get just as much code churn.

  There's no limit to coroutine usage. The only problem is the library that
  would bind everything together.
  In my example run_task will have to be really smart, keeping track of all
  started tasks, results of all finished ones, skipping all tasks that have
  already been done (and substituting already generated results).
  But all of this is doable. And I find this way of declaring workflows way
  more understandable than whatever would it look like with Flow.add's
 

 The way the flow is declared is important, as it leads to more isolated
 code. The single place where the flow is declared in Josh's example means
 that the flow can be imported, the state deserialized and inspected,
 and resumed by any piece of code: an API call, a daemon start up, an
 admin command, etc.

 I may be wrong, but it appears to me that the context that you built in
 your code example is hard, maybe impossible, to resume after a process
 restart unless _every_ task is entirely idempotent and thus can just be
 repeated over and over.


I must not have stressed this enough in the last paragraph. The point is to
make the run_task method very smart. It should do something like this (yes, I'm
better 

Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-09 Thread Yuriy Taraday
On Tue, Jul 8, 2014 at 11:31 PM, Joshua Harlow harlo...@yahoo-inc.com
wrote:

 I think clints response was likely better than what I can write here, but
 I'll add-on a few things,


 How do you write such code using taskflow?
 
   @asyncio.coroutine
   def foo(self):
   result = yield from some_async_op(...)
   return do_stuff(result)

 The idea (at a very high level) is that users don't write this;

 What users do write is a workflow, maybe the following (pseudocode):

 # Define the pieces of your workflow.

 TaskA():
   def execute():
   # Do whatever some_async_op did here.

   def revert():
   # If execute had any side-effects undo them here.

 TaskFoo():
...

 # Compose them together

 flow = linear_flow.Flow(my-stuff).add(TaskA(my-task-a),
 TaskFoo(my-foo))


I wouldn't consider this composition very user-friendly.


 # Submit the workflow to an engine, let the engine do the work to execute
 it (and transfer any state between tasks as needed).

 The idea here is that when things like this are declaratively specified
 the only thing that matters is that the engine respects that declaration;
 not whether it uses asyncio, eventlet, pigeons, threads, remote
 workers[1]. It also adds some things that are not (imho) possible with
 co-routines (in part since they are at such a low level) like stopping the
 engine after 'my-task-a' runs and shutting off the software, upgrading it,
 restarting it and then picking back up at 'my-foo'.


It's absolutely possible with coroutines and might provide an even clearer
view of what's going on. Like this:

@asyncio.coroutine
def my_workflow(ctx, ...):
project = yield from ctx.run_task(create_project())
# Hey, we don't want to be linear. How about parallel tasks?
volume, network = yield from asyncio.gather(
ctx.run_task(create_volume(project)),
ctx.run_task(create_network(project)),
)
# We can put anything here - why not branch a bit?
if create_one_vm:
yield from ctx.run_task(create_vm(project, network))
else:
# Or even loops - why not?
for i in range(network.num_ips()):
yield from ctx.run_task(create_vm(project, network))

There's no limit to coroutine usage. The only problem is the library that
would bind everything together.
In my example run_task will have to be really smart, keeping track of all
started tasks and the results of all finished ones, skipping all tasks that
have already been done (and substituting the already generated results).
But all of this is doable. And I find this way of declaring workflows way
more understandable than whatever it would look like with Flow.add's.
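
To illustrate (a rough sketch only, not code from any real review; the explicit
key argument and the dict-like storage are my simplifications, and deriving
keys plus persisting results properly is exactly the "smart" part run_task
would have to handle):

import asyncio


class WorkflowContext(object):
    """Context whose run_task persists task results.

    On a rerun of the same workflow, tasks that already have a stored
    result are skipped and their saved results are substituted.
    """

    def __init__(self, storage):
        self._storage = storage  # any dict-like persistent store

    @asyncio.coroutine
    def run_task(self, key, coro):
        if key in self._storage:
            coro.close()              # already done on a previous run
            return self._storage[key]
        result = yield from coro
        self._storage[key] = result   # persist before moving on
        return result

With something like this behind run_task, rerunning the workflow after an
interruption replays stored results for finished tasks and only actually
executes the ones that never completed.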

Hope that helps make it a little more understandable :)

 -Josh


PS: I've just found all your emails in this thread in the Spam folder, so it's
probable that not everybody has read them.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repo

2014-07-04 Thread Yuriy Taraday
Every commit landing in every repo should be synchronized to GitHub. I
filed a bug to track this issue here:
https://bugs.launchpad.net/openstack-ci/+bug/1337735


On Fri, Jul 4, 2014 at 3:30 AM, Salvatore Orlando sorla...@nicira.com
wrote:

 git.openstack.org has an up-to-date log:
 http://git.openstack.org/cgit/openstack/neutron-specs/log/

 Unfortunately I don't know what the policy is for syncing repos with
 github.

 Salvatore


 On 4 July 2014 00:34, Sumit Naiksatam sumitnaiksa...@gmail.com wrote:

 Is this still the right repo for this:
 https://github.com/openstack/neutron-specs

 The latest commit on the master branch shows June 25th timestamp, but
 we have had a lots of patches merging after that. Where are those
 going?

 Thanks,
 ~Sumit.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 python27

2014-07-04 Thread Yuriy Taraday
On Fri, Jul 4, 2014 at 12:57 PM, Amit Das amit@cloudbyte.com wrote:

 Hi All,

 I can see a lot of cinder gerrit commits that pass through the
 gate-cinder-python26  gate-cinder-python27 successfully.

 ref - https://github.com/openstack/cinder/commits/master

 Whereas its not the case for my patch
 https://review.openstack.org/#/c/102511/.

 I updated the master  rebased that to my branch before doing a gerrit
 review.

 Am i missing any steps ?


Does 'tox -e py26' work on your local machine? It should fail just like the one
in the gate.
You should follow the instructions it prints in the log just before
'InvocationError': run tools/config/generate_sample.sh.
The issue is that you've added some options to your driver but didn't
update etc/cinder/cinder.conf.sample.
After generating the new sample you should review its diff (git diff
etc/cinder/cinder.conf.sample) and add it to your commit.
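
Roughly, the whole sequence looks like this (from the repo root; this is just
the gist, exact script options may differ):

tox -e py26                              # reproduce the gate failure locally
./tools/config/generate_sample.sh        # regenerate the sample config
git diff etc/cinder/cinder.conf.sample   # review the regenerated sample
git commit -a --amend                    # fold it into your patch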


-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-03 Thread Yuriy Taraday
On Thu, Jul 3, 2014 at 5:00 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-02 22:19:29 +0400 (+0400), Yuriy Taraday wrote:
 [...]
  It looks like mirrors will have to bear having a number of dead branches
 in
  them - one for each release.

 A release manager will delete proposed/juno when stable/juno is
 branched from it, and branch deletions properly propagate to our
 official mirrors (you may have to manually remove any local tracking
 branches you've created, but that shouldn't be much of a concern).


I mean other mirrors, like the one we have in our local network. Given the
not-so-good connection to upstream repos (the reason we have this mirror in
the first place), I can't think of a reliable way to clean them up.
Where can I find the scripts that propagate deletions to official mirrors?
Maybe I can get some ideas from them.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-02 Thread Yuriy Taraday
Hello.

On Fri, Jun 27, 2014 at 4:44 PM, Thierry Carrez thie...@openstack.org
wrote:

 For all those reasons, we decided at the last summit to use unique
 pre-release branches, named after the series (for example,
 proposed/juno). That branch finally becomes stable/juno at release
 time. In parallel, we abandoned the usage of release branches for
 development milestones, which are now tagged directly on the master
 development branch.


I know that this question has been raised before, but I still would like to
clarify this.
Why do we need these short-lived 'proposed' branches in any form? Why can't
we just use release branches for this and treat them as stable once an
appropriate tag is added to some commit in them?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Oslo.cfg] Configuration string substitution

2014-07-01 Thread Yuriy Taraday
Hello

On Fri, Jun 20, 2014 at 12:48 PM, Radoslav Gerganov rgerga...@vmware.com
wrote:

 Hi,

  On Wed, Jun 18, 2014 at 4:47 AM, Gary Kotton gkot...@vmware.com wrote:
   Hi,
   I have encountered a problem with string substitution with the nova
   configuration file. The motivation was to move all of the glance
 settings
   to
   their own section (https://review.openstack.org/#/c/100567/). The
   glance_api_servers had default setting that uses the current
 glance_host
   and
   the glance port. This is a problem when we move to the ‘glance’
 section.
   First and foremost I think that we need to decide on how we should
 denote
   the string substitutions for group variables and then we can dive into
   implementation details. Does anyone have any thoughts on this?
   My thinking is that when we use we should use a format of
 $group.key.
   An
   example is below.
  
 
  Do we need to set the variable off somehow to allow substitutions that
  need the literal '.' after a variable? How often is that likely to
  come up?

 I would suggest to introduce a different form of placeholder for this like:

   default=['${glance.host}:${glance.port}']

 similar to how variable substitutions are handled in Bash.  IMO, this is
 more readable and easier to parse.

 -Rado


I couldn't help but try implementing this:
https://review.openstack.org/103884

This change allows both ${glance.host} and ${.host} variants.
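
For illustration, with that change an option definition could look like this
(a sketch of the proposed syntax only, not code from any existing module):

from oslo.config import cfg

glance_opts = [
    cfg.StrOpt('host', default='localhost'),
    cfg.IntOpt('port', default=9292),
    # options from the same group can be referenced either fully
    # qualified or with the leading-dot shorthand
    cfg.ListOpt('api_servers', default=['${glance.host}:${.port}']),
]

CONF = cfg.CONF
CONF.register_opts(glance_opts, group='glance')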

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Refactor ISCSIDriver to support other iSCSI transports besides TCP

2014-06-16 Thread Yuriy Taraday
Hello, Shlomi.


On Tue, Mar 25, 2014 at 7:07 PM, Shlomi Sasson shlo...@mellanox.com wrote:

  I want to share with the community the following challenge:

 Currently, Vendors who have their iSCSI driver, and want to add RDMA
 transport (iSER), cannot leverage their existing plug-in which inherit from
 iSCSI

 And must modify their driver or create an additional plug-in driver which
 inherit from iSER, and copy the exact same code.



 Instead I believe a simpler approach is to add a new attribute to
 ISCSIDriver to support other iSCSI transports besides TCP, which will allow
 minimal changes to support iSER.

 The existing ISERDriver code will be removed, this will eliminate
 significant code and class duplication, and will work with all the iSCSI
 vendors who supports both TCP and RDMA without the need to modify their
 plug-in drivers.


I remember Ann working on https://review.openstack.org/#/c/45393 and it has
landed since then.

That change leaves ISERDriver just for backward compatibility and allows
ISCSIDriver and any of its descendants to use iscsi_helper='iseradm' to
provide iSER usage.
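For instance, something along these lines in cinder.conf should be all a
deployment needs to run an existing iSCSI driver over iSER (illustrative
snippet, the driver path depends on the vendor):

[DEFAULT]
volume_driver = <the vendor's existing iSCSI driver class>
iscsi_helper = iseradm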

Aren't those changes enough for this? What else is needed here?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][ceilometer][glance][all] Loading clients from a CONF object

2014-06-15 Thread Yuriy Taraday
On Fri, Jun 13, 2014 at 3:27 AM, Jamie Lennox jamielen...@redhat.com
wrote:

   And as we're going to have to live with this for a while, I'd rather use
  the more clear version of this in keystone instead of the Heat stanzas.

 Anyone else have an opinion on this?


I like keeping section names simple and clear, but it looks like you
should add some common section ([services_common]?) since 6 out of 6
options in your example will very probably be repeated for every client.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Yuriy Taraday
This behavior of os.pipe() has changed in Python 3.x, so it won't be an
issue on newer Python (if only it were accessible to us).

From the looks of it you can mitigate the problem by running libguestfs
requests in a separate process (multiprocessing.managers comes to mind).
This way the only descriptors the child process could theoretically inherit
would be the long-lived pipes to the main process, although they won't leak
because they should be marked with CLOEXEC before any libguestfs request is
run. The other benefit is that this separate process won't be busy opening and
closing tons of fds, so the problem with inheriting will be avoided.


On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com wrote:

   Will this patch of Python fix your problem? 
 *http://bugs.python.org/issue7213
 http://bugs.python.org/issue7213*


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code use tpool
 to invoke libguestfs, and one external commend is executed in another green
 thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call libguestfs routine
 in greenthread, rather than another native thread. But it will impact the
 performance very much. So I do not think that is an acceptable solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and analysis. According to the issue
 description and happen scenario(
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ),  if that's the case,  concurrent mutiple KVM spawn instances(*with
 both config drive and data injection enabled*) are triggered, the issue
 can be very likely to happen.
 As in libvirt/driver.py _create_image method, right after iso making 
 cdb.make_drive,
 the driver will attempt data injection which will call the libguestfs
 launch in another thread.

 Looks there were also a couple of libguestfs hang issues from Launch pad
 as below. . I am not sure if libguestfs itself can have certain mechanism
 to free/close the fds that inherited from parent process instead of require
 explicitly calling the tear down. Maybe open a defect to libguestfs to see
 what their thoughts?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  *From:* Qin Zhao chaoc...@gmail.com
 *Date:* 2014-05-31 01:25
  *To:* OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run Icehouse code, I encountered a strange problem. The
 nova-compute service becomes stuck, when I boot instances. I report this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking several days, I feel I know its root cause. This bug
 should be a deadlock problem cause by pipe fd leaking.  I draw a diagram to
 illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720

 However, I have not find a very good solution to prevent this deadlock.
 This problem is related with Python runtime, libguestfs, and eventlet. The
 situation is a little complicated. Is there any expert who can help me to
 look for a solution? I will appreciate for your help!

 --
 Qin Zhao


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Qin Zhao

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-05 Thread Yuriy Taraday
Please take a look at
https://docs.python.org/2.7/library/multiprocessing.html#managers -
everything is already implemented there.
All you need is to start one manager that would serve all your requests to
libguestfs. The implementation in the stdlib will provide you with all
exceptions and return values with minimal code changes on the Nova side.
Create a new Manager, register a libguestfs endpoint in it and call
start(). It will spawn a separate process that will speak with the calling
process over a very simple RPC.
From the looks of it all you need to do is replace the tpool.Proxy calls in
the VFSGuestFS.setup method with calls to this new Manager.
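
A very rough sketch of what I mean (names are mine and the real code would
need error handling plus a sensible lifecycle for the helper process):

from multiprocessing import managers

import guestfs


class GuestFSManager(managers.BaseManager):
    """All GuestFS handles created through this manager live in its
    child process; method calls are forwarded over a simple pipe RPC."""


GuestFSManager.register('GuestFS', callable=guestfs.GuestFS)


def start_manager():
    manager = GuestFSManager()
    manager.start()  # forks the single long-lived helper process
    return manager

# usage sketch:
#   _manager = start_manager()
#   handle = _manager.GuestFS()  # proxy to a GuestFS in the helper process
#   handle.add_drive_opts(image_path, readonly=0)
#   handle.launch()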


On Thu, Jun 5, 2014 at 7:21 PM, Qin Zhao chaoc...@gmail.com wrote:

 Hi Yuriy,

 Thanks for reading my bug!  You are right. Python 3.3 or 3.4 should not
 have this issue, since they have can secure the file descriptor. Before
 OpenStack move to Python 3, we may still need a solution. Calling
 libguestfs in a separate process seems to be a way. This way, Nova code can
 close those fd by itself, not depending upon CLOEXEC. However, that will be
 an expensive solution, since it requires a lot of code change. At least we
 need to write code to pass the return value and exception between these two
 processes. That will make this solution very complex. Do you agree?


 On Thu, Jun 5, 2014 at 9:39 PM, Yuriy Taraday yorik@gmail.com wrote:

 This behavior of os.pipe() has changed in Python 3.x so it won't be an
 issue on newer Python (if only it was accessible for us).

 From the looks of it you can mitigate the problem by running libguestfs
 requests in a separate process (multiprocessing.managers comes to mind).
 This way the only descriptors child process could theoretically inherit
 would be long-lived pipes to main process although they won't leak because
 they should be marked with CLOEXEC before any libguestfs request is run.
 The other benefit is that this separate process won't be busy opening and
 closing tons of fds so the problem with inheriting will be avoided.


 On Thu, Jun 5, 2014 at 2:17 PM, laserjetyang laserjety...@gmail.com
 wrote:

   Will this patch of Python fix your problem? 
 *http://bugs.python.org/issue7213
 http://bugs.python.org/issue7213*


 On Wed, Jun 4, 2014 at 10:41 PM, Qin Zhao chaoc...@gmail.com wrote:

  Hi Zhu Zhu,

 Thank you for reading my diagram!   I need to clarify that this problem
 does not occur during data injection.  Before creating the ISO, the driver
 code will extend the disk. Libguestfs is invoked in that time frame.

 And now I think this problem may occur at any time, if the code use
 tpool to invoke libguestfs, and one external commend is executed in another
 green thread simultaneously.  Please correct me if I am wrong.

 I think one simple solution for this issue is to call libguestfs
 routine in greenthread, rather than another native thread. But it will
 impact the performance very much. So I do not think that is an acceptable
 solution.



  On Wed, Jun 4, 2014 at 12:00 PM, Zhu Zhu bjzzu...@gmail.com wrote:

   Hi Qin Zhao,

 Thanks for raising this issue and analysis. According to the issue
 description and happen scenario(
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720
 ),  if that's the case,  concurrent mutiple KVM spawn instances(*with
 both config drive and data injection enabled*) are triggered, the
 issue can be very likely to happen.
 As in libvirt/driver.py _create_image method, right after iso making 
 cdb.make_drive,
 the driver will attempt data injection which will call the libguestfs
 launch in another thread.

 Looks there were also a couple of libguestfs hang issues from Launch
 pad as below. . I am not sure if libguestfs itself can have certain
 mechanism to free/close the fds that inherited from parent process instead
 of require explicitly calling the tear down. Maybe open a defect to
 libguestfs to see what their thoughts?

  https://bugs.launchpad.net/nova/+bug/1286256
 https://bugs.launchpad.net/nova/+bug/1270304

 --
  Zhu Zhu
 Best Regards


  *From:* Qin Zhao chaoc...@gmail.com
 *Date:* 2014-05-31 01:25
  *To:* OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 *Subject:* [openstack-dev] [Nova] nova-compute deadlock
Hi all,

 When I run Icehouse code, I encountered a strange problem. The
 nova-compute service becomes stuck, when I boot instances. I report this
 bug in https://bugs.launchpad.net/nova/+bug/1313477.

 After thinking several days, I feel I know its root cause. This bug
 should be a deadlock problem cause by pipe fd leaking.  I draw a diagram 
 to
 illustrate this problem.
 https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960h=720

 However, I have not find a very good solution to prevent this
 deadlock. This problem is related with Python runtime, libguestfs, and
 eventlet. The situation is a little complicated. Is there any

Re: [openstack-dev] [all] Hide CI comments in Gerrit

2014-05-29 Thread Yuriy Taraday
On Tue, May 27, 2014 at 6:07 PM, James E. Blair jebl...@openstack.org wrote:

 I wonder if it would
 be possible to detect them based on the presence of a Verified vote?


Not all CIs always add a vote. Only 3 or so of Neutron's over 9000 CIs put
their +/-1s on the change.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Policy for linking bug or bp in commit message

2014-05-29 Thread Yuriy Taraday
On Wed, May 28, 2014 at 3:54 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, May 23, 2014 at 1:13 PM, Nachi Ueno na...@ntti3.com wrote:

 (2) Avoid duplication of works
 I have several experience of this.  Anyway, we should encourage people
 to check listed bug before
 writing patches.


 That's a very good point, but I don't think requiring a bug/bp for every
 patch is a good way to address this. Perhaps there is another way.


We can require developers to either link to a bp/bug or explicitly add a
Minor-fix line to the commit message.
I think that would force the commit author to at least think about whether
the commit is worth filing a bug/bp for or not.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Divergence of *-specs style checking

2014-05-20 Thread Yuriy Taraday
Great idea!

On Mon, May 19, 2014 at 8:38 PM, Alexis Lee alex...@hp.com wrote:

 Potentially the TITLES structure could
 be read from a per-project YAML file and the test itself could be drawn
 from some common area?


I think you can get that data from the template.rst file by parsing it and
analyzing the resulting tree.
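
Something along these lines should work (a sketch; I haven't run it against
the actual specs templates):

from docutils import core, nodes


def titles_from_template(path):
    """Build a {section title: [subsection titles]} dict from template.rst."""
    with open(path) as f:
        doctree = core.publish_doctree(f.read())
    titles = {}
    for section in doctree.children:
        if not isinstance(section, nodes.section):
            continue
        name = section.next_node(nodes.title).astext()
        titles[name] = [sub.next_node(nodes.title).astext()
                        for sub in section.children
                        if isinstance(sub, nodes.section)]
    return titles

The test itself could then compare that dict against the titles found in each
proposed spec instead of keeping a hardcoded TITLES structure per project.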

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Searching for docs reviews in Gerrit

2014-05-18 Thread Yuriy Taraday
Hello, Anne.


On Sat, May 17, 2014 at 7:03 AM, Anne Gentle a...@openstack.org wrote:

 file:section_networking-adv-config.xml
 project:openstack/openstack-manuals


As it's stated in the manual: The regular expression pattern must start
with ^. That means it will only match files whose paths start with a string
matching this regex, not ones that merely include it.


 nor does:
 file:docs/admin-guide-cloud/networking/section_networking-adv-config.xml
 project:openstack/openstack-manuals


You've misspelled the first dir name: it's doc, not docs. With that fixed, the query works.
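
So for the record, a query of this shape does find it:

file:^doc/admin-guide-cloud/networking/section_networking-adv-config.xml project:openstack/openstack-manuals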

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Debugging tox tests with pdb?

2014-05-07 Thread Yuriy Taraday
Hello, Eric.


On Wed, May 7, 2014 at 10:15 PM, Pendergrass, Eric
eric.pendergr...@hp.com wrote:

 Hi, I’ve read much of the documentation around Openstack tests, tox, and
 testr.  All I’ve found indicates debugging can be done, but only by running
 the entire test suite.



 I’d like the ability to run a single test module with pdb.set_trace()
 breakpoints inserted, then step through the test.  I’ve tried this but it
 causes test failures on a test that would otherewise succeed.  The command
 I use to run the test is similar to this:  tox -e py27 test_module_name



 Is there some way to debug single tests that I haven’t found?  If not, how
 is everyone doing test development without the ability to debug?


You can do it as easily as:
.tox/py27/bin/python -m testtools.run test_module_name
Run that way (outside testr), your pdb.set_trace() breakpoints will drop you
into the debugger as expected.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Yuriy Taraday
On Thu, May 1, 2014 at 8:17 PM, Salvatore Orlando sorla...@nicira.com wrote:

 The patch you've been looking at just changes the way in which SystemExit
 is used, it does not replace it with sys.exit.
 In my experience sys.exit was causing unit test threads to interrupt
 abruptly, whereas SystemExit was being caught by the test runner and
 handled.


According to https://docs.python.org/2.7/library/sys.html#sys.exit ,
sys.exit(n) is equivalent to raise SystemExit(n); this can be confirmed
in the source code here:
http://hg.python.org/cpython/file/2.7/Python/sysmodule.c#l206
If there's any difference in behavior, it seems to be a problem with the test
runner. For example, it might be mocking sys.exit somehow.

 I find therefore a bit strange that you're reporting what appears to be
 the opposite behaviour.

 Maybe if you could share the code you're working on we can have a look at
 it and see what's going on.


I'd suggest finding out what the difference is between your two cases.

Coming back to the topic, I'd prefer using the standard library call because it
can be mocked for testing.
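
For example, something like this works fine in a unit test (illustrative only;
main() here stands in for whatever entry point is under test):

import sys
import unittest

import mock


def main(argv):
    # stand-in for a real entry point that bails out on bad input
    if not argv:
        sys.exit(1)


class ExitTestCase(unittest.TestCase):
    @mock.patch.object(sys, 'exit')
    def test_exits_on_empty_argv(self, exit_mock):
        main([])
        exit_mock.assert_called_once_with(1)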

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron]SystemExit() vs sys.exit()?

2014-05-01 Thread Yuriy Taraday
On Thu, May 1, 2014 at 10:41 PM, Paul Michali (pcm) p...@cisco.com wrote:

 ==
 FAIL: process-returncode
 tags: worker-1
 --
 *Binary content:*
 *  traceback (test/plain; charset=utf8)*
 ==
 FAIL: process-returncode
 tags: worker-0
 --
 *Binary content:*
 *  traceback (test/plain; charset=utf8)*


process-returncode failures mean that a child process (the subunit one) exited
with a nonzero code.


 It looks like there was some traceback, but it doesn’t show it. Any ideas
 how to get around this, as it makes it hard to troubleshoot these types of
 failures?


Somehow the traceback got MIME type test/plain. I guess testr doesn't print
this type of attachment to the screen. You can try to see what's in the
.testrepository dir, but I doubt there will be anything useful there.

I think this behavior is expected: the subunit process gets terminated because
of the uncaught SystemExit exception, and testr reports that as an error.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-26 Thread Yuriy Taraday
On Fri, Apr 25, 2014 at 11:41 PM, Zaro zaro0...@gmail.com wrote:

 Do you mean making it default to WIP on every patchset that gets
 uploaded?


No. I mean carrying WIP over to all new patch sets once it is set, just like
Code-Review -2 is handled by default.

Gerrit 2.8 does allow you to carry the same label score forward[1] if
 it's either a trivial rebase or no code has changed.  We plan to set
 these options for the 'Code-Review' label, but not the Workflow label.

 [1]
 https://gerrit-review.googlesource.com/Documentation/config-labels.html


It looks like the copyMinScore option for the Workflow label will do what I'm
talking about.
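
If I read the docs right, the stanza in project.config would look roughly like
this (label values are illustrative, based on the upgrade announcement):

[label "Workflow"]
    value = -1 Work In Progress
    value = 0 Ready for reviews
    value = +1 Approved
    copyMinScore = true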

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Yuriy Taraday
Hello.

On Wed, Apr 23, 2014 at 2:40 AM, James E. Blair jebl...@openstack.org wrote:

 * The new Workflow label will have a -1 Work In Progress value which
   will replace the Work In Progress button and review state.  Core
   reviewers and change owners will have permission to set that value
   (which will be removed when a new patchset is uploaded).


Wouldn't it be better to make this label more persistent?
As I remember, there were some ML threads about keeping the WIP mark across
patch sets. There were even talks about changing git-review to support this.
How about we make it better with the new version of Gerrit?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Yuriy Taraday
On Fri, Apr 25, 2014 at 8:10 PM, Zaro zaro0...@gmail.com wrote:

 Gerrit 2.8 allows setting label values on patch sets either thru the
 command line[1] or REST API[2].  Since we will setup WIP as a -1 score
 on a label this will just be a matter of updating git-review to set
 the label on new patchsets.  I'm no sure if there's a bug entered in
 our the issue tracker for this but you are welcome to create one.

 [1] https://review-dev.openstack.org/Documentation/cmd-review.html
 [2]
 https://review-dev.openstack.org/Documentation/rest-api-changes.html#set-review


Why do you object to making it the default behavior on the Gerrit side?
Is there any issue with making this label carry over to new patch sets?

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Decorator behavior

2014-04-01 Thread Yuriy Taraday
Hello.


On Mon, Mar 31, 2014 at 9:32 PM, Dan Smith d...@danplanet.com wrote:

  
  (self, context, [], {'migration': migration, 'image': image,
  'instance': instance, 'reservations': reservations})
 
  while when running a test case, they see these arguments:
 
  (self, context, [instance, image, reservations, migration,
  instance_type], {})

 All RPC-called methods get called with all of their arguments as keyword
 arguments. I think this explains the runtime behavior you're seeing.
 Tests tend to differ in this regard because test writers are human and
 call the methods in the way they normally expect, passing positional
 arguments when appropriate.


It might be wise to add something like
https://pypi.python.org/pypi/kwonly to all methods that are used in RPC
and modify the tests appropriately to avoid such confusion in the future.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-24 Thread Yuriy Taraday
On Mon, Mar 24, 2014 at 9:51 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 Don't discard the first number so quickly.

 For example, say we use a timeout mechanism for the daemon running
 inside namespaces to avoid using too much memory with a daemon in
 every namespace.  That means we'll pay the startup cost repeatedly but
 in a way that amortizes it down.

 Even if it is really a one time cost, then if you collect enough
 samples then the outlier won't have much affect on the mean anyway.


It actually affects all numbers but the mean (e.g. the deviation gets blown
out of proportion).


 I'd say keep it in there.

 Carl

 On Mon, Mar 24, 2014 at 2:04 AM, Miguel Angel Ajo majop...@redhat.com
 wrote:
 
 
  It's the first call starting the daemon / loading config files, etc?,
 
  May be that first sample should be discarded from the mean for all
 processes
  (it's an outlier value).


I thought about excluding the max from the deviation calculation and/or showing
the second-largest value. But I don't think it matters much, and there aren't
many people here analyzing the deviation. It's pretty clear what happens with
the longest run in this case, and I think we can leave it as is. It's the mean
value that matters most here.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Yuriy Taraday
On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez thie...@openstack.org wrote:

 Yuriy Taraday wrote:
  Benchmark included showed on my machine these numbers (average over 100
  iterations):
 
  Running 'ip a':
ip a :   4.565ms
   sudo ip a :  13.744ms
 sudo rootwrap conf ip a : 102.571ms
  daemon.run('ip a') :   8.973ms
  Running 'ip netns exec bench_ns ip a':
sudo ip netns exec bench_ns ip a : 162.098ms
  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
   daemon.run('ip netns exec bench_ns ip a') : 129.876ms
 
  So it looks like running daemon is actually faster than running sudo.

 That's pretty good! However I fear that the extremely simplistic filter
 rule file you fed on the benchmark is affecting numbers. Could you post
 results from a realistic setup (like same command, but with all the
 filter files normally found on a devstack host ?)


I don't have a devstack host at hand, but I gathered all filters from Nova,
Cinder and Neutron and got this:
method  :min   avg   max   dev
   ip a :   3.741ms   4.443ms   7.356ms 500.660us
  sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
 daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

Then I switched back to one file and got:
method  :min   avg   max   dev
   ip a :   4.176ms   4.976ms  22.910ms   1.821ms
  sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
 daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

There is a difference, but it looks like it comes from config file parsing,
not from applying the filters themselves.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 18, 2014 at 7:38 PM, Yuriy Taraday yorik@gmail.com wrote:

 I'm aiming at ~100 new lines of code for daemon. Of course I'll use some
 batteries included with Python stdlib but they should be safe already.
 It should be rather easy to audit them.


Here's my take on this: https://review.openstack.org/81798

The included benchmark showed these numbers on my machine (average over 100
iterations):

Running 'ip a':
  ip a :   4.565ms
 sudo ip a :  13.744ms
   sudo rootwrap conf ip a : 102.571ms
daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
  sudo ip netns exec bench_ns ip a : 162.098ms
sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
 daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running the daemon is actually faster than running sudo.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Tue, Mar 11, 2014 at 12:58 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 https://etherpad.openstack.org/p/neutron-agent-exec-performance


I've added info on how we can speed up work with namespaces by entering them
ourselves using setns(), without the ip netns exec overhead.
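
The gist of it (a bare-bones sketch; the real code needs CAP_SYS_ADMIN and
better error handling) is to call setns() on the handle that ip netns add
leaves under /var/run/netns instead of spawning ip netns exec:

import ctypes
import os

CLONE_NEWNET = 0x40000000
_libc = ctypes.CDLL('libc.so.6', use_errno=True)


def enter_netns(name):
    # Move the calling process into the named network namespace.
    fd = os.open('/var/run/netns/%s' % name, os.O_RDONLY)
    try:
        if _libc.setns(fd, CLONE_NEWNET) != 0:
            errno = ctypes.get_errno()
            raise OSError(errno, os.strerror(errno))
    finally:
        os.close(fd)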

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-20 Thread Yuriy Taraday
On Thu, Mar 20, 2014 at 7:28 PM, Rick Jones rick.jon...@hp.com wrote:

 On 03/20/2014 05:41 AM, Yuriy Taraday wrote:

 Benchmark included showed on my machine these numbers (average over 100
  iterations):

 Running 'ip a':
ip a :   4.565ms
   sudo ip a :  13.744ms
 sudo rootwrap conf ip a : 102.571ms
  daemon.run('ip a') :   8.973ms
 Running 'ip netns exec bench_ns ip a':
sudo ip netns exec bench_ns ip a : 162.098ms
  sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
   daemon.run('ip netns exec bench_ns ip a') : 129.876ms

 So it looks like running daemon is actually faster than running sudo.


 Interesting result.  Which versions of sudo and ip and with how many
 interfaces on the system?


Here are the numbers:

% sudo -V
Sudo version 1.8.6p7
Sudoers policy plugin version 1.8.6p7
Sudoers file grammar version 42
Sudoers I/O plugin version 1.8.6p7
% ip -V
ip utility, iproute2-ss130221
% ip a | grep '^[^ ]' | wc -l
5


 For consistency's sake (however foolish it may be) and purposes of others
 being able to reproduce results and all that, stating the number of
 interfaces on the system and versions and such would be a Good Thing.


OK, I'll add them to the benchmark output.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

