[openstack-dev] [kuryr] Kuryr-Kubernetes gates broken

2018-11-06 Thread Michał Dulko
Hi,

Kuryr-Kubernetes LBaaSv2 gates are currently broken due to a bug [1] in
python-neutronclient. Until commit [2] is merged and a new version of
the client is released, I'm proposing to make those gates non-voting
[3].

[1] https://bugs.launchpad.net/python-neutronclient/+bug/1801360
[2] https://review.openstack.org/#/c/615184
[3] https://review.openstack.org/#/c/615861
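
For reference, switching a job to non-voting is just a matter of
flipping a flag on the project definition in the in-repo Zuul config.
A rough sketch of the kind of change proposed in [3] (the job name
below is made up for illustration):

    - project:
        check:
          jobs:
            - kuryr-kubernetes-tempest-lbaasv2:
                voting: false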




Re: [openstack-dev] [all] Naming the T release of OpenStack

2018-10-18 Thread Michał Dulko
On Thu, 2018-10-18 at 17:35 +1100, Tony Breeds wrote:
> Hello all,
> As per [1] the nomination period for names for the T release has
> now closed (actually 3 days ago, sorry). The nominated names and any
> qualifying remarks can be seen at [2].
> 
> Proposed Names
>  * Tarryall
>  * Teakettle
>  * Teller
>  * Telluride
>  * Thomas
>  * Thornton
>  * Tiger
>  * Tincup
>  * Timnath
>  * Timber
>  * Tiny Town
>  * Torreys
>  * Trail
>  * Trinidad
>  * Treasure
>  * Troublesome
>  * Trussville
>  * Turret
>  * Tyrone
> 
> Proposed Names that do not meet the criteria
>  * Train
> 
> However I'd like to suggest we skip the CIVS poll and select 'Train' as
> the release name by TC resolution [3]. My thinking for this is:
> 
>  * It's fun and celebrates a humorous moment in our community
>  * As a developer I've heard the T release called Train for quite
>    some time, and it was used often at the PTG [4].
>  * As the *next* PTG is also in Colorado we can still choose a
>    geographically based name for U [5]
>  * If Train causes a problem for trademark reasons then we can always
>    run the poll
> 
> I'll leave [3] marked -W for a week so discussion can happen before the
> TC considers / votes on it.

I'm totally supportive of OpenStack Train, but I've got to say that
OpenStack Troublesome would be a wonderful name as well. :)

> Yours Tony.
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2018-September/134995.html
> [2] https://wiki.openstack.org/wiki/Release_Naming/T_Proposals
> [3] https://review.openstack.org/#/q/I0d8d3f24af0ee8578712878a3d6617aad1e55e53
> [4] https://twitter.com/vkmc/status/1040321043959754752
> [5] https://en.wikipedia.org/wiki/List_of_places_in_Colorado:_T–Z




Re: [openstack-dev] Nominating Luis Tomás Bolívar for kuryr-kubernetes core

2018-09-21 Thread Michał Dulko
On Thu, 2018-09-20 at 18:33 +0200, Daniel Mellado wrote:
> Hi All,
> 
> I'd like to nominate Luis Tomás for Kuryr-Kubernetes core.
> 
> He has been contributing to the project development with both features
> and quality reviews at core reviewer level, as well as being the stable
> branch liaison, keeping an eye on every needed backport and bug and
> fighting and debugging LBaaS issues.
> 
> Please follow up with a +1/-1 to express your support, even if he makes
> the worst jokes ever!

Looks like Luis has been doing most of the review work recently [1], so
it's definitely a confident +1 from me.

[1] http://stackalytics.com/report/contribution/kuryr-group/90

> Thanks!
> 
> Daniel




[openstack-dev] [kuryr] [os-vif] kuryr-kubernetes gates unblocked

2018-03-14 Thread Michał Dulko
Hi,

kuryr-kubernetes gates were broken by a recent attempt to switch from
the neutron-legacy DevStack module to plain neutron [1]. That change
also modified DevStack jobs we were relying on and introduced another
failure for us.

The neutron-legacy change has now been reverted [2] and a fix for the
second issue [3] is getting merged. Once it's in, kuryr-kubernetes gates
in both the kuryr and os-vif repos should be working again.

I apologize that it took so long, but it was a multi-level issue and it
required a lot of debugging on our side.

Thanks,
Michal

[1] https://github.com/openstack-dev/devstack/commit/d9c1275c5df55e822a7df6880a9a1430ab4f24a0
[2] https://github.com/openstack-dev/devstack/commit/9f50f541385c929262a2e9c05093881960fe7d8f
[3] https://review.openstack.org/#/c/552701/



Re: [openstack-dev] [all] review.openstack.org downtime and Gerrit upgrade TODAY 15:00 UTC - 23:59 UTC

2017-09-19 Thread Michał Dulko
On Tue, 2017-09-19 at 09:19 +0200, Andreas Jaeger wrote:
> Two things currently:
> 
> * no post jobs are run, I suggest to not tag anything
> 
> * I don't get emails from gerrit anymore
> 

Same here. Since the outage I've stopped receiving Gerrit notifications
by email.

Thanks,
Michal



Re: [openstack-dev] [heat] Rolling Upgrades

2016-10-24 Thread Michał Dulko
On 10/21/2016 11:57 PM, Zane Bitter wrote:
> On 21/10/16 08:37, Michał Dulko wrote:
>>> > Finally, a note about Oslo versioned objects: they don't really help
>>> > us. They work great for nova where there is just nova-conductor
>>> > reading and writing to the DB, but we have multiple heat-engines
>>> doing
>>> > that that need to be restarted in a rolling manner. See the
>>> references
>>> > below for greater detail.
>> They do help in case you're changing RPC arguments *content*. In
>> particular they make it easier to modify schema of dict-like structures
>> sent over RPC.
>
> This is technically true, but there's a much simpler solution to that
> which we already have: just don't change the content in
> non-backward-compatible ways (i.e. you can add stuff but not
> change/rename/remove stuff).
>
> We have to do that anyway, because this is effectively our user
> interface, so if we didn't we'd break clients. For that reason, we're
> already much more strict about this than required to avoid this
> problem in the RPC layer.

Sure, it's about compatibility, so if nothing ever changes, then you're
fine.

>
> As Crag said, the problem we do have is when we add flags/arguments to
> a message, how can we ensure that older versions of the engine still
> interpret it correctly.

In Cinder we're assuming you cannot send a message containing some new
flags/arguments if the environment is running services in mixed
versions. It's better to fail fast, possibly with a message to the
user, than to do something they haven't asked for.
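
As a rough illustration of what I mean on the RPC client side (method,
argument and version numbers are made up, this isn't the actual Cinder
RPC API):

    def extend_volume(self, ctxt, volume, new_size, new_flag=False):
        # self.client is an oslo.messaging RPCClient; 2.3 is the
        # (made-up) RPC API version that introduced new_flag.
        if self.client.can_send_version('2.3'):
            version = '2.3'
            kwargs = {'volume': volume, 'new_size': new_size,
                      'new_flag': new_flag}
        elif new_flag:
            # Fail fast with a clear error instead of silently dropping
            # what the user asked for.
            raise Exception('new_flag requires all services to run '
                            'RPC API >= 2.3')
        else:
            version = '2.0'
            kwargs = {'volume': volume, 'new_size': new_size}
        cctxt = self.client.prepare(version=version)
        cctxt.cast(ctxt, 'extend_volume', **kwargs)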

Thanks,
Michal




Re: [openstack-dev] [heat] Rolling Upgrades

2016-10-21 Thread Michał Dulko
On 10/21/2016 02:02 AM, Crag Wolfe wrote:
> At Summit, folks will be discussing the rolling upgrade issue across a
> couple of sessions. I personally won't be able to attend, but thought
> I would share my thoughts on the subject.
>
> To handle rolling upgrades, there are two general cases to consider:
> database model changes and RPC method signature changes.
>
> For DB Model changes (this has already been well discussed on the
> mailing list, see the footnotes), let's assume for the moment we don't
> want to use triggers. If we are moving data from one column/table to
> another, the pattern looks like:
>
> legacy release: write to old location
> release+1: write to old and new location, read from old
> release+2: write to old and new location, read from new,
>provide migration utility
> release+3: write to new location, read from new
>
> Works great! The main issue is if the duplicated old and new data
> happens to be large. For a heat-specific example (one that is close to
> my heart), consider moving resource/event properties data into a
> separate table.
>
> We could speed up the process by adding config variables that specify
> where to read from, but that is putting a burden on the operator,
> creating a risk that data is lost if the config variables are not
> updated in the correct order after each full rolling restart, etc.
>
> Which brings us back to triggers. AFAIK, only sqlalchemy+mariadb is
> being used in production, so we really only have one backend we would
> have to write triggers for. If the data duplication is too unpalatable
> for a given migration (using the +1, +2, +3 pattern above), we may
> have to wade into the less simple world of triggers.

I just wanted to point out that Heat has a unit test [1] which blocks
contracting DB migrations.

> For RPC changes, we don't have a great solution right now (looking
> specifically at heat/engine/service.py). If we add a field, an older
> running heat-engine will break if it receives a request from a newer
> running heat-engine. For a relevant example, consider adding the
> "root_id" as an argument (
> https://review.openstack.org/#/c/354621/13/heat/engine/service.py ).
>
> Looking for the simplest solution -- if we introduce a mandatory
> "future_args" arg (a dict) now to all rpc methods (perhaps provide a
> decorator to do so), then we could follow this pattern post-Ocata:
>
> legacy release: accepts the future_args param (but does nothing with it).
> release+1: accept the new parameter with a default of None,
>pass the value of the new parameter in future_args.
> release+2: accept the new parameter, pass the value of the new parameter
>in its proper placeholder, no longer in future_args.
>
> But, we don't have a way of deleting args. That's not super
> awful... old args never die, they just eventually get ignored. As for
> adding new api's, the pattern would be to add them in release+1, but
> not call them until release+2. [If we really have a case where we need
> to add and use a new api in release+1, the solution may be to have two
> rpc api messaging targets in release+1, one for the previous
> major.minor release and another for the major+1.0 release that has the
> new api. Then, we of course we could remove outdated args in
> major+1.0.]

Another solution is adopting Nova's and Cinder's approach. You need some
kind of RPC version reporting and detection framework. In Cinder each
service reports its RPC version into the `services` table [2], the
supported RPC API version is detected based on that data, and requests
are then backported to the required version at the RPC client level
(e.g. [3]).
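
The detection part is conceptually simple. A sketch of the idea (not
Cinder's actual code; the Service model and its rpc_current_version
column are the ones from [2]):

    from oslo_utils import versionutils

    def determine_rpc_version_cap(session):
        # Each running service reports the RPC API version it understands
        # into the services table; the client pins itself to the oldest
        # one, so every service can handle whatever gets sent.
        versions = [svc.rpc_current_version
                    for svc in session.query(Service).all()
                    if svc.rpc_current_version is not None]
        if not versions:
            return None
        return min(versions, key=versionutils.convert_version_to_tuple)

The resulting cap is then passed as version_cap when building the
oslo.messaging RPCClient, which is roughly where the backporting in [3]
hooks in.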

> Finally, a note about Oslo versioned objects: they don't really help
> us. They work great for nova where there is just nova-conductor
> reading and writing to the DB, but we have multiple heat-engines doing
> that that need to be restarted in a rolling manner. See the references
> below for greater detail.

They do help when you're changing the *content* of RPC arguments. In
particular, they make it easier to modify the schema of dict-like
structures sent over RPC.

> --Crag
>
> References
> --
>
> [openstack-dev] [Heat] Versioned objects upgrade patterns
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/thread.html#95245
>
> [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades:
> database triggers and oslo.versionedobjects
> http://lists.openstack.org/pipermail/openstack-dev/2016-September/102698.html
> http://lists.openstack.org/pipermail/openstack-dev/2016-October/105764.html

[1] https://github.com/openstack/heat/blob/master/heat/tests/db/test_migrations.py#L114-L137
[2] https://github.com/openstack/cinder/blob/325f99a64aeb3e7a768904781d854c19bb540580/cinder/db/sqlalchemy/models.py#L86-L89
[3] https://github.com/openstack/cinder/blob/8a4aecb155478e9493f4d36b080ccdf6be406eba/cinder/rpc.py#L208-L224

Re: [openstack-dev] [nova] Getting DetachedInstanceError from sqlalchemy on instance.get_by_uuid()

2016-10-10 Thread Michał Dulko
On 10/07/2016 11:04 PM, Beliveau, Ludovic wrote:
>
> Hi all,
>
>  
>
> In Kilo (yeah I know it’s an old release, but still :)), I was getting
> nova errors for DetachedInstanceError on instance.get_by_uuid().
>
> Has anybody seen this issue before or something similar ?
>
>  
>
> Thanks for the help,
>
> /ludovic
>

Take a look at post [1]. Looks like this was improved in Oslo in Kilo
and adopted in Nova in Mitaka [2].

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-September/104857.html
[2] https://blueprints.launchpad.net/nova/+spec/new-oslodb-enginefacade
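
In short, with the new enginefacade the session stays around for the
whole decorated DB API call, so lazy loads don't hit
DetachedInstanceError. A rough sketch of the pattern (not Nova's actual
code; the Instance model is assumed, and the DB connection comes from
the usual [database] config section):

    from oslo_db.sqlalchemy import enginefacade

    @enginefacade.transaction_context_provider
    class RequestContext(object):
        """Context object that enginefacade attaches its session to."""

    @enginefacade.reader
    def instance_get_by_uuid(context, uuid):
        # context.session is valid for the duration of this call, so any
        # lazy loading of relationships happens while the object is
        # still attached to a session.
        return (context.session.query(Instance)
                .filter_by(uuid=uuid)
                .first())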




Re: [openstack-dev] [neutron] clean up your git checkout!

2016-09-30 Thread Michał Dulko
On 09/30/2016 04:06 PM, Ihar Hrachyshka wrote:
> Ihar Hrachyshka  wrote:
>
>> Hi all,
>>
>> today we landed https://review.openstack.org/#/c/269658/ (huge!) that
>> removed neutron/objects/network/ directory and replaced it with
>> neutron/objects/network.py file. Though it makes python that sees old
>> .pyc files sad:
>>
>> Failed to import test module: neutron.tests.unit.objects.test_network
>> Traceback (most recent call last):
>>   File
>> "/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
>> line 456, in _find_test_path
>> module = self._get_module_from_name(name)
>>   File
>> "/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
>> line 395, in _get_module_from_name
>> __import__(name)
>>   File "neutron/tests/unit/objects/test_network.py", line 23, in
>> <module>
>> obj_test_base.BaseObjectIfaceTestCase):
>>   File "neutron/tests/unit/objects/test_network.py", line 24, in
>> NetworkPortSecurityIfaceObjTestCase
>> _test_class = network.NetworkPortSecurity
>> AttributeError: 'module' object has no attribute 'NetworkPortSecurity'
>> The test run didn't actually run any tests
>>
>> Please run git clean -f -x in your checkout to remove all .pyc files.
>> This should solve any import issues you may experience due to the new
>> patch.
>
> I hear that -f -x is not enough. Please add -d too:
>
> $ git clean -f -x -d
>
> Ihar
>

Isn't ``find . -name \*.pyc -delete`` enough? That way you won't remove
anything else. In Cinder we have that in tox.ini [1].

[1]
https://github.com/openstack/cinder/blob/792108f771607b75a25e9c4cfaaa26e5039d1748/tox.ini#L21-L21
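
For reference, the relevant bit of our tox.ini looks roughly like this
(quoting from memory, [1] has the exact lines; the test runner line is
just whatever the project normally uses):

    [testenv]
    whitelist_externals = find
    commands =
      find . -type f -name "*.pyc" -delete
      ostestr {posargs}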



Re: [openstack-dev] [cinder] [qa] Proposal to make multinode grenade job voting

2016-09-29 Thread Michał Dulko
On 09/29/2016 12:10 PM, Michał Dulko wrote:
> Hello everyone,
>
> We have a non-voting multinode grenade job in check queue for around a
> month now.
>
> https://goo.gl/Kr10s6

Whoops, I've sent this by mistake. Here's the actual email:

Hello everyone,

We have had a non-voting multinode grenade job in the check pipeline for
around a month now. It is testing rolling upgrades by checking the
compatibility of stable c-vol and c-bak with master c-api and c-sch.
This is a requirement for Cinder to achieve the
assert:supports-rolling-upgrade tag [1]. As the job seems fairly stable
in comparison with the tempest job [2] and in ci-watch [3], I'm
proposing to make it voting.

Please note that failures seen in the beginning of September on graph
[2] are results of bug [4], which was actually found by the job
(although the bug itself wasn't related to upgrades).

The patch making the job voting is available at [5].

Thanks,
Michal

[1]
https://governance.openstack.org/reference/tags/assert_supports-rolling-upgrade.html
[2] https://goo.gl/Kr10s6
[3] http://ci-watch.tintri.com/project?project=cinder=7+days
[4] https://bugs.launchpad.net/cinder/+bug/1619246
[5] https://review.openstack.org/#/c/379346/



[openstack-dev] [cinder] [qa] Proposal to make multinode grenade job voting

2016-09-29 Thread Michał Dulko
Hello everyone,

We have a non-voting multinode grenade job in check queue for around a
month now.

https://goo.gl/Kr10s6



[openstack-dev] [cinder][db] lazy loading of an attribute impossible

2016-09-22 Thread Michał Dulko
Hi,

I've just noticed another Cinder bug [1], similar to past bugs [2], [3].
All of them have a common exception causing them:

sqlalchemy.orm.exc.DetachedInstanceError: Parent instance
<{$SQLAlchemyObject} at {$MemoryLocation}> is not bound to a Session;
lazy load operation of attribute '{$ColumnName}' cannot proceed

We've normally fixed them by simply making the $ColumnName eager-loaded,
but as there's another similar bug report, I'm starting to think that we
have some issue with how we're managing our DB connections and that
SQLAlchemy objects are losing their sessions too quickly, before we
manage to lazy-load the required attributes.
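
For context, the usual fix looks more or less like this (a sketch with
illustrative model/relationship names, not a specific Cinder patch):

    from sqlalchemy.orm import joinedload

    def volume_get(session, volume_id):
        # Eager-load the relationship up front, while the session is
        # still alive, instead of relying on a later lazy load.
        return (session.query(Volume)
                .options(joinedload('volume_attachment'))
                .filter_by(id=volume_id)
                .first())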

I'm not too experienced with SQLAlchemy session management, so I would
welcome any help with investigation.

Thanks,
Michal


[1] https://bugs.launchpad.net/cinder/+bug/1626499
[2] https://bugs.launchpad.net/cinder/+bug/1517763
[3] https://bugs.launchpad.net/cinder/+bug/1501838



Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.serivces.list()

2016-09-21 Thread Michał Dulko


On 09/21/2016 03:32 PM, Konstanski, Carlos P wrote:
> Am Mittwoch, den 21.09.2016, 15:07 +0200 schrieb Michał Dulko:
>> On 09/21/2016 02:32 AM, Konstanski, Carlos P wrote:
>>> Am Dienstag, den 20.09.2016, 15:31 -0600 schrieb Konstanski, Carlos P:
>>>> I am currently using python-cinderclient version 1.5.0, though the code in
>>>> question is still in master.
>>>>
>>>> When calling client.services.list() I get this result: "AttributeError:
>>>> service"
>>>>
>>>> The execution path of client.services.list() eventually leads to this
>>>> method
>>>> in
>>>> cinderclient/v2/services.py:24:
>>>>
>>>> def __repr__(self):
>>>>     return "<Service: %s>" % self.service
>>>>
>>>> which in turn triggers a call to Resource.__getattr__() in
>>>> cinderclient/openstack/common/apiclient/base.py:456.
>>>>
>>>> This custom getter will never find an attribute called service because a
>>>> Service
>>>> instance looks something like the following:
>>>>
>>>> {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
>>>> u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
>>>> u'up', u'disabled_reason': None}
>>>>
>>>> So it returns the string "AttributeError: service".
>>>>
>>>> One way or another a fix is warranted, and I am ready, willing and able to
>>>> provide the fix. But first I want to find out more about the bigger
>>>> picture.
>>>> could  it be that this __repr__() method actually correct, but the code
>>>> that
>>>> populates my service instance is faulty? This could easily be the case if
>>>> the
>>>> dict that feeds the Service class were to look like the following (for
>>>> example):
>>>>
>>>> {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler',
>>>> u'zone':
>>>> u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
>>>> u'state': u'up', u'disabled_reason': None}}
>>>>
>>>> Somehow I doubt it; why hide all the useful attributes in a dict under a
>>>> single
>>>> parent attribute? But I'm new to cinder and I don't know the rules. I'm
>>>> not
>>>> here
>>>> to question your methods.
>>>>
>>>> Or am I just using it wrong? This code has survived for a long time, and
>>>> certainly someone would have noticed a problem by now. But it seems pretty
>>>> straightforward. How many ways are there to prepare a call to
>>>> client.services.list()? I get a Client instance, call authenticate() for
>>>> fun,
>>>> and then call client.services.list(). Not a lot going on here.
>>>>
>>>> I'll get to work on a patch when I figure out what it is supposed to do,
>>>> if it
>>>> is not already doing it.
>>>>
>>>> Sincerely,
>>>> Carlos Konstanski
>>> I guess the question I should be asking is this: Manager._list() (in
>>> cinderclient/base.py) returns a list of printable representations of
>>> objects,
>>> not a list of the objects themselves. Hopefully there's a more useful method
>>> that returns a list of actual objects, or at least a JSON representation. If
>>> I
>>> can't find such a method then I'll be back, or I'll put up a review to add
>>> one.
>>>
>>> Carlos
>> Is bug being addressed in review [1] somehow related? If so, there's
>> some discussion on solutions going.
>>
>> [1] https://review.openstack.org/#/c/308475
> This neophyte needs a bit of education. What is review [1] ?

I meant the Gerrit review page linked above under [1]:
https://review.openstack.org/#/c/308475

> In the meantime I have a potential fix. I'll see if some of my coworkers who
> have put up patches in the past can help me figure out how it's done the
> Openstack Way.




Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.serivces.list()

2016-09-21 Thread Michał Dulko
On 09/21/2016 02:32 AM, Konstanski, Carlos P wrote:
> Am Dienstag, den 20.09.2016, 15:31 -0600 schrieb Konstanski, Carlos P:
>> I am currently using python-cinderclient version 1.5.0, though the code in
>> question is still in master.
>>
>> When calling client.services.list() I get this result: "AttributeError:
>> service"
>>
>> The execution path of client.services.list() eventually leads to this method
>> in
>> cinderclient/v2/services.py:24:
>>
>> def __repr__(self):
>>     return "<Service: %s>" % self.service
>>
>> which in turn triggers a call to Resource.__getattr__() in
>> cinderclient/openstack/common/apiclient/base.py:456.
>>
>> This custom getter will never find an attribute called service because a
>> Service
>> instance looks something like the following:
>>
>> {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
>> u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
>> u'up', u'disabled_reason': None}
>>
>> So it returns the string "AttributeError: service".
>>
>> One way or another a fix is warranted, and I am ready, willing and able to
>> provide the fix. But first I want to find out more about the bigger picture.
>> could  it be that this __repr__() method actually correct, but the code that
>> populates my service instance is faulty? This could easily be the case if the
>> dict that feeds the Service class were to look like the following (for
>> example):
>>
>> {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone':
>> u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
>> u'state': u'up', u'disabled_reason': None}}
>>
>> Somehow I doubt it; why hide all the useful attributes in a dict under a
>> single
>> parent attribute? But I'm new to cinder and I don't know the rules. I'm not
>> here
>> to question your methods.
>>
>> Or am I just using it wrong? This code has survived for a long time, and
>> certainly someone would have noticed a problem by now. But it seems pretty
>> straightforward. How many ways are there to prepare a call to
>> client.services.list()? I get a Client instance, call authenticate() for fun,
>> and then call client.services.list(). Not a lot going on here.
>>
>> I'll get to work on a patch when I figure out what it is supposed to do, if 
>> it
>> is not already doing it.
>>
>> Sincerely,
>> Carlos Konstanski
> I guess the question I should be asking is this: Manager._list() (in
> cinderclient/base.py) returns a list of printable representations of objects,
> not a list of the objects themselves. Hopefully there's a more useful method
> that returns a list of actual objects, or at least a JSON representation. If I
> can't find such a method then I'll be back, or I'll put up a review to add 
> one.
>
> Carlos

Is the bug being addressed in review [1] somehow related? If so, there's
some discussion about solutions going on there.

[1] https://review.openstack.org/#/c/308475
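
In the meantime, one possible direction for a fix (just a sketch,
nothing agreed on yet) would be to build the representation from
attributes that are actually present instead of the non-existent
'service' one:

    def __repr__(self):
        # 'binary' and 'host' are returned by the API, unlike the
        # 'service' attribute the current code expects.
        return "<Service: binary=%s host=%s>" % (
            getattr(self, 'binary', None), getattr(self, 'host', None))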



Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-21 Thread Michał Dulko
On 09/20/2016 05:48 PM, John Griffith wrote:
> On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas wrote:
>
> On 20 September 2016 at 16:24, Nikita Konovalov wrote:
>
> Hi,
>
> From Sahara (and Hadoop workload in general) use-case the
> reason we used BDD was a complete absence of any overhead on
> compute resources utilization. 
>
> The results show that the LVM+Local target perform pretty
> close to BDD in synthetic tests. It's a good sign for LVM. It
> actually shows that most of the storage virtualization
> overhead is not caused by LVM partitions and drivers
> themselves but rather by the iSCSI daemons.
>
> So I would still like to have the ability to attach partitions
> locally bypassing the iSCSI to guarantee 2 things:
> * Make sure that lio processes do not compete for CPU and RAM
> with VMs running on the same host.
> * Make sure that CPU intensive VMs (or whatever else is
> running nearby) are not blocking the storage.
>
>
> So these are, unless we see the effects via benchmarks, completely
> meaningless requirements. Ivan's initial benchmarks suggest
> that LVM+LIO is pretty much close enough to BDD even with iSCSI
> involved. If you're aware of a case where it isn't, the first
> thing to do is to provide proof via a reproducible benchmark.
> Otherwise we are likely to proceed, as John suggests, with the
> assumption that local target does not provide much benefit. 
>
> I've a few benchmarks myself that I suspect will find areas where
> getting rid of iSCSI is benefit, however if you have any then you
> really need to step up and provide the evidence. Relying on vague
> claims of overhead is now proven to not be a good idea. 
>
>
> ​Honestly we can have both, I'll work up a bp to resurrect the idea of
> a "smart" scheduling feature that lets you request the volume be on
> the same node as the compute node and use it directly, and then if
> it's NOT it will attach a target and use it that way (in other words
> you run a stripped down c-vol service on each compute node).

Don't we have at least the scheduling problem solved [1] already?

[1]
https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py
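
For completeness, using it is roughly a matter of enabling the filter
and passing a scheduler hint when creating the volume (a sketch; the
filter list below is just an example):

    # cinder.conf
    [DEFAULT]
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

    # and then:
    # cinder create --hint local_to_instance=<instance-uuid> 10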

>
> Sahara keeps insisting on being a snow-flake with Cinder volumes and
> the block driver, it's really not necessary.  I think we can
> compromise just a little both ways, give you standard Cinder semantics
> for volumes, but allow you direct acccess to them if/when requested,
> but have those be flexible enough that targets *can* be attached so
> they meet all of the required functionality and API implementations. 
> This also means that we don't have to continue having a *special*
> driver in Cinder that frankly only works for one specific use case and
> deployment.
>
> I've pointed to this a number of times but it never seems to
> resonate... but I never learn so I'll try it once again [1].  Note
> that was before the name "brick" was hijacked and now means something
> completely different.
>
> [1]: https://wiki.openstack.org/wiki/CinderBrick
>
> Thanks,
> John​




Re: [openstack-dev] [Cinder] FFE request for RBD replication

2016-09-12 Thread Michał Dulko
+1, thanks for taking care of that!

On 09/12/2016 03:35 AM, Huang Zhiteng wrote:
> +1 for this long-waited feature to land in Newton.
>
> On Sun, Sep 11, 2016 at 1:09 AM, Jay S. Bryant wrote:
>
> +1 from me.  It is making good progress and is low risk.
>
> -Jay
>
>
>
> On 09/09/2016 02:32 PM, Gorka Eguileor wrote:
>
> Hi,
>
> As some of you may know, Jon Bernard (jbernard on IRC) has
> been working
> on the RBD v2.1 replication implementation [1] for a while,
> and we would
> like to request a Feature Freeze Exception for that work, as
> we believe
> it is a good candidate being a low risk change for the
> integrity of
> the existing functionality in the driver:
>
> - It's non intrusive if it's not enabled (enabled using
>replication_device configuration option).
> - It doesn't affect existing deployments (disabled by default).
> - Changes are localized to the driver itself (rbd.py) and the
> driver
>unit tests file (test_rbd.py).
>
> Jon would have liked to make this request himself, but due to the
> untimely arrival of his newborn baby this is not possible.
>
> For obvious reasons Jon will not be available for a little
> while, but
> this will not be a problem, as I am well acquainted with the
> code -and
> I'll be able to reach Jon if necessary- and will be taking
> care of the
> final steps of the review process of his patch: replying to
> comments in
> a timely fashion, making changes to the code as required, and
> answering
> pings on IRC regarding the patch.
>
> Since some people may be interested in testing this
> functionality during
> the reviewing process -or just for fun- I'll be publishing a
> post with
> detailed explanation on how to deploy and test this feature as
> well as
> an automated way to deploy 2 Ceph clusters -linked to be
> mirroring one
> another-, and one devstack node with everything ready to test the
> functionality (configuration and keys for the Ceph clusters,
> cinder
> configuration, the latest upstream patch, and a volume type
> with the
> right configuration).
>
> Please, do not hesitate to ask if there are any questions to
> or concerns
> related to this request.
>
> Thank you for taking the time to evaluate this request.
>
> Cheers,
> Gorka.
>
> [1]: https://review.openstack.org/333565
> 
>
> 
>
>
>
>
> -- 
> Regards
> Huang Zhiteng
>
>




Re: [openstack-dev] [cinder] moving driver to open source

2016-09-07 Thread Michał Dulko


On 09/06/2016 05:27 PM, Alon Marx wrote:
> I want to share our plans to open the IBM Storage driver source code.
> Historically we started our way in cinder way back (in Essex if I'm
> not mistaken) with just a small piece of code in the community while
> keeping most of the driver code closed. Since then the code has grown,
> but we kept with the same format. We would like now to open the driver
> source code, while keeping the connectivity to the storage as closed
> source.
> I believe that there are other cinder drivers that have some stuff in
> proprietary libraries. I want to propose and formalize the principles
> to where we draw the line (this has also been discussed in
> https://review.openstack.org/#/c/341780/) on what's acceptable by the
> community.
> Based on previous discussion I understand that the rule of thumb is
> "as long as the majority of the driver logic is in the public driver"
> the community would be fine with that. Is this acceptable to the
> community?

To me it seems impossible to openly measure "majority of the driver
logic" when any logic is closed source, as you simply don't know how
much logic is hidden. The normal practice in other Cinder drivers is to
communicate with the storage through its REST API, and in that case the
community doesn't care about the logic hidden behind that API. But I
guess this won't work for your requirements, as you want to "keep the
connectivity to the storage as closed source". Are my assumptions right?

>
> Regards,
> Alon




Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-31 Thread Michał Dulko
On 08/25/2016 07:52 PM, Andrew Laski wrote:
>  On Thu, Aug 25, 2016, at 12:22 PM, Everett Toews wrote:
>> Top posting with general comment...
>>
>> It sounds like there's some consensus in Nova-land around these
>> traits (née "capabilities"). The API Working Group [4] is
>> also aware of similar efforts in Cinder [1][2] and Glance [3].
>
> To be clear, we're looking at exposing both traits and capabilities in
> Nova. This puts us in a weird spot where I think our concept of traits
> aligns with cinders capabilities, but I don't see any match for the
> Nova concept of capabilities. So I'm still open to naming suggestions
> but I think capabilities most accurately describes what it is. Dean
> has it right, I think, that what we really have are 'api capabilities'
> and 'host capabilities'. But people will end up just using
> 'capabilities' and cause confusion.

I think I need to clarify this a bit. In Cinder we already have a
resource called "capabilities". It returns possible hardware features of
a particular volume backend, like compression or QoS support. This is
returned in a form similar to Glance's Metadata Catalog API (aka
Graffiti), so it should be easily consumable by Horizon to produce a
structured UI letting an admin define meaningful volume type metadata
that will enable particular backend options. As it's based on internal
host and backend names, it's rather an admin-facing API. This is what in
Nova's current definition would be called "traits", right?

Now what we're also looking to expose is possible actions per
deployment, volume type, or maybe even a particular volume: an API that
makes answers to questions like "can I create a volume backup in this
cloud?" or "can volumes of this type be included in consistency groups?"
easily discoverable. These are more like Nova's "capabilities".

>> If these are truly the same concepts being discussed across projects,
>> it would be great to see consistency in the APIs and have the
>> projects come together under a new guideline. I encourage the
>> projects and people to propose such a guideline and for someone to
>> step up and champion it. Seems like good fodder for a design session
>> proposal at the upcoming summit.
>
> Here's what all of these different things look like to me:
>
> Cinder is looking to expose hardware capabilities. This pretty closely
> aligns with what traits are intending to do in Nova. This answers the
> question of "can I create a resource that needs/does X in this
> deployment?" However in Nova we ultimately want users to be able to
> specify which traits they want for their instance. That may be
> embedded in a flavor or arbitrarily specified in the request but a
> trait is not implicitly available to all resources like it seems it is
> in Cinder. We assume there could be a heterogeneous environment so
> without requesting a trait there's no guarantee of getting it.

Requesting "traits" in Cinder is still based on an admin-defined volume
types and there are no plans to change that yet, so I think that's one
of the main differences - in Nova's case "traits" API must be user-facing.

> Nova capabilities are intended to answer the question of "as user Y
> with resource X what can I do with it?" This is dependent on user
> authorization, hardware "traits" where the resource lives, and service
> version. I didn't see an analog to this in any of the proposals below.
> And one major difference between this and the other proposals is that,
> if possible, we would like the response to map to the API action that
> will perform that capability. So if a user can perform a resize on
> their instance the response might include 'POST
> .../servers//action -d resize' or whatever form we come up with.

Yup, that's basically what [1] wants to implement in Cinder. I think we
should hold this patch until we either come up with a consistent
cross-project solution or agree that each project should go its own way
on this topic.

> The Glance concept of value discovery maps closely to what Nova
> capabilities are in intent in that it answers the question of "what
> can I do in this API request that will be valid?" But the scope is
> completely different in that it doesn't answer the question of which
> API requests can be made, just what values can be used in this
> specific call.
>
>
> Given the above I find that I don't have the imagination required to
> consolidate those into a consistent API concept that can be shared
> across projects. Cinder capabilities and Nova traits could potentially
> work, but the rest seem too different to me. And if we change
> traits->capabilities then we should find another name for what is
> currently Nova capabilities.
>
> -Andrew

I see similarities between Nova's and Cinder's problem space and I
believe we can come up with a consistent API here. This sounds like a
topic suitable for a cross-project discussion at the Design Summit.

[1] https://review.openstack.org/#/c/350310


Re: [openstack-dev] Announcing Gertty 1.4.0

2016-07-27 Thread Michał Dulko
On 07/27/2016 12:39 AM, James E. Blair wrote:
> Announcing Gertty 1.4.0
> ===
>
> Gertty is a console-based interface to the Gerrit Code Review system.
>
> Gertty is designed to support a workflow similar to reading network
> news or mail.  It syncs information from Gerrit to local storage to
> support disconnected operation and easy manipulation of local git
> repos.  It is fast and efficient at dealing with large numbers of
> changes and projects.
>
> The full README may be found here:
>
>   https://git.openstack.org/cgit/openstack/gertty/tree/README.rst
>
> Changes since 1.3.0:
> 
>
> 

Just wondering: have there been any attempts to implement syntax
highlighting in the diff view? I think that's the only thing that keeps
me from switching to Gertty.

Thanks,
Michal



Re: [openstack-dev] [Cinder] Nominating Scott D'Angelo to Cinder core

2016-06-28 Thread Michał Dulko
+2

I was wondering when this would happen. Congratulations, Scott! :)

On 06/27/2016 07:27 PM, Sean McGinnis wrote:
> I would like to nominate Scott D'Angelo to core. Scott has been very
> involved in the project for a long time now and is always ready to help
> folks out on IRC. His contributions [1] have been very valuable and he
> is a thorough reviewer [2].
>
> Please let me know if there are any objections to this within the next
> week. If there are none I will switch Scott over by next week, unless
> all cores approve prior to then.
>
> Thanks!
>
> Sean McGinnis (smcginnis)
>
> [1] 
> https://review.openstack.org/#/q/owner:%22Scott+DAngelo+%253Cscott.dangelo%2540hpe.com%253E%22+status:merged
> [2] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>




Re: [openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

2016-05-24 Thread Michał Dulko


On 05/24/2016 04:38 PM, Gorka Eguileor wrote:
> On 23/05, Ivan Kolodyazhny wrote:
>> Hi developers and operators,
>> I would like to get any feedback from you about my idea before I'll start
>> work on spec.
>>
>> In Nova, we've got max_concurrent_builds option [1] to set 'Maximum number
>> of instance builds to run concurrently' per each compute. There is no
>> equivalent in Cinder.
> Hi,
>
> First I want to say that I think this is a good idea because I know this
> message will get diluted once I start with my mumbling.  ;-)
>
> The first thing we should allow to control is the number of workers per
> service, since we currently only allow setting it for the API nodes and
> all other nodes will use a default of 1000.  I posted a patch [1] to
> allow this and it's been sitting there for the last 3 months.  :'-(
>
> As I see it not all mentioned problems are equal, and the main
> distinction is caused by Cinder being not only in the control path but
> also in the data path. Resulting in some of the issues being backend
> specific limitations, that I believe should be address differently in
> the specs.
>
> For operations where Cinder is in the control path we should be
> limiting/queuing operations in the cinder core code (for example the
> manager) whereas when the limitation only applies to some drivers this
> should be addressed by the drivers themselves.  Although the spec should
> provide a clear mechanism/pattern to solve it in the drivers as well so
> all drivers can use a similar pattern which will provide consistency,
> making it easier to review and maintain.
>
> The queuing should preserve the order of arrival of operations, which
> file locks from Oslo concurrency and Tooz don't do.

I would be seriously opposed to queuing done inside Cinder code. It
makes draining a service harder and increases the impact of a failure of
a single service. We already have a queuing system: whatever you're
running under oslo.messaging (RabbitMQ mostly). Making the number of RPC
workers configurable for each service sounds like the best shot to me.
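
To be specific, oslo.messaging already gives us a knob for that. A
sketch of what I mean in cinder.conf (the value is just an example):

    [DEFAULT]
    # Size of the executor thread pool used by the RPC server, i.e. how
    # many requests a single service processes concurrently
    # (oslo.messaging option, previously called rpc_thread_pool_size).
    executor_thread_pool_size = 64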

>> Why do we need it for Cinder? IMO, it could help us to address following
>> issues:
>>
>>- Creation of N volumes at the same time increases a lot of resource
>>usage by cinder-volume service. Image caching feature [2] could help us a
>>bit in case when we create volume form image. But we still have to upload 
>> N
>>images to the volumes backend at the same time.
> This is an example where we are in the data path.
>
>>- Deletion on N volumes at parallel. Usually, it's not very hard task
>>for Cinder, but if you have to delete 100+ volumes at once, you can fit
>>different issues with DB connections, CPU and memory usages. In case of
>>LVM, it also could use 'dd' command to cleanup volumes.
> This is a case where it is a backend limitation and should be handled by
> the drivers.
>
> I know some people say that deletion and attaching have problems when a
> lot of them are requested to the c-vol nodes and that cinder cannot
> handle the workload properly, but in my experience these cases are
> always due to suboptimal cinder configuration, like a low number of DB
> connections configured in cinder that make operations fight for a DB
> connection creating big delays to complete operations.
>
>>- It will be some kind of load balancing in HA mode: if cinder-volume
>>process is busy with current operations, it will not catch message from
>>RabbitMQ and other cinder-volume service will do it.
> I don't understand what you mean with this.  Do you mean that Cinder
> service will stop listening to the message queue when it reaches a
> certain workload on the "heavy" operations?  Then wouldn't it also stop
> processing "light" operations?
>
>>- From users perspective, it seems that better way is to create/delete N
>>volumes a bit slower than fail after X volumes were created/deleted.
> I agree, it's better not to fail.  :-)
>
> Cheers,
> Gorka.
>
>>
>> [1]
>> https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
>> [2]
>> https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/



Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-19 Thread Michał Dulko
On 05/18/2016 10:39 PM, Zane Bitter wrote:
> On 17/05/16 20:27, Crag Wolfe wrote:
>> Now getting very Heat-specific. W.r.t. to
>> https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
>> raw_template.files (this is a dict of template filename to contents),
>> both in the DB and in RAM. The approach this patch is taking is that,
>> when one template is created by reference to another, we just re-use the
>> original template's files (ultimately in a new table,
>> raw_template_files). In the case of nested stacks, this saves on quite a
>> bit of duplication.
>>
>> If we follow the 3-step pattern discussed earlier in this thread, we
>> would be looking at P release as to when we start seeing DB storage
>> improvements. As far as RAM is concerned, we would see improvement in
>> the O release since that is when we would start reading from the new
>> column location (and could cache the template files object by its ID).
>> It also means that for the N release, we wouldn't see any RAM or DB
>> improvements, we'll just start writing template files to the new
>> location (in addition to the old location). Is this acceptable, or do
>> impose some sort of downtime restrictions on the next Heat upgrade?
>>
>> A compromise could be to introduce a little bit of downtime:
>>
>> For the N release:
>
> There's also a step 0, which is to run the DB migrations for Newton.
>
>>   1. Add the new column (no need to shut down heat-engine).
>>   2. Shut down all heat-engine's.
>>   3. Upgrade code base to N throughout cluster.
>>   4. Start all heat engine's. Read from new and old template files
>> locations, but only write to the new one.
>>
>> For the O release, we could perform a rolling upgrade with no downtime
>> where we are only reading and writing to the new location, and then drop
>> the old column as a post-upgrade migration (i.e, the typical N+2 pattern
>> [1] that Michal referenced earlier and I'm re-referencing :-).
>>
>> The advantage to the compromise is we would immediately start seeing RAM
>> and DB improvements with the N-release.
>
> +1, and in fact this has been the traditional way of doing it. To be
> able to stop recommending that to operators, we need a solution both
> to the DB problem we're discussing here and to the problem of changes
> to the RPC API parameters. (Before anyone asks, and I know someone
> will... NO, versioned objects do *not* solve either of those problems.)
>
> I've already personally made one backwards-incompatible change to the
> RPC in this version:
>
> https://review.openstack.org/#/c/315275/

If you want to support rolling upgrades, you need a way to prevent the
introduction of such incompatibilities. This particular one seems pretty
easy once you get an RPC version pinning framework (either automatic or
config-based) in place. Nova and Cinder already have such features.

It would work by simply not sending template_id when there are older
services in the deployment, and making your RPC server able to
understand requests without template_id as well.
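
A sketch of what I mean for the template_id case (version numbers and
names are made up, this isn't actual Heat code):

    # RPC client side: only send the new argument when every engine in
    # the deployment understands it. The full template is always sent,
    # so nothing is lost when we skip the optimization.
    def create_stack(self, ctxt, stack_name, template, template_id=None):
        kwargs = {'stack_name': stack_name, 'template': template}
        version = '1.29'
        if template_id is not None and self.client.can_send_version('1.30'):
            version = '1.30'
            kwargs['template_id'] = template_id
        cctxt = self.client.prepare(version=version)
        return cctxt.call(ctxt, 'create_stack', **kwargs)

On the RPC server side create_stack() simply keeps template_id=None as a
default and falls back to the old behaviour when it isn't passed.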

> So we won't be able to recommend rolling updates from Mitaka->Newton
> anyway.
>
> I suggest that as far as this patch is concerned, we should implement
> the versioning that allows the VO to decide whether to write old or
> new data and leave it at that. That way, if someone manages to
> implement rolling upgrade support in Newton we'll have it, and if we
> don't we'll just fall back to the way we've done it in the past.
>
> cheers,
> Zane.
>
>> [1]
>> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
>>
>>




Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-18 Thread Michał Dulko
On 05/17/2016 09:40 PM, Crag Wolfe wrote:
 


> That helps a lot, thanks! You are right, it would have to be a 3-step
> upgrade to avoid the issue you mentioned in 6.
>
> Another thing I am wondering about: if my particular object is not
> exposed by RPC, is it worth making it a full blown o.vo or not? I.e, I
> can do the 3 steps over 3 releases just in the object's .py file -- what
> additional value do I get from o.vo?

Unfortunately Zane's right: none. In the case of DB schema compatibility
you can benefit from o.vo if you have something like nova-conductor,
which is upgraded atomically and is able to backport an object to a
previous version while reading data from the newer DB schema. Also,
there shouldn't be any DB accesses in your n-cpu-like services.

o.vo is mostly useful in Cinder to model dictionaries sent over RPC
(like request_spec), which we backport if there are older versions of
services in the deployment. Versioning and well-defining dict blobs is
essential to control compatibility. Also, sending a whole o.vo instead
of a plain id in RPC methods can give you more flexibility in
complicated compatibility situations, but it turns out we haven't yet
hit a case in Cinder where that would be useful.
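
As an illustration, this is roughly how such a dict blob gets modelled
and backported with o.vo (a sketch, the field names are made up):

    from oslo_utils import versionutils
    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class RequestSpec(base.VersionedObject):
        # Version 1.0: Initial version
        # Version 1.1: Added retry_count
        VERSION = '1.1'

        fields = {
            'volume_id': fields.UUIDField(),
            'retry_count': fields.IntegerField(nullable=True),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(RequestSpec, self).obj_make_compatible(primitive,
                                                         target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 1):
                # Older services don't know about retry_count, so drop
                # it before sending the object their way.
                primitive.pop('retry_count', None)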

> I'm also shying away from the idea of allowing for config-driven
> upgrades. The reason is, suppose an operator updates a config, then does
> a rolling restart to go from X to X+1. Then again (and probably again)
> as needed. Everything works great, run a victory lap. A few weeks later,
> some ansible or puppet automation accidentally blows away the config
> value saying that heat-engine should be running at the X+3 version for
> my_object. Ouch. Probably unlikely, but more likely than say
> accidentally deploying a .py file from three releases ago.



Re: [openstack-dev] [Heat] Versioned objects upgrade patterns

2016-05-17 Thread Michał Dulko
On 05/17/2016 06:30 PM, Crag Wolfe wrote:
> Hi all,
>
> I've read that versioned objects are favored for supporting different
> versions between RPC services and to support rolling upgrades. I'm
> looking to follow the pattern for Heat. Basically, it is the classic
> problem where we want to migrate from writing to a column in one table
> to having that column live in a different table. Looking at nova code,
> the version for a given versioned object is a constant in the given
> object/.py file. To properly support rolling upgrades
> where we have older and newer heat-engine processes running
> simultaneously (thus avoiding downtime), we have to write to both the
> old column and the new column. Once all processes have been upgraded,
> we can upgrade again to only write to the new location (but still able
> to read from the old location of course). Following the existing
> pattern, this means the operator has to upgrade 
> twice (it may be possible to increment VERSION in 
> only once, however, the first time).
>
> The drawback of the above is it means cutting two releases (since two
> different .py files). However, I wanted to check if anyone has gone
> with a different approach so only one release is required. One way to
> do that would be by specifying a version (or some other flag) in
> heat.conf. Then, only one .py release would be
> required -- the logic of whether to write to both the old and new
> location (the intermediate step) versus just the new location (the
> final step) would be in .py, dictated by the config
> value. The advantage to this approach is now there is only one .py
> file released, though the operator would still have to make a config
> change and restart heat processes a second time to move from the
> intermediate step to the final step.

Nova has the pattern of being able to do all that in one release by
exercising o.vo, but there are assumptions they are relying on (details
[1]):

  * nova-compute accesses the DB through nova-conductor.
  * nova-conductor gets upgraded atomically.
  * nova-conductor is able to backport an object if nova-compute is
older and doesn't understand it.

Now if you want to have heat-engines running in different versions and
all of them are freely accessing the DB, then that approach won't work
as there's no one who can do a backport.

We've faced same issue in Cinder and developed a way to do such
modifications in three releases for columns that are writable and two
releases for columns that are read-only. This is explained in spec [2]
and devref [3]. And yes, it's a little painful.

If I understood everything correctly, your idea of a two-step upgrade
will work only for read-only columns. Consider this situation:

 1. We have a deployment running h-eng (A and B) in version X.
 2. We apply the X+1 migration moving column `foo` to `bar`.
 3. We upgrade h-eng A to X+1. Now it writes to both `foo` and `bar`.
 4. A updates `foo` and `bar`.
 5. B updates `foo`. Now the correct value is in `foo` only.
 6. A wants to read the value. But is the latest one in `foo` or `bar`?
    There's no way to tell.


I know the Keystone team is trying to solve that with some SQLAlchemy
magic, but I don't think the design is agreed on yet. There was a
presentation at the summit [4] that mentions it (and attempts to clarify
the approaches taken by different projects).

Hopefully this helps a little.

Thanks,
Michal (dulek on freenode)

[1] 
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/

[2] 
http://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html

[3] 
http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations

[4] https://www.youtube.com/watch?v=ivcNI7EHyAY




Re: [openstack-dev] [neutron][ovo] NeutronDbObject concurrency issues

2016-05-16 Thread Michał Dulko
On 05/16/2016 05:49 PM, Ilya Chukhnakov wrote:
> On 16 May 2016, at 18:18, Michał Dulko <michal.du...@intel.com> wrote:
>> In Cinder we're handling similar problems related to race conditions
>> between status check and status change with conditional updates.
>> Basically we're doing "UPDATE table SET status='abc' WHERE id=1 AND
>> status='status_allowing_transition_to_foo';".
> Thanks for the info. I'll certainly look into it. But for now I'm
> planning to reuse
> revision numbers from [1].
>
>> In general the problem you're mentioning is related more to the
>> concurrent DB updates and o.vo never aimed for magically solving that
>> problem. I believe you've had same problem with raw SQLA objects.
> For SQLA we at least have an option to use with_for_update (I've found
> it is being
> used in some places). But with OVO we do not have that level of
> control yet.
>
> [1] https://review.openstack.org/#/c/303966
>

It's not directly related, but this reminds me of tests done by geguileo
[1] some time ago comparing different methods of preventing DB race
conditions in a concurrent environment. Maybe you'll also find them
useful, as you'll probably need to do something like a conditional
update to increment a revision number.

[1] https://github.com/Akrog/test-cinder-atomic-states
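
A conditional update on a revision counter is essentially a
compare-and-swap; a minimal sketch (made-up table, not Neutron code)
looks like:

    import sqlalchemy as sa

    metadata = sa.MetaData()
    ports = sa.Table('ports', metadata,
                     sa.Column('id', sa.String(36), primary_key=True),
                     sa.Column('name', sa.String(255)),
                     sa.Column('revision', sa.Integer, nullable=False))

    def update_with_cas(conn, port_id, old_revision, new_name):
        # Only apply the update if nobody bumped the revision in the
        # meantime; 0 rows matched means we lost the race and the caller
        # should reload the row and retry.
        result = conn.execute(
            ports.update().
            where(sa.and_(ports.c.id == port_id,
                          ports.c.revision == old_revision)).
            values(name=new_name, revision=old_revision + 1))
        return result.rowcount == 1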


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ovo] NeutronDbObject concurrency issues

2016-05-16 Thread Michał Dulko
On 05/12/2016 08:26 PM, Ilya Chukhnakov wrote:
> Hi everyone.
>
> I’ve recently found that straightforward use of NeutronDbObject is prone to
> concurrency-related problems.
>
> I’ve submitted a patch set [3] with some tests to show that without special
> treatment using NeutronDbObject could lead to unexpected results.
>
> Further patch sets will provide acquire_object/acquire_objects contextmanager
> methods to the NeutronDbObject class. These methods are to be used in place of
> get_object/get_objects whenever the user intends to make changes to the 
> object.
> These methods would start an autonested_transaction.
>
> There are (at least) two potential options for the implementation:
>
> 1. Based on the DB locks (e.g. SELECT FOR UPDATE/SqlAlchemy with_for_update).
>
>pros:
>  - the object is guaranteed to not be changed while within the context
>
>cons:
>  - prone to deadlocks ([1] and potentially when locking multiple objects)
>
> 2. Lock-free CAS based on object version counter. Can use SqlAlchemy version
>counter [2] or add our own. If conflicting changes are detected upon 
> exiting
>the context (i.e. version counter held differs from the one in the DB), 
> will
>raise OSLO RetryRequest exception.
>
>pros:
>  - does not require locking
>
>cons:
>  - require an additional field in the models
>
> While opt.2 only prevents the conflicting changes, but does not guarantee that
> the object does not change while within the context, opt.1 may seem
> preferential. But even with opt.1 the user should not expect that the changes
> made to the object while within the context will get to the database as the
> autonested_transaction could fail on flush/commit.
>
> So I’d like to hear others’ opinion on the problem and which of the two
> implementation options would be preferred? Or maybe someone has a better idea.
>
> [1] 
> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
> [2] http://docs.sqlalchemy.org/en/rel_0_9/orm/versioning.html
>
> [3] https://review.openstack.org/#/c/315705/

In Cinder we're handling similar problems related to race conditions
between status checks and status changes with conditional updates.
Basically we're doing "UPDATE table SET status='abc' WHERE id=1 AND
status='status_allowing_transition_to_foo';". You can check out the o.vo
layer of that stuff at [1].

You can provide the fields you don't want modified in expected_values
and retry the DB operation on failure.
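
Roughly, usage on the Cinder side looks like this (simplified; see [1]
for the exact signature and the supported filters):

    from cinder import exception
    from cinder import objects

    def begin_delete(context, volume_id):
        volume = objects.Volume.get_by_id(context, volume_id)
        # Atomically flip the status only if the transition is still
        # allowed; conditional_update() returns a falsy value when the
        # row didn't match the expected_values.
        if not volume.conditional_update(
                {'status': 'deleting'},
                expected_values={'status': 'available'}):
            raise exception.InvalidVolume(reason='volume is not available')
        return volume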

In general the problem you're mentioning is related more to concurrent
DB updates, and o.vo never aimed to magically solve that problem. I
believe you had the same problem with raw SQLA objects.

[1]
https://github.com/openstack/cinder/blob/master/cinder/objects/base.py#L173-L289



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Nominating Micha?? Dulko to Cinder Core

2016-05-10 Thread Michał Dulko
On 05/10/2016 07:46 AM, Sean McGinnis wrote:
> It has been one week and all very positive feedback. I have now added
> Michał to the cinder-core group.
>
> Welcome Michał! Glad to have your expertise in the group.
>
> Sean

Thank you all for mentoring and support! I'll do my best to fulfill the
expectations.

…but I'll start making it happen next week, when I return from
post-summit vacation. :)

> On Tue, May 03, 2016 at 01:16:59PM -0500, Sean McGinnis wrote:
>> Hey everyone,
>>
>> I would like to nominate Michał Dulko to the Cinder core team. Michał's
>> contributions with both code reviews [0] and code contributions [1] have
>> been significant for some time now.
>>
>> His persistence with versioned objects has been instrumental in getting
>> support in the Mitaka release for rolling upgrades.
>>
>> If there are no objections from current cores by next week, I will add
>> Michał to the core group.
>>
>> [0] http://cinderstats-dellstorage.rhcloud.com/cinder-reviewers-90.txt
>> [1]
>> https://review.openstack.org/#/q/owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22++status:merged
>>
>> Thanks!
>>
>> Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] API features discoverability

2016-04-19 Thread Michał Dulko
On 04/18/2016 09:17 AM, Ramakrishna, Deepti wrote:
> Hi Michal,
>
> This seemed like a good idea when I first read it. What more, the server code 
> for extension listing [1] does not do any authorization, so it can be used 
> for any logged in user.
>
> However, I don't know if requiring the admin to manually disable an extension 
> is practical. First, admins can always forget to do that. Second, even if 
> they wanted to, it is not clear how they could disable specific extensions. I 
> assume they would need to edit the cinder.conf file. This file currently 
> lists the set of extensions to load as 
> cinder.api.contrib.standard_extensions. The server code [2] implements this 
> by walking the cinder/api/contrib directory and loading all discovered 
> extensions. How is it possible to subtract just one extension from the 
> "standard extensions"? Also, system capabilities and extensions may not have 
> a 1:1 relationship in general.

Good point - to make that a standard for Cinder API feature discovery we
would still need to make it more admin-friendly. This also implies that
probably no admin actually takes care to set the list of extensions
correctly.

> Having a new extension API (as proposed by me in [3]) for returning the 
> available services/functionality does not have the above problems. It will 
> dynamically check the existence of the cinder-backup service, so it does not 
> need manual action from admin. I have published a BP [4] related to this. Can 
> you please comment on that?

Yes, but I don't think you can get away from setting things manually.
For example, consistency groups (CGs) are supported only for certain
backends. That set of features should also be discoverable. Anyway, I
think the spec makes sense.

> Thanks,
> Deepti
>
> [1] 
> https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L152
> [2] 
> https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L312
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-October/077209.html
> [4] https://review.openstack.org/#/c/306930/

This is unfortunately going against the recent efforts to standardize
how OpenStack works between deployments. In Cinder we have API features
that may or may not be available in different installations. This
certainly isn't addressed by the microversion efforts, which may seem
related. My feeling is that this goes beyond Cinder and hits the more
general topic of API discoverability. I think that we should seek the
API WG's advice on that matter. Do we have other OpenStack projects
suffering from a similar issue?

>
> -Original Message-
> From: Michał Dulko [mailto:michal.du...@intel.com] 
> Sent: Thursday, April 14, 2016 7:06 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Cinder] API features discoverability
>
> Hi,
>
> When looking at bug [1] I've thought that we could simply use 
> /v2//extensions to signal features available in the deployment - 
> in this case backups, as these are implemented as API extension too. Cloud 
> admin can disable an extension if his cloud doesn't support a particular 
> feature and this is easily discoverable using aforementioned call. Looks like 
> that solution weren't proposed when the bug was initially raised.
>
> Now the problem is that we're actually planning to move all API extensions to 
> the core API. Do we plan to keep this API for features discovery? How to 
> approach API compatibility in this case if we want to change it? Do we have a 
> plan for that?
>
> We could keep this extensions API controlled from the cinder.conf, regardless 
> of the fact that we've moved everything to the core, but that doesn't seem 
> right (API will still be functional, even if administrator disables it in 
> configuration, am I right?)
>
> Anyone have thoughts on that?
>
> Thanks,
> Michal
>
> [1] https://bugs.launchpad.net/cinder/+bug/1334856
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -The lifecycle of deploying openstack on production environment

2016-04-15 Thread Michał Dulko
You should probably try the openstack-operators mailing list next time,
as openstack-dev is more developer-oriented. But anyway:

On 04/15/2016 08:53 AM, Kenny Ji-work wrote:
> Hi all,
>
> We wanner to deploy openstack to the production environment, so what's
> the recommended way to complete it? By using *puppet-openstack *or any
> else tool?

According to the latest OpenStack User Survey [1], the most popular
tools are either Puppet or Ansible. There are OpenStack projects
maintaining manifests or playbooks for both of them (and more).

[1]
https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf

> *Secondly*, we want to add *custom codes* into the openstack's. So the
> problem is that if new version of openstack is released, the efforts
> taken to port the codes to the new version must be put into action. In
> one word, is there some convenient way to realize it?

There are efforts to standardize OpenStack between deployments, so
convenient ways of injecting your own code are currently being
deprecated and removed. See nova hooks for example [2].

[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/087782.html

> *Thirdly*, if we want to *upgrade *our openstack version in the
> production environment, how can we do it easier? Thank you for answering!

Then you may want to look at the Kolla project [3], as it deploys
OpenStack in Docker containers and provides an automated procedure
allowing upgrades between releases with minimal downtime.

[3] https://wiki.openstack.org/wiki/Kolla

>
> Sincerely,
> Kenny Ji
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] API features discoverability

2016-04-14 Thread Michał Dulko
Hi,

When looking at bug [1] I thought that we could simply use
/v2//extensions to signal the features available in the
deployment - in this case backups, as these are implemented as an API
extension too. A cloud admin can disable an extension if their cloud
doesn't support a particular feature, and this is easily discoverable
using the aforementioned call. It looks like that solution wasn't
proposed when the bug was initially raised.
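
For reference, probing for the backups feature from a client could look
more or less like this (assuming the usual Cinder v2 endpoint from the
service catalog and that the extension alias is 'backups'):

    import requests

    def backups_enabled(cinder_endpoint, token):
        # GET <endpoint>/extensions lists the API extensions the cloud
        # has loaded; a missing alias means the admin disabled it.
        resp = requests.get('%s/extensions' % cinder_endpoint,
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        aliases = [ext['alias'] for ext in resp.json()['extensions']]
        return 'backups' in aliases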

Now the problem is that we're actually planning to move all API
extensions to the core API. Do we plan to keep this API for feature
discovery? How do we approach API compatibility in this case if we want
to change it? Do we have a plan for that?

We could keep this extensions API controlled from cinder.conf regardless
of the fact that we've moved everything to the core, but that doesn't
seem right (the API will still be functional even if the administrator
disables it in the configuration, am I right?)

Anyone have thoughts on that?

Thanks,
Michal

[1] https://bugs.launchpad.net/cinder/+bug/1334856

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-13 Thread Michał Dulko
On 04/13/2016 11:16 AM, Thierry Carrez wrote:
> Fox, Kevin M wrote:
>> I think my head just exploded. :)
>>
>> That idea's similar to neutron sfc stuff, where you just say what
>> needs to connect to what, and it figures out the plumbing.
>>
>> Ideally, it would map somehow to heat & docker COE & neutron sfc to
>> produce a final set of deployment scripts and then just runs it
>> through the meat grinder. :)
>>
>> It would be awesome to use. It may be very difficult to implement.
>>
>> If you ignore the non container use case, I think it might be fairly
>> easily mappable to all three COE's though.
>
> This feels like Heat with a more readable descriptive language. I
> don't really like this approach, because you end up with the lowest
> common denominator between COE's functionality. They are all
> different. And they are at the peak of the differentiation phase. 

Are we able to define that lowest common denominator at this stage?
Maybe that subset of features is still valuable?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Let's do presentations/sessions on Mitaka's new complex features in Design Summit

2016-03-15 Thread Michał Dulko
On 03/14/2016 09:52 PM, Gorka Eguileor wrote:
> Hi,
>
> As you all probably know, during this cycle we have introduced quite a
> big number of changes in cinder that will have a great impact in the
> development of the new functionality as well as changes to existing ones
> moving forward from an implementation perspective.
>
> These changes to the cinder code include, but are not limited to,
> microversions, rolling upgrades, and conditional DB update functionality
> to remove API races, and while the latter has a good number of examples
> already merged and more patches under review, the other 2 have just been
> introduced and there are no patches in cinder that can serve as easy
> reference on how to use them.
>
> As cinder developers we will all have to take these changes into account
> in our new patches, but it is hard to do so when one doesn't have an
> in-depth knowledge of them, and while we all probably know quite a bit
> about them, it will take some time to get familiar enough to be aware of
> *all* the implications of the changes made by newer patches.
>
> And it's for this reason that I would like to suggest that during this
> summit's cinder design sessions we take the time to go through the
> changes giving not only an example of how they should be used in a
> patch, but also the do's, dont's and gotchas.
>
> A possible format for these explanations could be a presentation -around
> 30 minutes- by the people that were involved in the development,
> followed by Q
>
> I would have expected to see some of these in the "Upstream Dev" track,
> but unfortunately I don't (maybe I'm just missing them with all the cool
> title names).  And maybe these talks are not relevant for that track,
> being so specific and only relevant to cinder developers and all.
>
> I believe these presentations would help the cinder team increase the
> adoption speed of these features while reducing the learning curve and
> the number of bugs introduced in the code caused by gaps in our
> knowledge and misinterpretations of the new functionality.
>
> I would take lead on the conditional DB updates functionality, and I
> would have no problem doing the Rolling upgrades presentation as well.
> But I believe there are people more qualified and more deserving of
> doing that one; though I offer my help if they want it.
>
> I have added those 3 topics to the Etherpad with Newton Cinder Design
> Summit Ideas [1] so people can volunteer and express their ideas in
> there.
>
> Cheers,
> Gorka.

I can certainly do one on rolling upgrades from a developer's
perspective. I think I've got this knowledge summed up in a patch to
enhance the devref [1], but of course a presentation and Q&A would be
beneficial.

And by the way - I think that for all the stuff that's worthy of such a
presentation, we should have a detailed devref page.

[1] https://review.openstack.org/#/c/279186/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Proposal: changes to our current testing process

2016-03-02 Thread Michał Dulko
On 03/02/2016 04:11 PM, Gorka Eguileor wrote:
> On 02/03, Ivan Kolodyazhny wrote:
>> Eric,
>>
>> There are Gorka's patches [10] to remove API Races
>>
>>
>> [10]
>> https://review.openstack.org/#/q/project:openstack/cinder+branch:master+topic:fix/api-races-simplified
>>
> I looked at Rally a long time ago so apologies if I'm totally off base
> here, but it looked like it was a performance evaluation tool, which
> means that it probably won't help to check for API Races (at least I
> didn't see how when I looked).
>
> Many of the API races only happen if you simultaneously try the same
> operation multiple times against the same resource or if there are
> different operations that are trying to operate on the same resource.
>
> On the first case if Rally allowed it we could test it because we know
> only 1 of the operations should succeed, but on the second case when we
> are talking about preventing races from different operations there is no
> way to know what the result should be, since the order in which those
> operations are executed on each test run will determine which one will
> fail and which one will succeed.
>
> I'm not trying to go against the general idea of adding rally tests, I
> just think that they won't help in the case of the API races.

You're probably right - Rally would need to cache API responses to
parallel runs, predict the result of the accepted requests (those which
haven't received VolumeIsBusy) and then verify it. In case of API race
conditions things explode inside the stack, not at the API response
level. The issue is that two requests that should never be accepted
together both get a positive API response.

I cannot say it's impossible to implement a situation like that as a
Rally resource, but it definitely seems non-trivial to verify whether
the result is correct.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Cinder - Wishlist

2016-03-01 Thread Michał Dulko
On 03/01/2016 11:31 AM, mohammed.asha...@wipro.com wrote:
>
> Hi,
>
>  
>
> Would like to know if there’s  feature wish list/enhancement request
> for Open stack Cinder  I.e. a list of features that we would like to
> add to Cinder Block Storage ; but hasn’t been taken up for development
> yet.
>
> We have couple  developers who are interested to work on OpenStack
> Cinder... Hence would like to take a look at those wish list…
>
>  
>
> Thanks ,
>
> Ashraf
>
>

Hi!

At the Cinder Midcycle Meetup in January we created a list of
developers' answers to "if you had time, what would you want to sort out
in Cinder?". The list can be found at the bottom of the etherpad [1]. It
may seem a little vague for someone not into Cinder's internals, so I
can provide some highlights:

* Quotas - Cinder has issues with quota management. Right now there are
efforts to sort this out.
* Notifications - we do not version or standardize notifications sent
over RPC. That's a problem if someone relies on them.
* A/A HA - there are ongoing efforts to make the cinder-volume service
scalable in an active/active manner.
* Cinder/Nova API - the way Nova talks with Cinder needs revisiting, as
the limitations of the current design are blocking us.
* State management - the way Cinder resource states are handled isn't
strongly defined. We may need some kind of state machine for that? (this
one is controversial ;)).
* Objectification - we've started converting Cinder to use
oslo.versionedobjects back in the Kilo cycle. This still needs to be finished.
* Adding CI that tests rolling upgrades - starting from Mitaka we have a
tech preview of upgrades without downtime. To get this feature out of the
experimental stage we need a CI job that will test it in the gate.
* Tempest testing - we should increase our integration test coverage.

If you're interested in any of these items feel free to ask me on IRC
(dulek on freenode) so I can point you to the right people for details.

Apart from that you can look through the blueprint list [2]. Note that a
lot of items there may be outdated and may not fit well into the current
state of Cinder.

[1] https://etherpad.openstack.org/p/mitaka-cinder-midcycle-day-3
[2] https://blueprints.launchpad.net/cinder

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] solving API case sensitivity issues

2016-02-25 Thread Michał Dulko
On 02/24/2016 01:48 PM, Sean Dague wrote:
> We have a specific bug around aggregrate metadata setting in Nova which
> exposes a larger issue with our mysql schema.
> https://bugs.launchpad.net/nova/+bug/1538011
>
> On mysql the following will explode with a 500:
>
>> nova aggregate-create agg1
>> nova aggregate-set-metadata agg1 abc=1
>> nova aggregate-set-metadata agg1 ABC=2
> mysql (by default) treats abc == ABC. However the python code does not.
>
> We have a couple of options:
>
> 1) make the API explicitly case fold
>
> 2) update the mysql DB to use latin_bin collation for these columns
>
> 3) make this a 400 error because duplicates were found
>
>
> Options 1 & 2 make all OpenStack environments consistent regardless of
> backend.
>
> Option 2 is potentially expensive TABLE alter.
>
> Option 3 gets rid of the 500 error, however at the risk that the
> behavior for this API is different depending on DB backend. Which is
> less than ideal.
>
>
> My preference is slightly towards #1. It's taken a long time for someone
> to report this issue, so I think it's an edge case, and people weren't
> think about this being case sensitive. It has the risk of impacting
> someone on an odd db platform that has been using that feature.
>
> There are going to be a few other APIs to clean up in a similar way. I
> don't think this comes in under a microversion because of how deep in
> the db api layer this is, and it's just not viable to keep both paths.
>
>   -Sean

We've faced similar issues in Cinder, and as a solution we've moved the
filtering to Python code - see for example [1] or [2]. But no, we didn't
have a UNIQUE constraint on the DB column in these cases, only on IDs.

[1] https://review.openstack.org/225024
[2] https://review.openstack.org/#/c/274589/12/cinder/db/sqlalchemy/api.py
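
The Python-side filtering mentioned above amounts to something like this
(a simplified illustration, not the actual patches):

    def filter_by_exact_name(rows, name):
        # MySQL's default collation compares case-insensitively, so the
        # query may return 'ABC' when we asked for 'abc'; do the final,
        # case-sensitive comparison in Python.
        return [r for r in rows if r['name'] == name]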

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-25 Thread Michał Dulko
On 02/25/2016 09:13 AM, Qiming Teng wrote:
> Hi, All,
>
> After reading through all the +1's and -1's, we realized how difficult
> it is to come up with a proposal that makes everyone happy. When we are
> discussing this proposal with some other contributors, we came up with a
> proposal which is a little bit different. This idea could be very
> impractical, very naive, given that we don't know much about the huge
> efforts behind the scheduling, planning, coordination ... etc etc. So,
> please treat this as a random thought.
>
> Maybe we can still have the Summit and the Design Summit colocated, but
> we can avoid the overlap that has been the source of many troubles. The
> idea is to have both events scheduled by the end of a release cycle. For
> example:
>
> Week 1:
>   Wednesday-Friday: 3 days Summit.
> * Primarily an event for marketing, sales, CTOs, architects,
>   operators, journalists, ...
> * Contributors can decide whether they want to attend this.
>   Saturday-Sunday:
> * Social activities: contributors meet-up, hang outs ...
>
> Week 2:
>   Monday-Wednesday: 3 days Design Summit
> * Primarily an event for developers.
> * Operators can hold meetups during these days, or join project
>   design summits.
>
> If you need to attend both events, you don't need two trips. Scheduling
> both events by the end of a release cycle can help gather more
> meaningful feedbacks, experiences or lessons from previous releases and
> ensure a better plan for the coming release.
>
> If you want to attend just the main Summit or only the Design Summit,
> you can plan your trip accordingly.
>
> Thoughts?
>
>  - Qiming

From my perspective this idea is more appealing. If someone doesn't want
to join the main conference they don't need to, and we avoid the
distractions coming from the two events intersecting.

What isn't solved here is placing the developer conference in "cheaper"
cities, but I have a feeling that even if hotels cost less, it will be
harder to travel to less popular locations.

Another problem is the timing of the main conference, which in this
proposal won't happen in the middle of the cycle. I wonder, however, if
it really makes a difference here and whether companies want to present
products based on the latest release on a 2.5-month timeline.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-brick][nova][cinder] os-brick/privsep change is done and awaiting your review

2016-02-24 Thread Michał Dulko
On 02/24/2016 04:51 AM, Angus Lees wrote:
> Re: https://review.openstack.org/#/c/277224
>
> Most of the various required changes have flushed out by now, and this
> change now passes the dsvm-full integration tests(*).
>
> (*) well, the experimental job anyway.  It still relies on a
> merged-but-not-yet-released change in oslo.privsep so gate + 3rd party
> won't pass until that happens.
>
> What?
> This change replaces os-brick's use of rootwrap with a quick+dirty
> privsep-based drop-in replacement.  Privsep doesn't actually provide
> much security isolation when used in this way, but it *does* now run
> commands with CAP_SYS_ADMIN (still uid=0/gid=0) rather than full root
> superpowers.  The big win from a practical point of view is that it
> also means os-brick's rootwrap filters file is essentially deleted and
> no longer has to be manually merged with downstream projects.
>
> Code changes required in nova/cinder:
> There is one change each to nova+cinder to add the relevant
> privsep-helper command to rootwrap filters, and a devstack change to
> add a nova.conf/cinder.conf setting.  That's it - this is otherwise a
> backwards/forwards compatible change for nova+cinder.
>
> Deployment changes required in nova/cinder:
> A new "privsep_rootwrap.helper_command" needs to be defined in
> nova/cinder.conf (default is something sensible using sudo), and
> rootwrap filters or sudoers updated depending on the exact command
> chosen.  Be aware that any commands will now be run with CAP_SYS_ADMIN
> (only), and if that's insufficient for your hardware/drivers it can be
> tweaked with other oslo_config options.
>
> Risks:
> The end-result is still just running the same commands as before, via
> a different path - so there's not a lot of adventurousness here.  The
> big behavioural change is CAP_SYS_ADMIN, and (as highlighted above)
> it's conceivable that the driver for some exotic os-brick/cinder
> hardware out there wants something more than that.
>
> Work remaining:
> - global-requirements change needed (for os-brick) once the latest
> oslo.privsep release is made
> - cinder/nova/devstack changes need to be merged
> - after the above, the os-brick gate integration jobs will be able to
> pass, and it can be merged
> - If we want to *force* the new version of os-brick, we then need an
> appropriate global-requirements os-brick bump
> - Documentation, release notes, etc
>
> I'll continue chewing through those remaining work items, but
> essentially this is now in your combined hands to prioritise for
> mitaka as you deem appropriate.
>
>  - Gus
>

It seems to me that the risks outweigh the advantages. Moreover, the
final release for libraries like os-brick should happen in just 2 days,
and I don't believe we have time to get every part of the job merged
given how long the TODO list is.

Just my $0.02.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Michał Dulko
On 02/22/2016 04:49 PM, Daniel P. Berrange wrote:
> On Mon, Feb 22, 2016 at 04:14:06PM +0100, Thierry Carrez wrote:
>> The idea would be to split the events. The first event would be for upstream
>> technical contributors to OpenStack. It would be held in a simpler,
>> scaled-back setting that would let all OpenStack project teams meet in
>> separate rooms, but in a co-located event that would make it easy to have
>> ad-hoc cross-project discussions. It would happen closer to the centers of
>> mass of contributors, in less-expensive locations.
> The idea that we can choose less expensive locations is great, but I'm a
> little wary of focusing too much on "centers of mass of contributors", as
> it can easily become an excuse to have it in roughly the same places each
> time. As a non-USA based contributor, I really value the fact the the
> summits rotate around different regions instead of spending all the time
> in the USA as was the case earlier in openstcck days. Minimizing travel
> costs is no doubt a welcome aim for companies' budgets, but it should not
> be allowed to dominate to such a large extent that we miss representation
> of different regions. ie if we never went back to Asia because the it is
> cheaper for the /current/ majority of contributors to go to the US, we'll
> make it harder to attract new contributors from those regions we avoid on
> cost ground. The "center of mass of contributors" could become a self-
> fullfilling prophecy.
>
> IOW, I'm onboard with choosing less expensive locations, but would like
> to see us still make the effort to reach out across different regions
> for the events, and not become too US focused once again.

As an EU-based contributor I have similar concerns. The first OpenStack
Summit I was able to attend was in Paris, and the fact that it was close
let us send almost the entire team of contributors. That helped us in
later funding negotiations, and we were able to maintain a constant
number of contributors sent to Summits that were far more expensive for
us. I don't believe that would ever have been possible if all the
conferences were organized in the US.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] the spec of Add-ServiceGroup-using-Tooz

2016-02-01 Thread Michał Dulko
On 01/31/2016 10:43 PM, dong.wenj...@zte.com.cn wrote:
>
> Hi all,
>
> I proposed the spec of Add-ServiceGroup-using-Tooz in Ciner[1].
>
> Project doctor[2] in OPNFV community is its upstream.
> The goal of this project is to build fault management and
> maintenance framework for high availability of Network Services on top
> of virtualized infrastructure.
> The key feature is immediate notification of unavailability of
> virtualized resources from VIM, to process recovery of VNFs on them.
>
> But in Cinder, the service reports it's status with a delay. So I
> proposed adding Tooz as cinder ServiceGroup driver to report the
> service states without a dely.
>
> I'm a new in Cinder. :) So I wants to invite some Cinder exports
> to discuss the spec in the doctor's weekly meeting at 14:00 on Tuesday
> this week. Is anyone interested in it? Thanks~
>
> [1]https://review.openstack.org/#/c/258968/
> [2]https://wiki.opnfv.org/doctor
>

So basically doctor wants to know the state of Cinder services as soon
as possible, right? Why use Cinder to report on its own services then?
What if the cinder-api service is down?

If your use case is monitoring of service state, then what prevents you
from using some external monitoring tool for that purpose? Or even a
combination of an external monitoring tool and the Cinder service-group
API?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][cinder][horizon] Projects acting as a domain at the top of the project hierarchy

2016-02-01 Thread Michał Dulko
On 01/30/2016 07:02 PM, Henry Nash wrote:
> Hi
>
> One of the things the keystone team was planning to merge ahead of 
> milestone-3 of Mitaka, was “projects acting as a domain”. Up until now, 
> domains in keystone have been stored totally separately from projects, even 
> though all projects must be owned by a domain (even tenants created via the 
> keystone v2 APIs will be owned by a domain, in this case the ‘default’ 
> domain). All projects in a project hierarchy are always owned by the same 
> domain. Keystone supports a number of duplicate concepts (e.g. domain 
> assignments, domain tokens) similar to their project equivalents.
>
> 
>
> I’ve got a couple of questions about the impact of the above:
>
> 1) I already know that if we do exactly as described above, the cinder gets 
> confused with how it does quotas today - since suddenly there is a new parent 
> to what it thought was a top level project (and the permission rules it 
> encodes requires the caller to be cloud admin, or admin of the root project 
> of a hierarchy).

These problems exist because our nested quotas code is really buggy
right now. Once Keystone merges a fix allowing non-admin users to fetch
their own project hierarchy, we should be able to fix it.

> 2) I’m not sure of the state of nova quotas - and whether it would suffer a 
> similar problem?

As far as I know, Nova hasn't merged the nested quotas code and will not
do so in Mitaka due to feature freeze.

> 3) Will Horizon get confused by this at all?
>
> Depending on the answers to the above, we can go in a couple of directions. 
> The cinder issues looks easy to fix (having had a quick look at the code) - 
> and if that was the only issue, then that may be fine. If we think there may 
> be problems in multiple services, we could, for Mitaka, still create the 
> projects acting as domains, but not set the parent_id of the current top 
> level projects to point at the new project acting as a domain - that way 
> those projects acting as domains remain isolated from the hierarchy for now 
> (and essentially invisible to any calling service). Then as part of Newton we 
> can provide patches to those services that need changing, and then wire up 
> the projects acting as a domain to their children.
>
> Interested in feedback to the questions above.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Testing Cinder upgrades - c-bak upgrade

2016-01-29 Thread Michał Dulko
On 01/20/2016 09:11 PM, Li, Xiaoyan wrote:
> @ DuncanT and @dule: 
>
> I noticed from IRC log that you are discussing about c-bak upgrade, and I am 
> working on this, please see following message. Hope I don't miss anything.
>
> As you know, currently c-bak and c-vol are in same nodes. c-bak depends on 
> c-vol service. 
>
> But it is not necessary that all c-vols needs to be upgraded before c-backs.
>
> The sequences can be random. As described in the patch 
> https://review.openstack.org/#/c/269412/,
> when c-api decides which c-bak service to create/restore, it checks the 
> version of c-vol. If c-vol is new version, find a c-bak in new version.
> If c-vol is in old version, find a c-bak in old version.
>
> Let's us think a real case. Customers upgrade c-api, c-sch, and start to 
> upgrade c-vol and c-bak.
> There are two c-vol services c-vol1 and c-vol2, and two c-bak services c-bak1 
> and c-bak2.
> There are four typical upgrade sequences as following. 
> Meanwhile, please notice that c-vol and c-bak are in same nodes in Liberty. 
> So during upgrades, if c-vol went down to upgrade, c-bak is also down. 

It's not exactly like that. You can upgrade the services on a single
node one by one if you're, for example, running them in containers.

> 1. c-vol1->c-bak1->c-vol2->c-bak2
> The only insufficiency is when c-vol1 upgrades, and other c-bak services 
> haven't upgraded, the request to create/restore backups from/to volumes in 
> c-vol1 will fail with reason "no valid c-bak service
> found". It is acceptable, as it is similar to scenario in Liberty: c-vol 
> active and c-bak fails.
>
> 2. c-vol1->c-vol2->c-bak1->c-bak2
> Before c-bak1 upgrades, no back request can be completed as no active c-bak 
> services. This is reasonable.
>
> 3. c-bak1->c-vol1->c-bak2->c-vol2:
> The issue is when c-bak2 upgrades, the request to create/restore backups 
> from/to volumes in c-vol2 will fail with reason c-vol not active. This is 
> consistent with behaviors in Liberty.
>
> 4. c-bak1->c-bak2->c-vol1->c-vol2: 
> Before c-vol1 upgrades, no back request can be completed as c-vol services 
> not active This is reasonable.

The resolution on this matter from the Cinder mid-cycle is that we're
fine as long as we fail safely in case of an upgrade conducted in an
improper order. And it seems we can implement that in a simple way by
raising an exception from volume.rpcapi when c-vol is pinned to a
version that is too old. This means that the scalable backup patches
aren't blocked by this issue.
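
The check itself should be a few lines in the rpcapi, something along
these lines (just a sketch - the method name, exception name and version
number are made up for illustration, the real patch may differ):

    # cinder/volume/rpcapi.py (sketch of a method on the VolumeAPI class)
    def get_backup_device(self, ctxt, backup, volume):
        # If c-vol is still pinned to a Liberty-level version it can't
        # take part in the new scalable backup flow - fail fast instead
        # of sending an RPC it won't understand.
        if not self.client.can_send_version('2.0'):
            raise exception.ServiceTooOld(
                reason='c-vol is too old for scalable backups')
        cctxt = self.client.prepare(version='2.0')
        return cctxt.call(ctxt, 'get_backup_device',
                          backup=backup, volume=volume)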


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] 40 hour time commitment requested from core reviewers before Feb 9th/10th midcycle

2016-01-26 Thread Michał Dulko
On 01/23/2016 02:32 AM, Steven Dake (stdake) wrote:
> Hello,
>
> In our weekly IRC meeting, all developers and others in attendance
> were in unanimous agreement that upgrades are the #1 priority for
> Kolla to solve [1].  inc0 has developed both a simple database
> migration implementation  around keystone [2] as well as a more
> complex playbook around nova [3].  Keystone has merged – nova looks
> really solid but needs a little more attention.
>
> We have 10 core reviewers on the Kolla core review team.  I would ask
> that each core reviewer choose 1 or 2 services to solve upgrades on
> and spend 40 hours out of the next two weeks prior to our midcycle
> implementing the work items they have committed themselves to.  If you
> are not a core reviewer but feel you want to get involved in this
> work, please assign yourself to a maximum of one service and commit as
> described below.

In the case of Cinder we're still in the process of deciding how
*exactly* everything will work in Mitaka, and this will be a discussion
point at the current mid-cycle meetup. There's a PoC based on Nova's
latest approach [1], which works, but we don't know if this will be the
final solution. If you're following the milestones you will probably
need to merge pieces utilizing the latest Cinder capabilities after M-3.
Current Cinder master will fail in a spectacular way when trying to run
Liberty and Mitaka services together (it doesn't with patches [1] in
place :)).

Anyone taking the Cinder piece - feel free to contact me with any questions.

Thanks,
Michal Dulko (IRC: dulek)

[1]
https://review.openstack.org/#/q/topic:bp/rpc-object-compatibility+owner:%22Michal+Dulko+%253Cmichal.dulko%2540intel.com%253E%22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] 40 hour time commitment requested from core reviewers before Feb 9th/10th midcycle

2016-01-26 Thread Michał Dulko
On 01/26/2016 10:11 AM, Michał Jastrzębski wrote:
> Well we still can perform upgrades with some API downtime, so it's not
> like we're bound to rolling upgrades on architectural level. I'll talk
> to you after midcycle and we'll find a good way to tackle it.

Sure, this will work. projects.yaml says that Kolla is released
independently, so I think you can tweak the process after the Mitaka
release, when all the Cinder pieces are in place.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TaskFlow] TaskFlow persistence

2016-01-26 Thread Michał Dulko
On 01/26/2016 10:23 AM, pn kk wrote:
> Hi,
>
> I use taskflow for job management and now trying to persist the state
> of flows/tasks in mysql to recover incase of process crashes.
>
> I could see the state and the task results stored in the database.
>
> Now I am looking for some way to store the input parameters of the tasks.
>
> Please share your inputs to achieve this.
>
> -Thanks
>
I played with that some time ago, and if I recall correctly the input
parameters should be available in the flow's storage, which means they
are also saved to the DB. Take a look at the resume_workflows method in
my old PoC [1] (hopefully TaskFlow hasn't changed much since then).

[1] https://review.openstack.org/#/c/152200/4/cinder/scheduler/manager.py
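
In case it helps, here's a minimal example of seeing the injected
parameters land in storage (the connection string is a placeholder and a
real MySQL backend needs its tables created first, hence the upgrade()
call):

    import contextlib

    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow
    from taskflow.persistence import backends

    class EchoTask(task.Task):
        def execute(self, message):
            return message

    backend = backends.fetch({'connection': 'mysql://user:secret@host/taskflow'})
    with contextlib.closing(backend.get_connection()) as conn:
        conn.upgrade()  # create the taskflow tables if they aren't there yet

    flow = linear_flow.Flow('demo').add(EchoTask(provides='out'))

    # Whatever goes into 'store' ends up in the flow's storage, so it is
    # persisted alongside task results and available again after resuming.
    engine = engines.load(flow, backend=backend, store={'message': 'hi'})
    engine.run()
    print(engine.storage.fetch_all())  # contains both 'message' and 'out'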

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Core/minimum features requirements for Cinder Driver

2016-01-22 Thread Michał Dulko
On 01/22/2016 01:17 PM, vishal yadav wrote:
> Hey Guys,
>
> I would like to know the requirement for core/minimum features that
> any new Cinder driver should implement. Provided lot many features
> have been added in Cinder since Icehouse release, Is the core feature
> requirements mentioned in [1] still valid for recent and upcoming
> releases or there is addition of more minimum/core requirements for
> the driver.
>
> [1] http://docs.openstack.org/developer/cinder/devref/drivers.html
>

At the Tokyo Summit the Cinder team decided not to update the list of
minimum required features. See [1] for details (look for the "New
minimum required features" topic).

[1] https://etherpad.openstack.org/p/mitaka-cinder-contributor-meetup

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [grenade] [upgrades] [cinder] Testing Cinder upgrades

2016-01-20 Thread Michał Dulko
Hi,

In Mitaka the Cinder team is implementing rolling upgrade capabilities.
I want to get some feedback on the possibilities of implementing a
partial or multinode grenade check for our purposes.

Basically Cinder consists of 4 services calling each other over RPC in
the following fashion:

        +------ c-api ------+
        |         |         |
        v         v         v
      c-sch <-> c-vol <---- c-bak

The order of upgrades we're forced to use (at least in this release) is
c-api->c-sch->c-vol->c-bak. I wonder how we can test the interoperability
of services running different versions in that model. I have three ideas:

1. One idea would be to have two nodes - one with a full Cinder
deployment on latest master and a second one running c-sch, c-vol and
c-bak on the latest stable version. That way we would test the
interoperability of the service versions. A problem I see is that tests
would be strongly nondeterministic, as a particular test result would
depend on which RPC service got the request. That would make debugging
CI failures harder and may let breaking patches slip in.

2. We may simply run a controller<->compute multinode setup, similar to
how Nova runs its multinode grenade job. The controller would run c-api
and c-sch, and the compute node c-vol and c-bak. The disadvantage of this
model is that we wouldn't test c-api->c-sch compatibility.

3. Upgrade the services one at a time. That would mean testing master
c-api with the rest of the services on stable, then upgrading c-sch,
retesting, upgrading c-vol, retesting, upgrading c-bak, retesting. That
way we would test all the combinations, but such a run would take a lot
of time. Moreover, if I recall correctly such an approach isn't possible
in the current state of Grenade.

Comments and feedback are very welcome.

Thanks,
Michal (IRC: dulek)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-18 Thread Michał Dulko
On 01/18/2016 03:31 PM, Duncan Thomas wrote:
> On 5 January 2016 at 18:55, Ryan Rossiter  > wrote:
>
> This is definitely good to know. Are you planning on setting up
> something off to the side of o.vo within that holds a dictionary
> of all values for a release? Something like:
>
> {‘liberty’: {‘volume’: ‘1.3’, …},
>  ‘mitaka’: {‘volume’: ‘1.8’, …}, }
>
> With the possibility of replacing the release name with the RPC
> version or some other version placeholder. Playing devil’s
> advocate, how does this work out if I want to be continuously
> deploying Cinder from HEAD?
>
>
> As far as I know (the design has iterated a bit, but I think I'm still
> right), there is no need for such a table - before you start a rolling
> upgrade, you call the 'pin now' api, and all of the services write
> their max supported version to the DB. Once the DB is written to by
> all services, the running services can then read that table and cache
> the max value. Any new services bought up will also build a max volume
> cache on startup. Once everything is upgraded, you can call 'pin now'
> again and the services can figure out a new (hopefully higher) version
> limit.
>

You're right, that was the initial design we agreed on in Liberty.
Personally I'm now more in favor of how it's implemented in Nova [1].
Basically, on service startup the RPC API is pinned to the lowest
version among all the managers running in the environment. I've prepared
PoC patches [2][3] and successfully executed multiple runs of Tempest on
a deployment with Mitaka's c-api and mixed Liberty and Mitaka c-sch,
c-vol and c-bak (two of each service).

I think we should discuss this in detail at the mid-cycle meetup next week.

[1] https://blueprints.launchpad.net/nova/+spec/service-version-behavior
[2] https://review.openstack.org/#/c/268025/
[3] https://review.openstack.org/#/c/268026/
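
The gist of determining the pin is something like the following (a
sketch - the field and helper names are approximate, the PoC patches
linked above have the real code):

    from cinder import objects

    def determine_rpc_version_cap(context, topic='cinder-volume'):
        # Pin to the lowest RPC version reported by the running managers,
        # so a Mitaka c-api keeps speaking a Liberty-compatible dialect
        # until the last old c-vol is upgraded.
        services = objects.ServiceList.get_all_by_topic(context, topic)
        versions = [s.rpc_current_version for s in services
                    if s.rpc_current_version]
        return min(versions) if versions else None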

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Should we fix XML request issues?

2016-01-15 Thread Michał Dulko
On 01/15/2016 07:14 AM, Jethani, Ravishekar wrote:
> Hi Devs,
>
> I have come across a few 500 response issues while sending request
> body as XML to cinder service. For example:
>
> 
>
> I can see that XML support has been marked as depricated and will be
> removed in 'N' release. So is it still worth trying fixing these
> issues during Mitaka time frame?
>
> Thanks.
> Ravi Jethani

One of the reasons the XML API was deprecated is the fact that it wasn't
getting much CI testing, and as Doug Hellmann once mentioned - "if
something isn't tested then it isn't working".

I'm okay with fixing it (if someone really needs that feature), but we
don't have any means to prevent further regressions, so it may not be
worth it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][neutron][cinder][swift][ceilometer][nova][keystone][sahara][glance][neutron-lbaas][imm] stylistic changes to code, how do we handle them?

2016-01-12 Thread Michał Dulko
On 01/12/2016 03:02 PM, Chris Dent wrote:
> On Tue, 12 Jan 2016, Amrith Kumar wrote:
>
>> if var > 0:
>> ... something ...
>>
>> To
>>
>> if var:
>> ... something ...
>
> I may be missing something but the above is not a stylistic change
> if var can ever be negative. In one of the ceilometer changes[1] for
> example, this change will change the flow of the code. In this
> particular example if some caller do _do_test_iter_images sends
> page_size=-1 bad things will happen. Since it is test code the scope
> of the damage is limited, but I prefer the more explicit > 0.
>
> I've not checked all the reviews but if it is showing up in one
> place seems like it could in others.
>
> [1]
> https://review.openstack.org/#/c/266211/1/ceilometer/tests/unit/image/test_glance.py
>

Same thing for the Cinder change - it's failing multiple unit tests
because it changes the logic of the statement.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] oslo_config.PortOp is undefined

2016-01-05 Thread Michał Dulko
PortOpt definitely exists in oslo.config [1]. Try executing "pip install
-U -r /opt/stack/cinder/requirements.txt".

[1]
https://github.com/openstack/oslo.config/blob/f5e2fab3ae5af5bd47fe3526a73f13fbaa27c1f0/oslo_config/cfg.py#L1180-L1216
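
A quick way to check what the installed oslo.config actually provides:

    from oslo_config import cfg
    # An AttributeError here means the installed oslo.config predates the
    # PortOpt addition and needs the upgrade mentioned above.
    print(cfg.PortOpt)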

On 01/05/2016 12:33 PM, Pradip Mukhopadhyay wrote:
> I did not do anything specific.
>
> Oslo Config has IntOpt, not PortOpt.
>
> Any clue how can I upgrade oslo.config?
>
>
>
> --pradip
>
>
>
> On Tue, Jan 5, 2016 at 4:43 PM, Julien Danjou  > wrote:
>
> On Tue, Jan 05 2016, Pradip Mukhopadhyay wrote:
>
> Upgrade oslo.config?
>
> > Hello,
> >
> >
> > I have a devstack created on 12/22/15. Just seeing that (after a
> vacation)
> > it stops working.
> >
> > Tried to restart the cinder services, getting the error:
> >
> > stack@openstack4:~/devstack$ /usr/local/bin/cinder-api --config-file
> > /etc/cinder/cinder.conf & echo $!
> >/opt/stack/status/stack/c-api.pid; fg ||
> > echo "c-api failed to start" | tee
> "/opt/stack/status/stack/c-api.failure"
> > [1] 23828
> > /usr/local/bin/cinder-api --config-file /etc/cinder/cinder.conf
> > Traceback (most recent call last):
> >   File "/usr/local/bin/cinder-api", line 6, in 
> > from cinder.cmd.api import main
> >   File "/opt/stack/cinder/cinder/cmd/api.py", line 37, in 
> > from cinder import service
> >   File "/opt/stack/cinder/cinder/service.py", line 65, in 
> > cfg.PortOpt('osapi_volume_listen_port',
> > AttributeError: 'module' object has no attribute 'PortOpt'
> > c-api failed to start
> > stack@openstack4:~/devstack$
> >
> >
> >
> > Looks like something to do with oslo_config.PortOpt.
> >
> > I dont have any port mentioned in cinder.conf (tried also
> specifying 8070 -
> > same failure).
> >
> > When commenting out the lines, getting the following for rabbit:
> >
> > 2016-01-05 05:44:10.421 TRACE cinder plugin = ep.resolve()
> > 2016-01-05 05:44:10.421 TRACE cinder   File
> >
> "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line
> > 2386, in resolve
> > 2016-01-05 05:44:10.421 TRACE cinder module =
> > __import__(self.module_name, fromlist=['__name__'], level=0)
> > 2016-01-05 05:44:10.421 TRACE cinder   File
> >
> 
> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/impl_rabbit.py",
> > line 94, in 
> > 2016-01-05 05:44:10.421 TRACE cinder cfg.PortOpt('rabbit_port',
> > 2016-01-05 05:44:10.421 TRACE cinder AttributeError: 'module'
> object has no
> > attribute 'PortOpt'
> > 2016-01-05 05:44:10.421 TRACE cinder
> >
> >
> >
> > Any workaround (unstack and stacking want to avoid) would be hightly
> > appreciated.
> >
> >
> >
> > Thanks in advance,
> > Pradip
> >
> >
>
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-05 Thread Michał Dulko
On 01/04/2016 11:41 PM, Ryan Rossiter wrote:
> My first question is: what will be handling the object backports that the 
> different cinder services need? In Nova, we have the conductor service, which 
> handles all of the messy RPC and DB work. When anyone needs something 
> backported, they ask conductor, and it handles everything. That also gives us 
> a starting point for the rolling upgrades: start with conductor, and now he 
> has the new master list of objects, and can handle the backporting of objects 
> when giving them to the older services. From what I see, the main services in 
> cinder are API, scheduler, and volume. Does there need to be another service 
> added to handle RPC stuff?
What Duncan is describing is correct - we intend to backport objects on
the sender's side in a manner similar to RPC method backporting (version
pinning). This model was discussed a few times and seems to be fine, but
if you think otherwise - please let us know.
> The next question is: are there plans to do manifest backports? That is a 
> very o.vo-jargoned question, but basically from what I can see, Cinder’s 
> obj_to_primitive() calls do not use o.vo’s newer method of backporting, which 
> uses a big dictionary of known versions (a manifest) to do one big backport 
> instead of clogging up RPC with multiple backport requests every time a 
> subobject needs to be backported after a parent has been backported (see [1] 
> if you’re interested). I think this is a pretty simple change that I can help 
> out with if need be (/me knocks on wood).
We want to backport on the sender's side, so no RPC calls should be
needed. This is also connected with the fact that in Cinder all the
services access the DB directly (and there are currently no plans to
change that). This means that o.vo is of no use for us to support schema
upgrades in a rolling-upgrade-friendly way (as described in [1]). We
intend to use o.vo just to version the payloads sent as RPC method
arguments.
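
In practice the sender-side backport is just a matter of serializing
with a target version, roughly like this (the version number and method
name are made up for illustration):

    def _cast_create_volume(self, cctxt, ctxt, volume, pinned_obj_version='1.1'):
        # The sender downgrades the o.vo to whatever object version the
        # pinned receiver still understands before putting it on the wire.
        primitive = volume.obj_to_primitive(target_version=pinned_obj_version)
        cctxt.cast(ctxt, 'create_volume', volume=primitive)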

This however raises a question that has come to my mind a few times - why
do we even mark any of our o.vo methods as remotable?

I really want to thank you for giving all this stuff in Cinder a good
double check. It's very helpful to have the insight of someone more
experienced with o.vo. :)

I think we have enough building blocks in place to show a complete
rolling upgrade case that will include a DB schema upgrade, o.vo
backporting and RPC API version pinning. I'll be working on putting
this all together before the mid-cycle meetup.

[1]
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][stable] Horizon kilo gate fails due to testrepository dependency

2016-01-04 Thread Michał Dulko
On 01/04/2016 03:41 PM, Ihar Hrachyshka wrote:
> UPD: Turns out it breaks Liberty gate too, f.e. for Neutron. It’s
> interesting that it did not break the thing for e.g. Neutron master.
>
> Matthias Runge  wrote:

We observe this on Cinder's stable/liberty in Grenade tests (e.g. [1])
and on stable/kilo in the whole Tempest suite (e.g. [2]).

[1] https://review.openstack.org/#/c/262162/
[2] https://review.openstack.org/#/c/246646/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit performance

2015-12-29 Thread Michał Dulko
On 12/29/2015 10:44 AM, Gary Kotton wrote:
> Hi,
> I know it is holidays but gerrit response is so slow. If anyone from
> Infra is online please can you help.
> Thanks
> Gary
>

Actually it's not only slow, but it also occasionally drops or times
out requests. As the UI makes multiple HTTP calls to display a page and
almost every click is a new request, this effectively prevents ~50% of
pages from loading. My review bandwidth is greatly affected by these
issues.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] whether the ServiceGroup in Cinder is necessary

2015-12-28 Thread Michał Dulko
On 12/28/2015 05:03 AM, hao wang wrote:
> hi, Janice
>
> This idea seems to me that is useful to detect the state of
> cinder-volume process more quickly, but I feel there is another issue
> that if the back-end device go to fail you still
> can't keep cloud in ha or create volume successfully since the service
> is up but device is down.
>
> So, what I want to say is we maybe need to consider to detect and
> report the device state priority[1] and then consider to improve
> service if we need that.
>
> [1]https://review.openstack.org/#/c/252921/

We're already doing something similar in terms of driver initialization
state [1]. c-vols with uninitialized drivers will show up as "down".
Your idea also seems to make sense to me.

[1]
https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L474-L481

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11, completed.

2015-12-23 Thread Michał Dulko
On 12/22/2015 10:52 PM, Carl Baldwin wrote:
> On Mon, Dec 21, 2015 at 6:21 PM, Zaro  wrote:
>> Hit '?' and it says '/' is find, give that a try.
> '/' isn't really much better.  It seems to highlight all of the
> occurrences but I can't find a way to navigate to the next/previous
> occurence with the keyboard.  I see that the scroll bar shows a small
> indication that there are many matches within the file and so I could
> scroll to them if I want to move my fingers to the trackpad to scroll.

n goes to the next occurrence and N (shift+n) to the previous one.
These are the same keybindings as in Vim. Actually a lot of Vim-like
movements work in the new Gerrit and I really like that fact.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] whether the ServiceGroup in Cinder is necessary

2015-12-23 Thread Michał Dulko
On 12/22/2015 09:01 PM, Erlon Cruz wrote:
> Hmm, I see. There's this spec[1] that was discussed in the past with a
> similar proposal. There's a SPEC with some other points on the
> discussion, I think Janice forgot to mention.
>
> Erlon
>
> [1] https://review.openstack.org/#/c/176233/
> [2] https://review.openstack.org/#/c/258968/

It seems to me that these two are actually not that closely related.
[1] proposes to use RPC calls to determine if a service is alive, but
only in cinder-manage commands. In [2] the whole health check mechanism
used by the scheduler is proposed to be replaced by Tooz, which doesn't
use RPC.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] whether the ServiceGroup in Cinder is necessary

2015-12-22 Thread Michał Dulko
On 12/17/2015 04:49 AM, li.yuanz...@zte.com.cn wrote:
> Hi all,
>
>   I'd like to start discussion on whether the servicegroup in Cinder
> is necessary.
>  
>   Recently, cinder can only support db driver, and doesn't have
> servicegroup concept.
>   our team wants to implement the servicegroup feature using on
> tooz[1] library. Like nova[2], when the state of service is required,
> it can be got through servicegroup.
>  
>   Besides, due to the cinder-volume-active-active-support[3] merged,
> we think it makes the Service Group do more.
>  
>   Before the cinder-volume-active-active-support was proposed, Cinder
> has no concept of cluster. Therefore, we have a doubt that, if without
> cinder-volume-active-active-support, is it necessary to add feature of
> servicegroup?
>  
>   Any comments or suggestions?
>  
>   [1]
> https://github.com/openstack/tooz/blob/master/doc/source/tutorial/group_membership.rst
>
>   [2]
> https://github.com/openstack/nova-specs/blob/master/specs/liberty/approved/service-group-using-tooz.rst
>
>   [3]
> https://github.com/openstack/cinder-specs/blob/master/specs/mitaka/cinder-volume-active-active-support.rst
>
>  
>  
>   Best Regards,
>   Janice

Hi,

It will not be possible to use an A/A HA configuration of the
cinder-volume service with the LVM driver. According to the latest User
Survey [1] this driver is running in 22% of deployments. ZooKeeper
service groups will still be useful there, as they will allow the
scheduler to learn about failed services/nodes much quicker and prevent
it from scheduling volumes there.

As we already have initial Tooz integration [2] merged for locking
purposes, I think that if we're able to implement SG in a non-intrusive
manner (without changing the default behavior), it would be an
interesting option for some deployments.

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
[2] https://review.openstack.org/#/c/183537/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] whether the ServiceGroup in Cinder is necessary

2015-12-22 Thread Michał Dulko
On 12/22/2015 01:29 PM, Erlon Cruz wrote:
> Hi Li,
>
> Can you give a quick background on servicegroups (or links to. The
> spec you linked only describe the process on Nova to change from what
> they are using to tooz)? Also, what are the use cases and benefits of
> using this?
>
> Erlon
>

This is simply an idea to be able to use something more sophisticated
than DB heartbeats to monitor service states. With Tooz implemented for
that, we would be able to use for example ZooKeeper to learn about a
service failure in a matter of seconds instead of around a minute. This
would shrink the window in which c-sch doesn't yet know that c-vol has
failed and keeps sending RPC messages to a service that will never
answer. I think there are more use cases related to service monitoring
and failover.

Service groups probably isn't the correct name for the proposed
enhancement - we already have this concept implemented in some form,
but the proposed idea seems to be about making it pluggable.
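
To make the idea a bit more concrete, here's a rough sketch of what the
Tooz side could look like. The backend URL, group and member names are
made up, and a real implementation would of course live behind a
pluggable driver rather than inline like this:

    from tooz import coordination

    # Each c-vol joins a well-known group; c-sch can then list live
    # members instead of relying on DB heartbeat timestamps.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'cinder-volume@host1')
    coordinator.start()

    try:
        coordinator.create_group(b'cinder-volume').get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(b'cinder-volume').get()

    # Called periodically, e.g. from a looping call, to stay "alive":
    coordinator.heartbeat()

    # The scheduler (with its own coordinator) would then simply do:
    members = coordinator.get_members(b'cinder-volume').get()

    coordinator.leave_group(b'cinder-volume').get()
    coordinator.stop()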

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.3.0

2015-12-18 Thread Michał Dulko
On 12/17/2015 11:59 PM, James E. Blair wrote:
> Announcing Gertty 1.3.0
> ===
>
> Gertty is a console-based interface to the Gerrit Code Review system.
>
> Gertty is designed to support a workflow similar to reading network
> news or mail.  It syncs information from Gerrit to local storage to
> support disconnected operation and easy manipulation of local git
> repos.  It is fast and efficient at dealing with large numbers of
> changes and projects.

As this was sent to the openstack-dev list, I'll ask a Gertty usage
question here. Has anyone been able to use Gertty successfully behind a
proxy? My environment doesn't allow any traffic outside the proxy and I
haven't noticed a config option to set it up.

Thanks,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-18 Thread Michał Dulko
On 12/16/2015 11:40 PM, Zaro wrote:
> We have identified this and will look into it soon.  Thanks for
> reporting the issue.
>
> On Wed, Dec 16, 2015 at 2:02 PM, Michał Dulko <michal.du...@intel.com> wrote:
>> On 12/16/2015 10:22 PM, Zaro wrote:
>>> Thanks to everyone for their patience while we upgraded to Gerrit
>>> 2.11.  I'm happy to announce that we were able to successfully
>>> completed this task at around 21:00 UTC.  You may hack away once more.
>>>
>>> If you encounter any problems, please let us know here or in
>>> #openstack-infra on Freenode.
>>>
>>> Enjoy,
>>> -Khai
>>>
>> Good job! :)
>>
>> In Cinder we have an impressive number of Third-Party CIs. Even with
>> "Toggle CI" option set to not-showing CIs comments, the comment frame is
>> displayed. E.g. [1]. This makes reading reviewers comments harder. Is
>> there any way of disabling that? Or any chances of fixing it up in
>> Gerrit deployment itself?
>>
>> [1] https://review.openstack.org/#/c/248768/
>>
>> ___
>> OpenStack-Infra mailing list
>> openstack-in...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

I see this is already fixed. Thank you very much for such a fast
reaction! :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Gerrit Upgrade to ver 2.11 completed.

2015-12-16 Thread Michał Dulko
On 12/16/2015 10:22 PM, Zaro wrote:
> Thanks to everyone for their patience while we upgraded to Gerrit
> 2.11.  I'm happy to announce that we were able to successfully
> completed this task at around 21:00 UTC.  You may hack away once more.
>
> If you encounter any problems, please let us know here or in
> #openstack-infra on Freenode.
>
> Enjoy,
> -Khai
>

Good job! :)

In Cinder we have an impressive number of Third-Party CIs. Even with
the "Toggle CI" option set to not show CI comments, the comment frame
is still displayed, e.g. [1]. This makes reading reviewers' comments
harder. Is there any way of disabling that? Or any chance of fixing it
up in the Gerrit deployment itself?

[1] https://review.openstack.org/#/c/248768/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Michał Dulko
On 12/14/2015 03:59 PM, Ryan Rossiter wrote:
> Hi everyone,
>
> I have a change submitted that lays the groundwork for using custom enums and 
> fields that are used by versioned objects [1]. These custom fields allow for 
> verification on a set of valid values, which prevents the field from being 
> mistakenly set to something invalid. These custom fields are best suited for 
> StringFields that are only assigned certain exact strings (such as a status, 
> format, or type). Some examples for Nova: PciDevice.status, 
> ImageMetaProps.hw_scsi_model, and BlockDeviceMapping.source_type.
>
> These new enums (that are consumed by the fields) are also great for 
> centralizing constants for hard-coded strings throughout the code. For 
> example (using [1]):
>
> Instead of
> if backup.status == ‘creating’:
> 
>
> We now have
> if backup.status == fields.BackupStatus.CREATING:
> 
>
> Granted, this causes a lot of brainless line changes that make for a lot of 
> +/-, but it centralizes a lot. In changes like this, I hope I found all of 
> the occurrences of the different backup statuses, but GitHub search and grep 
> can only do so much. If it turns out this gets in and I missed a string or 
> two, it’s not the end of the world, just push up a follow-up patch to fix up 
> the missed strings. That part of the review is not affected in any way by the 
> RPC/object versioning.
>
> Speaking of object versioning, notice in cinder/objects/backup.py the version 
> was updated to appropriate the new field type. The underlying data passed 
> over RPC has not changed, but this is done for compatibility with older 
> versions that may not have obeyed the set of valid values.
>
> [1] https://review.openstack.org/#/c/256737/
>
>
> -
> Thanks,
>
> Ryan Rossiter (rlrossit)

Thanks for starting this work on formalizing the statuses - I've
commented on the review with a few remarks.

I think we should start a blueprint or bug report to be able to track
these efforts.
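
For anyone following along, the pattern under discussion boils down to
something like the sketch below. The names and the set of statuses are
illustrative only - the real field definitions live in the review:

    from oslo_versionedobjects import fields


    class BackupStatus(fields.Enum):
        CREATING = 'creating'
        AVAILABLE = 'available'
        DELETING = 'deleting'
        ERROR = 'error'
        ALL = (CREATING, AVAILABLE, DELETING, ERROR)

        def __init__(self):
            super(BackupStatus, self).__init__(
                valid_values=BackupStatus.ALL)


    class BackupStatusField(fields.BaseEnumField):
        # Assigning anything outside ALL to a BackupStatusField raises
        # a ValueError instead of silently storing a bogus status.
        AUTO_TYPE = BackupStatus()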


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Rolling upgrades

2015-12-15 Thread Michał Dulko
Hi,

At a recent meeting it was mentioned that our rolling upgrades efforts
are pursuing an "elusive unicorn", which makes development a lot more
complicated and restricted. I want to clarify this a bit, explain the
strategy in more detail and give an update on the status of the whole
effort.

So first of all - it's definitely achievable, as Nova has supported
rolling upgrades since Kilo. It makes developers' lives harder, but the
feature is useful - e.g. CERN was able to upgrade the compute nodes in
their enormous environment after the control plane services during
their Juno->Kilo upgrade [1].

Rolling upgrades are all about interoperability of services running in
different versions. We want to give operators the ability to upgrade
service instances one by one, starting from c-api, through c-sch, to
c-vol and c-bak. Moreover, we want to be sure that old and new versions
of a single service can coexist. This means we need to be backward
compatible with at least one previous release. There are 3 planes on
which incompatibilities may happen:
* API of RPC methods
* Structure of composite data sent over RPC
* DB schemas

API of RPC methods
--
Here we're strictly following Nova's solution described in [2]. We need
to support RPC version pinning, so each RPC API addition needs to be
versioned and we need to be able to downgrade the request to the
required version in the rpcapi.py modules. On the other side, manager.py
should be able to process the request even when it doesn't receive a
newly added parameter. There are already some examples of this approach
in tree ([3], [4]). Until the upgrade is completed the RPC API version
is pinned, so everything stays compatible with the older release. Once
only new services are running, the pin may be released.
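
As an illustration, here's a rough sketch of the client-side part of
this pattern. The method, version numbers and arguments are invented -
the real examples are [3] and [4]:

    import oslo_messaging as messaging


    class SchedulerAPI(object):
        """Illustrative client-side RPC API with a version pin."""

        RPC_API_VERSION = '1.9'

        def __init__(self, transport, version_cap=None):
            target = messaging.Target(topic='cinder-scheduler',
                                      version=self.RPC_API_VERSION)
            # version_cap is the pin set by the operator until all
            # services run the new release.
            self.client = messaging.RPCClient(transport, target,
                                              version_cap=version_cap)

        def create_volume(self, ctxt, volume_id, request_spec=None,
                          filter_properties=None):
            msg_args = {'volume_id': volume_id,
                        'request_spec': request_spec,
                        'filter_properties': filter_properties}
            if self.client.can_send_version('1.9'):
                version = '1.9'
            else:
                # Pinned to an older scheduler - drop the argument it
                # doesn't know about yet.
                version = '1.8'
                msg_args.pop('filter_properties')
            cctxt = self.client.prepare(version=version)
            return cctxt.cast(ctxt, 'create_volume', **msg_args)

On the manager side the newly added parameter simply gets a default
value, so a request sent by an old rpcapi (without the argument) is
still processed correctly.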

Structure of composite data sent over RPC
-
Again RPC version pinning is utilized, this time with the addition of
versioned objects. Before sending an object we will translate it to the
lower version - according to the version pin. This makes sure that the
object can be understood by older services. Note that newer services
can translate the object back to the new version when receiving an old
one.

DB schemas
--
This is a hard one. We've needed to adapt the approach described in [5]
to our needs, as we're calling the DB from all of our services and not
only from nova-conductor as Nova does. This means that in the case of a
non-backward-compatible migration we need to stretch the process over 3
releases. The good news is that we haven't needed such a migration
since Juno (in M we have a few candidates… :(). The process for Cinder
is described in [6]. In general we want to ban migrations that are
non-backward compatible or that exclusively lock a table for an
extended period of time ([7] is a good source of truth for MySQL) and
allow them only if they follow the 3-release migration period (so that
the N+2 release has no notion of the column or table and we can drop
it).
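
As an example of what an "allowed" migration looks like, here's a rough
sqlalchemy-migrate-style sketch (the column is made up) - purely
additive, nullable and without a backfill, so no long exclusive lock is
taken and older services simply ignore the new column:

    from sqlalchemy import Column, MetaData, String, Table


    def upgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine

        volumes = Table('volumes', meta, autoload=True)
        # Adding a nullable column is backward compatible: services
        # from the previous release keep working against the new
        # schema without noticing the column.
        new_column = Column('provider_metadata', String(255),
                            nullable=True)
        volumes.create_column(new_column)

Dropping or renaming a column, on the other hand, would have to wait
until the N+2 release, once no running service references it anymore.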

Right now we're finishing the oslo.versionedobjects adoption -
outstanding patches can be found in [8] (there are still a few to come -
look at the table at the bottom of [9]). In the case of DB schema
upgrades we've merged the spec, and a test banning contracting
migrations is in review [10]. In the case of RPC API compatibility I'm
actively reviewing the patches to make sure every change there is done
properly.

Apart from that, the backlog includes documenting all this in the
devref and implementing partial-upgrade Grenade tests that will gate on
version interoperability.

I hope this clarifies a bit how we're progressing to be able to upgrade
Cinder with minimal or no downtime.

[1]
http://openstack-in-production.blogspot.de/2015/11/our-cloud-in-kilo.html
[2] http://www.danplanet.com/blog/2015/10/05/upgrades-in-nova-rpc-apis/
[3]
https://github.com/openstack/cinder/blob/12e4d9236/cinder/scheduler/rpcapi.py#L89-L93
[4]
https://github.com/openstack/cinder/blob/12e4d9236/cinder/scheduler/manager.py#L124-L128
[5]
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
[6]
https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/online-schema-upgrades.html
[7]
https://dev.mysql.com/doc/refman/5.7/en/innodb-create-index-overview.html
[8]
https://review.openstack.org/#/q/branch:master+topic:bp/cinder-objects,n,z
[9] https://etherpad.openstack.org/p/cinder-rolling-upgrade
[10]
https://review.openstack.org/#/q/branch:master+topic:bp/online-schema-upgrades,n,z

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Michał Dulko
On 12/15/2015 04:08 PM, Ryan Rossiter wrote:
> Thanks for the review Michal! As for the bp/bug report, there’s four options:
>
> 1. Tack the work on as part of bp cinder-objects
> 2. Make a new blueprint (bp cinder—object-fields)
> 3. Open a bug to handle all changes for enums/fields
> 4. Open a bug for each changed enum/field
>
> Personally, I’m partial to #1, but #2 is better if you want to track this 
> work separately from the other objects work. I don’t think we should go with 
> bug reports because #3 will be a lot of Partial-Bug and #4 will be kinda 
> spammy. I don’t know what the spec process is in Cinder compared to Nova, but 
> this is nowhere near enough work to be spec-worthy.
>
> If this is something you or others think should be discussed in a meeting, I 
> can tack it on to the agenda for tomorrow.

The bp/cinder-objects topic is a little crowded with patches and it
mostly tracks rolling-upgrades-related stuff. This is more of a
refactoring than an essential o.vo change, so a simple specless
bp/cinder-object-fields is totally fine with me.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev