[openstack-dev] [nova] Policy check for network:attach_external_network

2016-06-08 Thread Ryan Rossiter
Taking a look at [1], I got curious as to why all of the old network policies 
were deleted except for network:attach_external_network. With the help of 
mriedem, it turns out that policy is checked indirectly on the compute node, in 
allocate_for_instance(). mriedem pointed out that this policy doesn’t work very 
well from an end-user perspective: if you have an existing instance and now want 
to attach it to an external network, the request gets rescheduled, and if you 
don't have permission to attach to an external network, it bounces around the 
scheduler until the user gets the infamous "No Valid Host" error.

My main question is: how do we want to handle this? I'm thinking that because 
Neutron has all of the info about whether the network we're creating a port on 
is external, we could just let Neutron handle all of the policy work. That way 
the policy could eventually be dropped from nova's policy.json. But that'll 
take a while.

A temporary alternative is to move that policy check to the API. That way we 
can accurately deny the user instead of plumbing things down to the compute 
node only for them to be denied there. I did a scavenger hunt and found that the policy 
check was added because of [2], which, in the end, is just a permissions thing. 
So that could get added to the API checks when 1) creating an instance and 2) 
attaching an existing instance to another network. Are there any other places 
this API check would be needed?

[1]: https://review.openstack.org/#/c/320751/
[2]: https://bugs.launchpad.net/nova/+bug/1352102
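
For the API-check alternative, the enforcement itself would be pretty small. 
Here's a rough sketch of the idea (not the actual patch; the helper name and 
the exact enforcement call are illustrative, and the real check would need to 
cover both of the places listed above):

    from nova import policy


    def _check_external_network_attach(context, nets):
        """Reject the request up front instead of failing on the compute node."""
        for net in nets:
            # Neutron flags external networks with the router:external attribute.
            if net.get('router:external') and not net.get('shared'):
                policy.enforce(context, 'network:attach_external_network',
                               {'project_id': context.project_id})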

-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [nova] Austin summit versioned notification

2016-05-03 Thread Ryan Rossiter

> On May 3, 2016, at 8:58 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com> wrote:
> 
> On 5/3/2016 3:10 AM, Balázs Gibizer wrote:
>> Hi,
>> 
>> Last week Friday in Austin we discussed the way forward with the versioned
>> notification transformation in Nova.
>> 
>> We agreed that when we separate the object model used for notifications from
>> the nova object model, we still use NovaObject as a base class to avoid a
>> change in the wire format and the major version bump it would cause.
>> However, we won't register the notification objects in the 
>> NovaObjectRegistry.
> 
> We also said that since the objects won't be registered, we still want to 
> test their hashes in case something changes, so register the notification 
> objects in the test that checks for changes (even though they aren't 
> registered globally); this will keep us from slipping.

I found yesterday that we already do this for the DeviceBus object here [1]. We'll be 
doing something similar with all objects that inherit from the notification 
base objects, either in test_versions() or in setUp() of TestObjectVersions, 
whichever gives us the most coverage and the least interference with other tests.

[1]: 
https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1254-L1260
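
Roughly, the check works like this (a sketch using o.vo's ObjectVersionChecker; 
the payload class and the hash value below are placeholders, the real test is 
at [1]):

    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields
    from oslo_versionedobjects import fixture

    # A stand-in for one of the unregistered notification classes.
    class ServiceStatusPayload(ovo_base.VersionedObject):
        VERSION = '1.0'
        fields = {'host': fields.StringField()}

    expected_hashes = {
        # The value comes from a real run of checker.get_hashes().
        'ServiceStatusPayload': '1.0-<hash from a real run>',
    }

    checker = fixture.ObjectVersionChecker(
        obj_classes={'ServiceStatusPayload': [ServiceStatusPayload]})
    expected, actual = checker.test_hashes(expected_hashes)
    assert expected == actual, 'Notification object changed; bump its version'

If someone changes a field without bumping the version, the computed 
fingerprint stops matching the stored one and the test fails, which is exactly 
the "keep us from slipping" part.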

> 
>> In general we agreed that we move forward with the transformation according
>> to the spec [1].
>> 
>> Regarding the schema generation for the notifications we agreed to
>> propose a general JSON Schema generation implementation to
>> oslo.versionedobjects [2] that can be used in Nova later to generate
>> schemas for the notification object model.
>> 
>> To have a way to synchronize our effort I'd like to restart the weekly
>> subteam meeting [5]. As the majority of the subteam is in US and EU I propose
>> to continue the currently existing time slot UTC 17:00 every Tuesday.
>> I proposed the frequency increase from biweekly to weekly here [3].
>> This means that we can meet today 17:00 UTC [4] on #openstack-meeting-4.
>> 
>> Cheers,
>> Gibi
>> 
>> [1] https://review.openstack.org/#/c/286675/ Versioned notification 
>> transformation
>> [2] https://review.openstack.org/#/c/311194/ versionedobjects: add json 
>> schema generation
>> [3] https://review.openstack.org/#/c/311948/
>> [4] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160503T17
>> [5] https://wiki.openstack.org/wiki/Meetings/NovaNotification
>> 
>> 
> 
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [oslo][oslo.versionedobjects] Is it possible to make changes to oslo repos?

2016-01-29 Thread Ryan Rossiter

> On Jan 28, 2016, at 5:01 PM, Dan Smith <d...@danplanet.com> wrote:
> 
>> I know we have some projects (heat, I think?) that don't have UUIDs at
>> all. Are they using oslo.versionedobjects? I suppose we could move them
>> to a string field instead of a UUID field, instead of flipping the
>> enforcement flag on. Alternately, if we add a new class we wouldn't have
>> to ever actually deprecate the UUIDField class, though we might add a
>> warning to the docs that it isn't doing any validation and the new class
>> is preferred.
> 
> If a project isn't using UUIDs then they have no reason to use a
> UUIDField and thus aren't affected. If they are, then they're doing
> something wrong.
> 
>> I'll be curious to see what Dan and Sean have to say when they catch up
>> on this thread.
> 
> I think Ryan summed it up very well earlier, but I will echo and
> elaborate here for clarity.
> 
> To be honest, I think the right thing to do is deprecate the
> non-validating behavior and have projects use in-project validating
> fields for UUIDs (i.e. their own UUIDField implementation). When we can,
> release a major version with the validating behavior turned on.
> 
> I don't like the validate=False flag because it's hard (or at least
> ugly) to deprecate. Even allowing the call signature to tolerate it for
> compatibility but still doing the validation is terrible, IMHO. If
> people feel it is absolutely critical to have an implementation in o.vo
> right now, then we can do the parallel class option, but we basically
> just have to alias the old one to the new one when we "drop" the
> non-validating functionality, IMHO, which is just more baggage. To quote
> Ryan, "if you know you're going to break people, just don't do it."
> 
> This is a really minor issue in my opinion -- the amount of code a
> project needs to replicate is exceedingly small, especially if they just
> subclass the existing field and override the one method required to
> ensure coercion. For a point of reference, nova has a lot of these
> fields which are too nova-specific to put into o.vo; adding one more for
> this is immeasurably small:
> 
> https://github.com/openstack/nova/blob/master/nova/objects/fields.py#L76-L621
> 
> Cinder also has some already:
> 
> https://github.com/openstack/cinder/blob/master/cinder/objects/fields.py

You’re welcome for the extra Cinder evidence, Dan ;).

> 
> And to again echo Ryan's comments, we have always landed things in nova,
> sync'd them to o.vo and removed our local copy once we can depend on a
> new-enough o.vo release. This is effectively the same behavior for this
> change and really just isn't that big of a deal. Please, let's do the
> safest thing for the projects that consume our library, and for the
> people who have to use the same gate as all the rest of us.

For anyone who cares how this works, here’s a typical process for doing custom 
fields:

1) Put the field in Nova [1]
2) Put the new field in o.vo [2]
3) After o.vo is released, re-sync [3]

[1] 
https://github.com/openstack/nova/commit/b9247f52d17e18d075b995ac8a438ec5e65eacbf
[2] 
https://github.com/openstack/oslo.versionedobjects/commit/2e083bce6e4b325cb89baea4b1d6173d58c8f5bd
[3] 
https://github.com/openstack/nova/commit/3c83a47bb70ad9db6c7900e6c752f08777fa0787
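
For the UUID case specifically, the in-project field Dan describes really is 
tiny. A sketch of what it might look like (names invented; the stock o.vo UUID 
type is the one carrying the FIXME discussed earlier in this thread):

    import uuid

    from oslo_versionedobjects import fields


    class StrictUUID(fields.UUID):
        """A UUID type that validates instead of passing strings through."""

        def coerce(self, obj, attr, value):
            # Raise early on anything that isn't a valid UUID string.
            return str(uuid.UUID(str(value)))


    class StrictUUIDField(fields.AutoTypedField):
        AUTO_TYPE = StrictUUID()

Once o.vo grows equivalent behavior, the project-local class can simply be 
aliased to (or replaced by) the library version, which is the re-sync step 
above.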

> 
> --Dan
> 


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [oslo][oslo.versionedobjects] Is it possible to make changes to oslo repos?

2016-01-28 Thread Ryan Rossiter

> On Jan 28, 2016, at 4:04 PM, Joshua Harlow <harlo...@fastmail.com> wrote:
> 
> Ryan Rossiter wrote:
>> 
>> As the-guy-in-nova-that-does-this-a-bunch, I’ve pushed a bunch of changes to 
>> nova because we didn’t have them yet in o.vo, and once they get released, I 
>> “re-sync” them back to nova to use the o.vo version, see:
>> 
>> https://review.openstack.org/#/c/272641/
>> https://github.com/openstack/oslo.versionedobjects/commit/442ddcaeab0184268ff987d4ff1bf0f95dd87f2e
>> https://github.com/openstack/oslo.versionedobjects/commit/b8818720862490c31a412e6cf6abf1962fd7a2b0
>> https://github.com/openstack/oslo.versionedobjects/commit/cb898f382e9a22dc9dcbc2de753d37b1b107176d
>> 
>> I don’t find it all that terrible, but maybe that’s because I’m only 
>> watching it in one project.
> 
> Just out of curiosity but is this due to o.vo not merging reviews fast 
> enough? Not releasing fast enough? A desire to experiment/prove out the 
> additions in nova before moving to o.vo? Or something else?

Even though I am an impatient person, it's not because of #1 or #2. Part of it 
is to prove it works in nova. But I think the main reason behind most of them 
is that nova already has most of the duplicated code, and in order to let us 
use o.vo, there needs to be a slight change to it. So my steps in committing to 
both are: 1) fix it in nova's duplicated code, 2) fix it in o.vo, 3) get the 
change merged/released in o.vo, 4) eliminate all duped code from nova and cut 
over completely to o.vo.

So the main reasons are the existing forks and answering the question "why does 
o.vo need this?". The change this thread discusses doesn't fall under that 
realm; we know why we need/want it. But my situation is just a data point 
suggesting that carrying it in both places isn't the end of the world (though 
I'm not saying this approach is the best way to do it).

> 
> From my point of view it feels odd to have a upstream (o.vo) and a downstream 
> (nova) where the downstream in a way carries 'patches' on-top of o.vo when 
> both o.vo and nova are in the same ecosystem (openstack).

I would totally agree, if o.vo was purely upstream of nova. Unfortunately, nova 
is still carrying a lot of leftovers, and because of the “forks” I need to push 
something to both of them so I can bring them together in the future. I’m doing 
my best at bringing these two closer together.

> 
>> -
>> Thanks,
>> 
>> Ryan Rossiter (rlrossit)
>> 
>> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [oslo][oslo.versionedobjects] Is it possible to make changes to oslo repos?

2016-01-28 Thread Ryan Rossiter
line for the project. (I would hope
>>> implementing a FIXME is not out of line for the project)
>> 
>> No, but fixing it in a way that is known to break other projects
>> is. In this case, the change is known to break at least one project.
>> We have to be extremely careful with things that look like breaking
>> changes, since we can break *everyone* with a release. So I think
>> in this case the -2 is warranted.
>> 
>> The other case you've pointed out on IRC, of the logging timezone
>> thing [1], is my -2. It was originally implemented as a breaking
>> change.  That has been fixed, but it still needs some discussion
>> on the mailing list, at least in part because I don't see the point
>> of the change.
>> 
>> Doug
>> 
>>> 
>>> Thanks,
>>> 
>>> Graham
>>> 
>>> 0 - 
>>> https://git.openstack.org/cgit/openstack/oslo.versionedobjects/tree/oslo_versionedobjects/fields.py#n305
>>> 
>>> 1 - https://review.openstack.org/#/c/250493/
>>> 
>>> 2 - 
>>> https://review.openstack.org/#/c/250493/9/oslo_versionedobjects/fields.py
>>> 
>> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [oslo][oslo.versionedobjects] Is it possible to make changes to oslo repos?

2016-01-28 Thread Ryan Rossiter
on and I guess not everyone
>>>> likes to do that(?)
>>>> 
>>>> In general I hope it's not all of oslo u are grouping here because, if its
>>>> just a few cases, hopefully we can work with the person that is -2ing stuff
>>>> to not do it willy nilly...
>>>> 
>>>> My 2 cents,
>>>> 
>>>> -Josh
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> 
>> 
> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-05 Thread Ryan Rossiter

> On Jan 5, 2016, at 7:13 AM, Michał Dulko <michal.du...@intel.com> wrote:
> 
> On 01/04/2016 11:41 PM, Ryan Rossiter wrote:
>> My first question is: what will be handling the object backports that the 
>> different cinder services need? In Nova, we have the conductor service, 
>> which handles all of the messy RPC and DB work. When anyone needs something 
>> backported, they ask conductor, and it handles everything. That also gives 
>> us a starting point for the rolling upgrades: start with conductor, and now 
>> he has the new master list of objects, and can handle the backporting of 
>> objects when giving them to the older services. From what I see, the main 
>> services in cinder are API, scheduler, and volume. Does there need to be 
>> another service added to handle RPC stuff?
> What Duncan is describing is correct - we intend to backport objects on
> the sender's side in a similar manner to RPC method backporting (version
> pinning). This model was discussed a few times and seems to be fine, but
> if you think otherwise - please let us know.
This is definitely good to know. Are you planning on setting up something off 
to the side of o.vo that holds a dictionary of all the object versions for a 
release? Something like:

{'liberty': {'volume': '1.3', …},
 'mitaka': {'volume': '1.8', …}}

With the possibility of replacing the release name with the RPC version or some 
other version placeholder. Playing devil’s advocate, how does this work out if 
I want to be continuously deploying Cinder from HEAD? I will be pinned to the 
previous release’s version until the new release comes out right? I don’t think 
that’s a bad thing, just something to think about. Nova’s ability to be 
continuously deployable off of HEAD is still a big magical black box to me, so 
to be fair I have no idea how a rolling upgrade works when doing CD off of HEAD.
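
Concretely, the sender-side pinning could boil down to something like this (a 
rough sketch; the mapping and helper names are invented, 
obj_to_primitive(target_version=...) is the o.vo piece doing the work):

    RELEASE_MAPPING = {
        'liberty': {'Volume': '1.3'},
        'mitaka': {'Volume': '1.8'},
    }


    def serialize_for_rpc(volume_obj, pinned_release):
        # Downgrade the object to whatever the pinned release understands
        # before it ever goes out over RPC.
        target = RELEASE_MAPPING[pinned_release][volume_obj.obj_name()]
        return volume_obj.obj_to_primitive(target_version=target)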

>> The next question is: are there plans to do manifest backports? That is a 
>> very o.vo-jargoned question, but basically from what I can see, Cinder’s 
>> obj_to_primitive() calls do not use o.vo’s newer method of backporting, 
>> which uses a big dictionary of known versions (a manifest) to do one big 
>> backport instead of clogging up RPC with multiple backport requests every 
>> time a subobject needs to be backported after a parent has been backported 
>> (see [1] if you’re interested). I think this is a pretty simple change that 
>> I can help out with if need be (/me knocks on wood).
> We want to backport on sender's side, so no RPC calls should be needed.
> This is also connected with the fact that in Cinder we have all the
> services accessing the DB directly (and currently no plans to change
> it). This means that o.vo are of no use for us to support schema
> upgrades in an upgradeable way (as described in [1]). We intend to use
> o.vo just to version the payloads sent through RPC method arguments.
Is this documented in specs/bps somewhere? This is a pretty big detail that I 
didn’t know about. The only thing I could find was [1] from kilo (which I 
totally understand if it hasn’t been updated since merging, I don’t think *any* 
project that I’ve seen keeps the merged specs up to date).

> 
> This however rises a question that came to my mind a few times - why do
> we even mark any of our o.vo methods as remoteable?
Well, is there hope of changing over to use o.vo more like Nova does in the 
future? If so, there's basically no cost to keeping @base.remotable right now 
so you can use it later. But that's not for me to decide :)

> 
> I really want to thank you for giving all this stuff in Cinder a good
> double check. It's very helpful to have an insight of someone more
> experienced with o.vo stuff. :)
I try to make Dan Smith proud ;). I can’t hold a candle to Dan’s knowledge of 
this stuff, but I definitely have more free time than he does.
> 
> I think we have enough bricks and blocks in place to show a complete
> rolling upgrade case that will include DB schema upgrade, o.vo
> backporting and RPC API version pinning. I'll be working on putting this
> all together before the mid cycle meetup.
Record it, document it, post it somewhere when you get it done! I’ve never 
actually done a rolling upgrade on my own (thank goodness for grenade) and I 
would love to see it.
> 
> [1]
> http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
> 

This is definitely a huge undertaking that takes multiple releases to get done. 
I think you are doing a go

[openstack-dev] [cinder] Object backporting and the associated service

2016-01-04 Thread Ryan Rossiter
Hey everybody, your favorite versioned objects guy is back!

So as I’m helping out more and more with the objects stuff around Cinder, I’m 
starting to notice something that may be a problem with rolling upgrades/object 
backporting. Feel free to say “you’re wrong” at any point during this email, I 
very well may have missed something.

My first question is: what will be handling the object backports that the 
different cinder services need? In Nova, we have the conductor service, which 
handles all of the messy RPC and DB work. When anyone needs something 
backported, they ask conductor, and it handles everything. That also gives us a 
starting point for the rolling upgrades: start with conductor, and now he has 
the new master list of objects, and can handle the backporting of objects when 
giving them to the older services. From what I see, the main services in cinder 
are API, scheduler, and volume. Does there need to be another service added to 
handle RPC stuff?

The next question is: are there plans to do manifest backports? That is a very 
o.vo-jargoned question, but basically from what I can see, Cinder’s 
obj_to_primitive() calls do not use o.vo’s newer method of backporting, which 
uses a big dictionary of known versions (a manifest) to do one big backport 
instead of clogging up RPC with multiple backport requests every time a 
subobject needs to be backported after a parent has been backported (see [1] if 
you’re interested). I think this is a pretty simple change that I can help out 
with if need be (/me knocks on wood).

I don’t mean to pile more work onto this, I understand that this is a big task 
to take on, and so far, it’s progressing very well. Michal’s been really 
helpful as a liaison so far, he’s been a lot of help :).

[1] 
https://github.com/openstack/oslo.versionedobjects/blob/master/oslo_versionedobjects/base.py#L522
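
For reference, the o.vo plumbing for a manifest backport boils down to roughly 
this (a sketch; the helper names are made up, obj_tree_get_versions and the 
version_manifest argument are the relevant o.vo pieces):

    from oslo_versionedobjects import base as ovo_base


    def build_manifest(top_level_obj_name):
        # What the (older) receiving side advertises: the versions of the
        # object and all of its registered subobjects, collected in one map.
        return ovo_base.obj_tree_get_versions(top_level_obj_name)


    def backport_with_manifest(obj, known_versions):
        # One backport call, instead of one round trip per stale subobject.
        target = known_versions[obj.obj_name()]
        return obj.obj_to_primitive(target_version=target,
                                    version_manifest=known_versions)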

-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [cinder] Custom fields for versioned objects

2015-12-15 Thread Ryan Rossiter
Thanks for the review, Michal! As for the bp/bug report, there are four options:

1. Tack the work on as part of bp cinder-objects
2. Make a new blueprint (bp cinder-object-fields)
3. Open a bug to handle all changes for enums/fields
4. Open a bug for each changed enum/field

Personally, I’m partial to #1, but #2 is better if you want to track this work 
separately from the other objects work. I don’t think we should go with bug 
reports because #3 will be a lot of Partial-Bug and #4 will be kinda spammy. I 
don’t know what the spec process is in Cinder compared to Nova, but this is 
nowhere near enough work to be spec-worthy.

If this is something you or others think should be discussed in a meeting, I 
can tack it on to the agenda for tomorrow.

> On Dec 15, 2015, at 3:52 AM, Michał Dulko <michal.du...@intel.com> wrote:
> 
> On 12/14/2015 03:59 PM, Ryan Rossiter wrote:
>> Hi everyone,
>> 
>> I have a change submitted that lays the groundwork for using custom enums 
>> and fields that are used by versioned objects [1]. These custom fields allow 
>> for verification on a set of valid values, which prevents the field from 
>> being mistakenly set to something invalid. These custom fields are best 
>> suited for StringFields that are only assigned certain exact strings (such 
>> as a status, format, or type). Some examples for Nova: PciDevice.status, 
>> ImageMetaProps.hw_scsi_model, and BlockDeviceMapping.source_type.
>> 
>> These new enums (that are consumed by the fields) are also great for 
>> centralizing constants for hard-coded strings throughout the code. For 
>> example (using [1]):
>> 
>> Instead of
>>if backup.status == ‘creating’:
>>
>> 
>> We now have
>>if backup.status == fields.BackupStatus.CREATING:
>>
>> 
>> Granted, this causes a lot of brainless line changes that make for a lot of 
>> +/-, but it centralizes a lot. In changes like this, I hope I found all of 
>> the occurrences of the different backup statuses, but GitHub search and grep 
>> can only do so much. If it turns out this gets in and I missed a string or 
>> two, it’s not the end of the world, just push up a follow-up patch to fix up 
>> the missed strings. That part of the review is not affected in any way by 
>> the RPC/object versioning.
>> 
>> Speaking of object versioning, notice in cinder/objects/backup.py the 
>> version was updated to accommodate the new field type. The underlying data 
>> passed over RPC has not changed, but this is done for compatibility with 
>> older versions that may not have obeyed the set of valid values.
>> 
>> [1] https://review.openstack.org/#/c/256737/
>> 
>> 
>> -
>> Thanks,
>> 
>> Ryan Rossiter (rlrossit)
> 
> Thanks for starting this work with formalizing the statuses, I've
> commented on the review with a few remarks.
> 
> I think we should start a blueprint or bugreport to be able track these
> efforts.
> 
> 


-
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [cinder] Custom fields for versioned objects

2015-12-14 Thread Ryan Rossiter
Hi everyone,

I have a change submitted that lays the groundwork for using custom enums and 
fields that are used by versioned objects [1]. These custom fields allow for 
verification on a set of valid values, which prevents the field from being 
mistakenly set to something invalid. These custom fields are best suited for 
StringFields that are only assigned certain exact strings (such as a status, 
format, or type). Some examples for Nova: PciDevice.status, 
ImageMetaProps.hw_scsi_model, and BlockDeviceMapping.source_type.

These new enums (that are consumed by the fields) are also great for 
centralizing constants for hard-coded strings throughout the code. For example 
(using [1]):

Instead of
if backup.status == 'creating':


We now have
if backup.status == fields.BackupStatus.CREATING:
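
Under the hood, the enum/field pair is just this pattern (a sketch with an 
illustrative subset of statuses; the full list is in [1]):

    from oslo_versionedobjects import fields


    class BackupStatus(fields.Enum):
        CREATING = 'creating'
        AVAILABLE = 'available'
        ERROR = 'error'

        ALL = (CREATING, AVAILABLE, ERROR)

        def __init__(self):
            super(BackupStatus, self).__init__(valid_values=BackupStatus.ALL)


    class BackupStatusField(fields.BaseEnumField):
        AUTO_TYPE = BackupStatus()

The object then declares the field as 'status': BackupStatusField(), and any 
assignment outside the valid values blows up at set time instead of leaking a 
bogus string over RPC.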


Granted, this causes a lot of brainless line changes that make for a lot of 
+/-, but it centralizes a lot. In changes like this, I hope I found all of the 
occurrences of the different backup statuses, but GitHub search and grep can 
only do so much. If it turns out this gets in and I missed a string or two, 
it’s not the end of the world, just push up a follow-up patch to fix up the 
missed strings. That part of the review is not affected in any way by the 
RPC/object versioning.

Speaking of object versioning, notice in cinder/objects/backup.py the version 
was updated to accommodate the new field type. The underlying data passed over 
RPC has not changed, but this is done for compatibility with older versions 
that may not have obeyed the set of valid values.

[1] https://review.openstack.org/#/c/256737/


-
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-24 Thread Ryan Rossiter



On 11/24/2015 8:35 AM, Andrew Laski wrote:

On 11/24/15 at 10:26am, Balázs Gibizer wrote:






I think I see your point, and it seems like a good way forward. Let's turn the 
black list into a white list. Now I'm thinking about creating a new Field type, 
something like WhiteListedObjectField, which gets a type name (as ObjectField 
does) but also gets a white_list that describes which fields need to be used 
from the original type. Then this new field serializes only the white-listed 
fields from the original type and only forces a version bump on the parent 
object if one of the white-listed fields changed or a new field is added to 
the white_list.

What it does not solve out of the box is the transitive dependency. If today we 
have an o.vo object with a field pointing to another o.vo object, and we want 
to put the first object into a notification payload but want to white-list 
fields from the second o.vo, then our white list needs to be able to handle not 
just first-level fields but subfields too. I guess this is doable but I'm 
wondering if we can avoid inventing a syntax expressing something like 
'field.subfield.subsubfield' in the white list.


Rather than a whitelist/blacklist why not just define the schema of 
the notification within the notification object and then have the 
object code handle pulling the appropriate fields, converting formats 
if necessary, from contained objects.  Something like:


class ServicePayloadObject(NovaObject):
    SCHEMA = {'host': ('service', 'host'),
              'binary': ('service', 'binary'),
              'compute_node_foo': ('compute_node', 'foo'),
              }

    fields = {
        'service': fields.ObjectField('Service'),
        'compute_node': fields.ObjectField('ComputeNode'),
    }

    def populate_schema(self):
        self.compute_node = self.service.compute_node
        notification = {}
        # Pull each exposed value out of the contained objects per SCHEMA.
        for key, (obj, field) in self.SCHEMA.iteritems():
            notification[key] = getattr(getattr(self, obj), field)
        return notification

Then object changes have no effect on the notifications unless there's 
a major version bump in which case a SCHEMA_VNEXT could be defined if 
necessary.
To be fair, that is basically a whitelist ;) [1]. But if we use this method, 
don't we lose a lot of o.vo's usefulness? When we serialize, we have to 
specifically *not* use the fields, because fields is the master sheet of 
information and we don't want to expose all of it. Either that or we have to 
do the transform as part of the serialization using the schema, which you may 
be aiming at; I might just be looking at the snippet too literally.



[1] http://www.smbc-comics.com/index.php?id=3907

--
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-23 Thread Ryan Rossiter


--
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-23 Thread Ryan Rossiter



On 11/23/2015 2:23 PM, Andrew Laski wrote:

On 11/23/15 at 04:43pm, Balázs Gibizer wrote:

From: Andrew Laski [mailto:and...@lascii.com]
Sent: November 23, 2015 17:03

On 11/23/15 at 08:54am, Ryan Rossiter wrote:
>
>
>On 11/23/2015 5:33 AM, John Garbutt wrote:
>>On 20 November 2015 at 09:37, Balázs Gibizer
>><balazs.gibi...@ericsson.com> wrote:
>>>
>>>

>>

>>There is a bit I am conflicted/worried about, and that's when we start
>>including, verbatim, DB objects into the notifications. At least you
>>can now quickly detect if that blob is something compatible with your
>>current parsing code. My preference is really to keep the
>>Notifications as a totally separate object tree, but I am sure there
>>are many cases where that ends up being seemingly stupid duplicate
>>work. I am not expressing this well in text form :(
>Are you saying we don't want to be willy-nilly tossing DB objects
>across the wire? Yeah that was part of the rug-pulling of just having
>the payload contain an object. We're automatically tossing everything
>with the object then, whether or not some of that was supposed to be a
>secret. We could add some sort of property to the field like
>dont_put_me_on_the_wire=True (or I guess a notification_ready()
>function that helps an object sanitize itself?) that the notifications
>will look at to know if it puts that on the wire-serialized dict, but
>that's adding a lot more complexity and work to a pile that's already
>growing rapidly.

I don't want to be tossing db objects across the wire.  But I also 
am not
convinced that we should be tossing the current objects over the 
wire either.
You make the point that there may be things in the object that 
shouldn't be
exposed, and I think object version bumps is another thing to watch 
out for.
So far the only object that has been bumped is Instance but in doing 
so no

notifications needed to change.  I think if we just put objects into
notifications we're coupling the notification versions to db or RPC 
changes

unnecessarily.  Some times they'll move together but other times, like
moving flavor into instance_extra, there's no reason to bump 
notifications.



Sanitizing existing versioned objects before putting them to the wire 
is not hard to do.

You can see an example of doing it in
https://review.openstack.org/#/c/245678/8/nova/objects/service.py,cm 
L382.
We don't need extra effort to take care of minor version bumps 
because that does not
break a well written consumer. We do have to take care of the major 
version bumps
but that is a rare event and therefore can be handled one by one in a 
way John

suggested, by keep sending the previous major version for a while too.


That review is doing much of what I was suggesting.  There is a 
separate notification and payload object.  The issue I have is that 
within the ServiceStatusPayload the raw Service object and version is 
being dumped, with the filter you point out.  But I don't think that 
consumers really care about tracking Service object versions and 
dealing with compatibility there, it would be easier for them to track 
the ServiceStatusPayload version which can remain relatively stable 
even if Service is changing to adapt to db/RPC changes.
Not only do they not really care about tracking the Service object 
versions, they probably also don't care about what's in that filter list.


But I think you're getting on the right track as to where this needs to 
go. We can integrate the filtering into the versioning of the payload, but 
instead of a blacklist, we turn the filter into a white list. If the 
underlying object adds a new field that we don't want or need people to 
know about, the payload version doesn't have to change. But if we add 
something (or change existing fields) that we want to expose, we then 
assert that we need to update the version of the payload, so the consumer 
can look at the payload and say "oh, in 1.x, now I get ___" and can add the 
appropriate checks/compat. Granted, with this you can get into rebase 
nightmares ([1] still haunts me in my sleep), but I don't see us frantically 
changing the exposed fields all that often. This way gives us some form of 
pseudo-pinning of the subobject. Heck, with this method, we could even pass 
the whitelist on the wire, right? That way we tell the consumer explicitly 
what's available to them (kind of like a fake schema).


I think I can whip a PoC up for this (including the tests, since I'm so 
intimately familiar with them now that I'm THE nova-objects guy) if we 
want to see where this goes.




[1] https://review.openstack.org/#/c/198730/
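
As a very rough sketch of the pseudo-pinning idea (everything here is invented 
for illustration, it's not what is in gibi's patch):

    from nova.objects import base
    from nova.objects import fields


    class ServiceStatusPayload(base.NovaObject):
        # The payload version only moves when EXPOSED_FIELDS (or the meaning
        # of one of its entries) changes, however Service itself evolves.
        VERSION = '1.0'
        EXPOSED_FIELDS = ('host', 'binary', 'disabled')

        fields = {
            'host': fields.StringField(),
            'binary': fields.StringField(),
            'disabled': fields.BooleanField(),
        }

        @classmethod
        def from_service(cls, service):
            payload = cls()
            for name in cls.EXPOSED_FIELDS:
                setattr(payload, name, getattr(service, name))
            return payload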

--
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [nova] Versioned notifications... who cares about the version?

2015-11-19 Thread Ryan Rossiter
Reading through [1] I started getting worries in the back of my head 
about versioning these notifications. The main concern being how can the 
consumer know about the versions and what's different between them? 
Because these versioned notification payloads hold live nova objects, 
there can be a lot of rug-pulling going on underneath these 
notifications. If the payload doesn't pin itself to a certain level of 
the object, a consumer can never be guaranteed the version of the 
payload's object they will be receiving. I ran through a few of the 
scenarios about irregular versions in the notifications subteam meeting 
on Tuesday [2].


My question is: do we care about the consumer? Or is it a case of 
"the consumer is always right", so we need to make sure we hand them 
super consistent, well-defined blobs across the wire? Consumers will 
have no idea of nova object internals, unless they feel like running 
`python -c 'import nova'`. I do think we get one piece of help from o.vo though. When 
the object is serialized, it hands the version with the object. So 
consumers can look at the object and say "oh, I got 1.4 I know what to 
do with this". But... they will have to implement their own compat 
logic. Everyone will have to implement their own compat logic.


We could expose a new API for getting the schema for a specific version 
of a notification, so a consumer will know what they're getting with 
their notifications. But I think that made mriedem nauseous. We could 
make an oslo library that stuffs a shim in between o.vo and nova's 
notifications to help out with compat/versioning, but that sounds like a 
lot of work, especially because the end goal is still not clearly defined.


Thoughts?

[1] https://review.openstack.org/#/c/247024
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2015-11-17.log.html#t2015-11-17T20:22:29


--
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Ryan Rossiter

Hi all,

I was having a discussion with a teammate with respect to container 
scaling. He likes the aspect of nova-docker that allows you to scale 
(essentially) infinitely almost instantly, assuming you are using a 
large pool of compute hosts. In the case of Magnum, if I'm a container 
user, I don't want to be paying for a ton of vms that just sit idle, but 
I also want to have enough vms to handle my scale when I infrequently 
need it. But above all, when I need scale, I don't want to suddenly have 
to go boot vms and wait for them to start up when I really need it.


I saw [1] which discusses container scaling, but I'm thinking we can 
take this one step further. If I don't want to pay for a lot of vms when 
I'm not using them, could I set up an autoscale policy that allows my 
cluster to expand when my container concentration gets too high on my 
existing cluster? It's kind of a case of nested autoscaling. The 
containers are scaled based on request demand, and the cluster vms are 
scaled based on container count.


I'm unsure of the details of Senlin, but at least looking at Heat 
autoscaling [2], this would not be very hard to add to the Magnum 
templates, and we would forward those on through the bay API. (I figure 
we would do this through the bay, not baymodel, because I can see 
similar clusters that would want to be scaled differently).


Let me know if I'm totally crazy or if this is a good idea (or if you 
guys have already talked about this before). I would be interested in 
your feedback.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html

[2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API

--
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Ryan Rossiter


--
Thanks,

Ryan Rossiter (rlrossit)




Re: [openstack-dev] [magnum] Is magnum db going to be removed for k8s resources?

2015-09-17 Thread Ryan Rossiter

Here's what I see if I look at this from a matter-of-fact standpoint.

When Nova works with libvirt, libvirt might have something that Nova 
doesn't know about, but Nova doesn't care. Nova's database is the only 
world that Nova cares about. This allows Nova to have one source of data.


With Magnum, if we take data from both our database and the k8s API, we 
will have a split view of the world. This has both positives and negatives.


It does allow an end-user to do whatever they want with their cluster, 
and they don't necessarily have to use Magnum to do things, but Magnum 
will still have the correct status of stuff. It allows the end-user to 
choose what they want to use. Another positive is that because each 
clustering service is architected slightly different, it allows each 
service to know what it knows, without Magnum trying to guess some 
commonality between them.


A problem I see arising is the complexity added when gathering data from 
separate clusters. If I have one of every cluster, what happens when I 
need to get my list of containers? I would rather do just one call to 
the DB and get them, otherwise I'll have to call k8s, then call swarm, 
then mesos, and then aggregate all of them to return. I don't know if 
the only thing we will be retrieving from k8s are k8s-unique objects, 
but this is a situation that comes to my mind. Another negative is the 
possibility that the API does not perform as well as the DB. If the nova 
instance running the k8s API is super overloaded, the k8s API return 
will take longer than a call to the DB.


Let me know if I'm way off-base in any of these points. I'm not going to 
give an opinion at this point, this is just how I see things.


On 9/17/2015 7:53 AM, Jay Lau wrote:

Anyone who have some comments/suggestions on this? Thanks!

On Mon, Sep 14, 2015 at 3:57 PM, Jay Lau <jay.lau@gmail.com> wrote:


Hi Vikas,

Thanks for starting this thread. Here just show some of my comments here.

There are two reasons that Magnum wants to get k8s resources via the k8s API:
1) Native client support
2) With the current implementation, we cannot get the pods for a replication
controller, because Magnum only persists the replication controller info in
the Magnum DB.

With the objects-from-bay bp, Magnum will always call the k8s API to get all
objects for pod/service/rc. Can you please share your concerns about why we
need to persist those objects in the Magnum DB? We may need to sync up the
Magnum DB and k8s periodically if we persist two copies of the objects.

Thanks!

<https://blueprints.launchpad.net/openstack/?searchtext=objects-from-bay>

2015-09-14 14:39 GMT+08:00 Vikas Choudhary <choudharyvika...@gmail.com>:


Hi Team,

As per object-from-bay blueprint implementation [1], all calls to magnum db
are being skipped for example pod.create() etc.

Are not we going to use magnum db at all for pods/services/rc ?


Thanks
Vikas Choudhary


[1] https://review.openstack.org/#/c/213368/






--
Thanks,

Jay Lau (Guangya Liu)








--
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [magnum] Associating patches with bugs/bps (Please don't hurt me)

2015-09-17 Thread Ryan Rossiter
I'm going to start out by making this clear: I am not looking to incite 
a flame war.


I've been working in Magnum for a couple of weeks now, and I'm starting 
to get down the processes for contribution. I'm here to talk about the 
process of always needing to have a patch associated with a bug or 
blueprint.


I have a good example of this being too strict. I knew the rules, so I 
opened [1] to say there are some improperly named variables and classes. 
I think it took longer for me to open the bug than it did to actually 
fix it. I think we need to start taking a look at how strict we need to 
be in requiring bugs to be opened and linked to patches. I understand 
it's a fine line between "it's broken" to "it would be nice to make this 
better".


I remember the debate when I was originally putting up [2] for review. 
The worry was that these new tests would cut into developer productivity 
because they are more strict. The same argument can be applied to opening 
these bugs. If we have to open something up for everything we want to upload 
a patch for, that's just another step in the process that takes up time.


Now, with my opinion out there, if we still want to take the direction 
of opening up bugs for everything, I will comply (I'm not the guy making 
decisions around here). I would like to see clear and present 
documentation explaining this to new contributors, though ([3] would 
probably be a good place to explain this).


Once again, not looking to start an argument. If everyone feels the way 
it works now is the best, I'm more than happy to join in :)


[1] https://bugs.launchpad.net/magnum/+bug/1496568
[2] https://review.openstack.org/#/c/217342/
[3] http://docs.openstack.org/developer/magnum/contributing.html

--
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [Magnum] API response on k8s failure

2015-09-14 Thread Ryan Rossiter
I was giving a devstacked version of Magnum a try last week, and from a 
new user standpoint, I hit a big roadblock that caused me a lot of 
confusion. Here's my story:


I was attempting to create a pod in a k8s bay, and I provided it with an 
sample manifest from the Kubernetes repo. The Magnum API then returned 
the following error to me:


ERROR: 'NoneType' object has no attribute 'host' (HTTP 500)

I hunted down the error to be occurring here [1]. The k8s_api call was 
going bad, but conductor was continuing on anyways thinking the k8s API 
call went fine. I dug through the API calls to find the true cause of 
the error:


{u'status': u'Failure', u'kind': u'Status', u'code': 400, u'apiVersion': 
u'v1beta3', u'reason': u'BadRequest', u'message': u'Pod in version v1 
cannot be handled as a Pod: no kind "Pod" is registered for version 
"v1"', u'metadata': {}}


It turned out the error was because the manifest I was using had 
apiVersion v1, not v1beta3. That was very hard to figure out from Magnum's 
original 500.


This all does occur within a try, but the k8s API isn't throwing any 
sort of exception that can be caught by [2]. Was this caused by a 
regression in the k8s client? It looks like the original intention was to 
catch something going wrong in k8s and then forward the message & error 
code on, so the Magnum API could return that.


My question here is: does this classify as a bug? This happens in more 
places than just the pod create. It's changing around API returns (quite 
a few of them), and I don't know how that is handled in the Magnum 
project. If we want to have this done as a blueprint, I can open that up 
and target it for Mitaka, and get to work. If it should be opened up as 
a bug, I can also do that and start work on it ASAP.


[1] 
https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L88-L108
[2] 
https://github.com/openstack/magnum/blob/master/magnum/conductor/handlers/k8s_conductor.py#L94


--
Thanks,

Ryan Rossiter (rlrossit)




[openstack-dev] [magnum] Maintaining cluster API in upgrades

2015-09-14 Thread Ryan Rossiter
I have some food for thought with regards to upgrades that was provoked 
by some incorrect usage of Magnum which led me to finding [1].


Let's say we're running a cloud with Liberty Magnum, which works with 
Kubernetes API v1. During the Mitaka release, Kubernetes released v2, so 
now Magnum conductor in Mitaka works with Kubernetes v2 API. What would 
happen if I upgrade from L to M with Magnum? My existing Magnum/k8s 
stuff will be on v1, so having Mitaka conductor attempt to interact with 
that stuff will cause it to blow up right? The k8s API calls will fail 
because the communicating components are using differing versions of the 
API (assuming there are backwards incompatibilities).


I'm running through some suggestions in my head in order to handle this:

1. Have conductor maintain all supported older versions of k8s, and do 
API discovery to figure out which version of the API to use

  - This one sounds like a total headache from a code management standpoint

2. Do some sort of heat stack update to upgrade all existing clusters to 
use the current version of the API
  - In my head, this would work kind of like a database migration, but 
it seems like it would be a lot harder


3. Maintain cluster clients outside of the Magnum tree
  - This would make maintaining the client compatibilities a lot easier
  - Would help eliminate the cruft of merging 48k lines for a swagger 
generated client [2]

  - Having the client outside of tree would allow for a simple pip install
  - Not sure if this *actually* solves the problem above

This isn't meant to be a "we need to change this" topic, it's just meant 
to be more of a "what if" discussion. I am also up for suggestions other 
than the 3 above.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074448.html

[2] https://review.openstack.org/#/c/217427/

--
Thanks,

Ryan Rossiter (rlrossit)



Re: [openstack-dev] [magnum] versioned objects changes

2015-08-27 Thread Ryan Rossiter
If you want my inexperienced opinion, a young project is the perfect 
time to start this. Nova has had a bunch of problems with versioned 
objects that don't get realized until the next release (because that's 
the point in time at which grenade (or worse, operators) catch this). At 
that point, you then need to hack things around and backport them in 
order to get them working in the old branch. [1] is an excellent example 
of Nova having to backport a fix to an object because we weren't using 
strict object testing.


I don't feel that this should be adding overhead to contributors and 
reviewers. With [2], this test absolutely helps both contributors and 
reviewers. Yes, it requires fixing things when a change happens to an 
object. Learning to do this fix to update object hashes is extremely 
easy to do, and I hope my updated comment on there makes it even easier 
(also be aware I am new to OpenStack & Nova as of about 2 months ago, so 
this stuff was new to me too not very long ago).


I understand that something like [2] will cause a test to fail when you 
make a major change to a versioned object. But you *want* that. It helps 
reviewers more easily catch contributors and say "You need to update the 
version, because the hash changed." The sooner you start using versioned 
objects in the way they are designed, the smaller the upfront cost, and 
it will also be a major savings later on if something like [1] pops up.


[1]: https://bugs.launchpad.net/nova/+bug/1474074
[2]: https://review.openstack.org/#/c/217342/

On 8/27/2015 9:46 AM, Hongbin Lu wrote:


-1 from me.

IMHO, the rolling upgrade feature makes sense for a mature project 
(like Nova), but not for a young project like Magnum. It incurs 
overheads for contributors & reviewers to check the object 
compatibility in each patch. As you mentioned, the key benefit of this 
feature is supporting different versions of magnum components running 
at the same time (i.e. running magnum-api 1.0 with magnum-conductor 
1.1). I don’t think supporting this advanced use case is a must at the 
current stage.


However, I don't mean to be against merging patches for this feature. I 
just disagree with enforcing the rule of object version changes in the near 
future.


Best regards,

Hongbin

From: Grasza, Grzegorz [mailto:grzegorz.gra...@intel.com]
Sent: August-26-15 4:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] versioned objects changes

Hi,

I noticed that right now, when we make changes (adding/removing 
fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we 
don't change object versions.


The idea of objects is that each change in their fields should be 
versioned, documentation about the change should also be written in a 
comment inside the object and the obj_make_compatible method should be 
implemented or updated. See an example here:


https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27

The question is, do you think magnum should support rolling upgrades 
from next release or maybe it's still too early?


If yes, I think core reviewers should start checking for these 
incompatible changes.


To clarify, rolling upgrades means support for running magnum services 
at different versions at the same time.


In Nova, there is an RPC call in the conductor to backport objects, 
which is called when older code gets an object it doesn’t understand. 
This patch does this in Magnum: https://review.openstack.org/#/c/184791/ .


I can report bugs and propose patches with version changes for this 
release, to get the effort started.


In Mitaka, when Grenade gets multi-node support, it can be used to add 
CI tests for rolling upgrades in Magnum.


/ Greg





--
Thanks,

Ryan Rossiter



Re: [openstack-dev] [magnum] versioned objects changes

2015-08-26 Thread Ryan Rossiter
I've been working with Nova's versioned objects lately to help catch it 
when people make object changes. There are a lot of object-related 
tests in Nova for this, and a major one I can see helping this situation 
is this test [1]. Looking through the different versioned objects within 
Magnum, I don't see any objects that hold subobjects, so tests like [2] 
are not really necessary yet.


I have uploaded a review for bringing [1] from Nova into Magnum [3]. I 
think this will be a major step in the right direction towards keeping 
track of object changes that will help with rolling upgrades.


[1]: 
https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1262-L1286
[2]: 
https://github.com/openstack/nova/blob/master/nova/tests/unit/objects/test_objects.py#L1314

[3]: https://review.openstack.org/#/c/217342/
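
For reference, the obj_make_compatible pattern Greg describes below looks 
roughly like this (a partial sketch; the field name and version history are 
invented for the example):

    from oslo_utils import versionutils

    from magnum.objects import base


    class Bay(base.MagnumObject):
        # Version 1.0: initial version
        # Version 1.1: added 'node_count' (field name invented for the example)
        VERSION = '1.1'

        def obj_make_compatible(self, primitive, target_version):
            super(Bay, self).obj_make_compatible(primitive, target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 1):
                # An older service doesn't know about node_count, so drop it
                # from the primitive before handing the object back.
                primitive.pop('node_count', None)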

On 8/26/2015 3:47 AM, Grasza, Grzegorz wrote:


Hi,

I noticed that right now, when we make changes (adding/removing 
fields) in 
https://github.com/openstack/magnum/tree/master/magnum/objects , we 
don't change object versions.


The idea of objects is that each change in their fields should be 
versioned, documentation about the change should also be written in a 
comment inside the object and the obj_make_compatible method should be 
implemented or updated. See an example here:


https://github.com/openstack/nova/commit/ad6051bb5c2b62a0de6708cd2d7ac1e3cfd8f1d3#diff-7c6fefb09f0e1b446141d4c8f1ac5458L27

The question is, do you think magnum should support rolling upgrades 
from next release or maybe it's still too early?


If yes, I think core reviewers should start checking for these 
incompatible changes.


To clarify, rolling upgrades means support for running magnum services 
at different versions at the same time.


In Nova, there is an RPC call in the conductor to backport objects, 
which is called when older code gets an object it doesn’t understand. 
This patch does this in Magnum: https://review.openstack.org/#/c/184791/ .


I can report bugs and propose patches with version changes for this 
release, to get the effort started.


In Mitaka, when Grenade gets multi-node support, it can be used to add 
CI tests for rolling upgrades in Magnum.


/ Greg





--
Thanks,

Ryan Rossiter
