> As the person who -2'd the review, I'm thankful you raised this issue on
> the ML, Jay. Much appreciated.
The "metadetails" term isn't being invented in this patch, of course. I
originally complained about the difference when this was being added:
https://review.openstack.org/#/c/109505/1/nova/
> Looks reasonable to me.
+1
--Dan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> I'm not questioning the value of f2f - I'm questioning the idea of
> doing f2f meetings sooo many times a year. OpenStack is very much
> the outlier here among open source projects - the vast majority of
> projects get along very well with much less f2f time and a far
> smaller % of their contrib
On 8/13/14 11:20 AM, Mike Bayer wrote:
> On Aug 13, 2014, at 1:44 PM, Russell Bryant wrote:
>> I disagree. IMO, *expecting* people to travel, potentially across
>> the globe, 4 times a year is an unreasonable expectation, and
>> quite uncharacteristic of open source projects. If we can't figur
> You may have noticed that this has merged, along with a further change
> that shows the latest results in a table format. (You may need to
> force-reload in your browser to see the change.)
Friggin. Awesome.
> Thanks again to Radoslav Gerganov for writing the original change.
Thanks to all in
> == Move Virt Drivers to use Objects (Juno Work) ==
>
> I couldn't actually find any code out for review for this one apart
> from https://review.openstack.org/#/c/94477/, is there more out there?
This was an umbrella one to cover a bunch of virt driver objects work
done early in the cycle. Much
> OS or os is operating system. I am starting to see some people use OS or
> os to mean OpenStack. This is confusing and also incorrect[0].
Except in the nova API code, where 'os' means 'openstack'.
I too think that policing the use of abbreviations is silly. It's a
confusing abbreviation, but ser
> Feature freeze is only a few weeks away (Sept 4). How about we just
> leave it in experimental until after that big push? That seems pretty
> reasonable.
Joe just proposed dropping a bunch of semi-useless largeops runs. Maybe
that leaves room to sneak this in? If the infra team is okay with it
> The main sr-iov patches have gone through lots of code reviews, manual
> rebasing, etc. Now we have some critical refactoring work on the
> existing infra to get it ready. All the code for refactoring and sr-iov
> is up for review.
I've been doing a lot of work on this recently, and plan to se
> All the other patches from this blueprint have merged, the only
> remaining patch really just needs a +W as it has been extensively
> reviewed and already approved previously. This may be an easy
> candidate since Andrew Laski, Jay Pipes and Dan Smith have reviewed
>> The last few days have been interesting as I watch FFEs come through.
>> People post explaining their feature, its importance, and the risk
>> associated with it. Three cores sign on for review. All of the ones
>> I've looked at have received active review since being posted. Would
>> it be bonk
> As far as I understand it, though, that's a patch for a read-only
> mode. It seems bizarre, and possibly dangerous, to proxy read
> commands, but not write commands. It gives the impression that
> everything's fine until it's not fine (because someone tried to use
> an existing script to do a c
> 1) Is this tested anywhere? There are no unit tests in the patch and
> it's not clear to me that there would be any Tempest coverage of this
> code path. Providing this and having it break a couple of months down
> the line seems worse than not providing it at all. This is obviously
> fixable
> Please respond with +1/-1, or any further comments.
+1 from me -- Matt has been helping a lot lately.
--Dan
> So just to clarify: the native driver for another hypervisor (bhyve)
> would not be accepted into Nova even if it met testing coverage
> criteria? As I said the libvirt route is an option we consider, but we
> would like to have the possibility of a native FreeBSD api integration
> as well, simil
> My overall concern, and I think the other guys doing this for virt
> drivers will agree, is trying to scope down the exposure to unrelated
> failures.
But, if it's not related to your driver, then it also failed in the
upstream gate, right? If it didn't fail the upstream gate, then it is
some we
> eg use a 'env_' prefix for glance image attributes
>
> We've got a couple of cases now where we want to overrides these
> same things on a per-instance basis. Kernel command line args
> is one other example. Other hardware overrides like disk/net device
> types are another possibility
>
> Rathe
> I am fine with this, but I will never be attending the 1400 UTC
> meetings, as I live in UTC-8
I too will miss the 1400 UTC meetings during the majority of the year.
During PDT I will be able to make them, but will be uncaffeinated.
--Dan
> Using this approach I think we can support live upgrades from N-1 to N
> while still being able to drop some backwards compatibility code each
> release cycle.
Agreed. We've been kinda slack about bumping the RPC majors for a while,
which means we end up with a lot of cruft and comments like "#N
> If an object A contains another object or object list (called
> sub-object), any change happened in the sub-object can't be detected
> by obj_what_changed() in object A.
Well, like the Instance object does, you can override obj_what_changed()
to expose that fact to the caller. However, I think
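The override described above can be sketched with toy classes (names like FakeInstance and FakeInfoCache are invented for illustration; this is not the actual Nova code):

```python
# Toy sketch of a parent object exposing sub-object changes via
# obj_what_changed(), in the spirit of what Instance does in Nova.

class FakeObject:
    def __init__(self):
        self._changed = set()

    def __setattr__(self, name, value):
        # Track every non-private attribute assignment as a change.
        if not name.startswith('_'):
            self._changed.add(name)
        super().__setattr__(name, value)

    def obj_what_changed(self):
        return set(self._changed)


class FakeInfoCache(FakeObject):
    pass


class FakeInstance(FakeObject):
    def __init__(self):
        super().__init__()
        self.info_cache = FakeInfoCache()
        self._changed.clear()  # a freshly built object starts "clean"

    def obj_what_changed(self):
        # Report the sub-object's field as changed if anything inside
        # it changed, so callers see nested modifications too.
        changed = super().obj_what_changed()
        if self.info_cache.obj_what_changed():
            changed.add('info_cache')
        return changed


inst = FakeInstance()
inst.info_cache.network_info = '[]'
print(inst.obj_what_changed())  # {'info_cache'}
```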
> Sounds good to me. The list base objects don't have methods to make changes
> to the list - so it would be a case of iterating looking at each object in
> the list. That would be ok.
Hmm? You mean for NovaObjects that are lists? I hesitate to expose lists
as changed when one of the objects in
> This patch set makes the extra_resources a list of object, instead of
> opaque json string. What do you think about that?
Sounds better to me, I'll go have a look.
> However, the compute resource object is different from the current
> NovaObject, a) it has no corresponding table, but just a field in
> ObjectListBase has a field called objects that is typed
> fields.ListOfObjectsField('NovaObject'). I can see methods for count
> and index, and I guess you are talking about adding a method for "are
> any of your contents changed" here. I don't see other list operations
> (like append, insert, re
> Hi Dan, are you going to cook a patch to expand the base class? Or we can do
> that ourselves?
Yeah, I'll try to get to that today.
--Dan
> Effective immediately, I would like to unfreeze nova-network
> development.
I fully support this plan, while also agreeing that Neutron is the
future of networking for OpenStack. As we have seen with recent
performance-related gate failures, we cannot continue to ignore
nova-network while the re
> I was thinking for the upgrade process that we could leverage the port
> attach/detach BP done by Dan Smith a while ago. This has libvirt support
> and there are patches pending approval for Xen and Vmware. Not sure about
> the other drivers.
>
> If the guest can deal with
> If necessary the tasks work could be done solely as an extension, but I
> would really prefer to avoid that so I'll get this ball rolling quickly.
I agree that doing it as a bolt-on to v3 would be significantly less
favorable than making it an integrated feature of the API. IMHO, if a
server cre
> I'm of the opinion that the scheduler should use objects, for all the
> reasons that Nova uses objects, but that they should not be Nova
> objects. Ultimately what the scheduler needs is a concept of capacity,
> allocations, and locality of resources. But the way those are modeled
> doesn't nee
> Basically, if object A has object B as a child, and deserialization
> finds object B to be an unrecognized version, it will try to back
> port the object A to the version number of object B.
Right, which is why we rev the version of, say, the InstanceList when we
have to rev Instance itself, and
> And the error messages, which look like this:
>
> Returning exception Unexpected task state: expecting [u'scheduling',
> None] but the actual state is deleting to caller
>
> don't make sense -- at least in the English language.
It's missing some grouping operator to help with order of operatio
> The inner exception is a thing and the outer pieces are a thing. The
> inner means that some instance update was attempted, but should be
> aborted if the instance state is not what we think it is.
And looking a little closer, I think it means that the
messaging.expected_exceptions decorator isn
>> What's the underlying problem here? Nova notifications aren't
>> versioned? Nova should try to support ceilometer's use case, so
>> it sounds like there may be a nova issue in here as well.
>
> Oh you're far from it.
>
> Long story short, the problem is that when an instance is destroyed,
>
> We don't have to add a new notification, but we have to add some
> new data in the nova notifications. At least for the delete
> instance notification, to remove the ceilometer nova notifier.
>
> A while ago, I registered a blueprint that explains which
> data are missing in the current no
> I would also like to see CI (either third party or in the gate) for
> the nova driver before merging it. There's a chicken and egg problem
> here if its in the gate, but I'd like to see it at least proposed as a
> review.
Yeah, I think that the existing nova-baremetal driver is kinda frozen in
a
> - "fat model" approach - put the db interaction in objects
If it's just DB interaction, then yes, in the object for sure.
> - put the db interactions in the conductor itself
There is a reasonable separation between using conductor for mechanics
(i.e. API deferring a long-running activity to co
> - We want to make backwards incompatible changes to the API
> and whether we do it in-place with V2 or by releasing V3
> we'll have some form of dual API support burden.
IMHO, the cost of maintaining both APIs (which are largely duplicated)
for almost any amount of time outweighs the cost of
> The API layer is actually quite a thin layer on top of the
> rest of Nova. Most of the logic in the API code is really just
> checking incoming data, calling the underlying nova logic and then
> massaging what is returned in the correct format. So as soon as you
> change the format the cos
> So the deprecation message in the patch says:
>
> LOG.warning(_('XML support has been deprecated and will be
> removed in the Juno release.'))
>
> perhaps that should be changed :-)
Maybe, but I think we can continue with the plan to rip it out in Juno.
In the past when we've a
> onSharedStorage = True
> on_shared_storage = False
This is a good example. I'm not sure it's worth breaking users _or_
introducing a new microversion for something like this. This is
definitely what I would call a "purity" concern as opposed to "usability".
Things like the twenty different date
> I thought micro versioning was so we could make backwards compatible changes.
> If we make breaking changes we need to support the old and the new for
> a little while.
Adding a new field alongside an old one in a structure that we return is
not a breaking change, IMHO. We can clean up the
> I think we need to find an alternative way to support the new and old
> formats, like Accepts Headers, and retro-fitting a version to
> extensions so we can easily advertise new attributes, to those parsers
> that will break when they encounter those kinds of things.
Agreed.
> Now I am tempted
> +1, it seems we could explore for another cycle just to find out that
> backporting everything to V2 isn't going to be what we want, and now
> we've just wasted more time.
> If we say it's just deprecated and frozen against new features, then
> its maintenance is just limited to bug fixes right
> Yeah, so objects is the big one here.
Objects, and everything else. With no-db-compute we did it for a couple
cycles, then objects, next it will be retooling flows to conductor, then
dealing with tasks, talking to gantt, etc. It's not going to end any
time soon.
> So what kind of reaction are t
> This would reduce the amount of duplication which is required (I doubt
> we could remove all duplication though) and whether it's worth it for, say,
> the rescue example is debatable. But for those cases you'd only need to make
> the modification in one file.
Don't forget the cases where the call c
> So I was thinking about this and Ken'ichi has basically said pretty
> much the same thing in his reply to this thread. I don't think it
> makes client moves any easier though - this is all about lowering our
> maintenance costs.
So, in the other fork of this thread, I think you said we can't im
> We may need to differentiate between breaking the API and breaking
> corner-case behavior.
Totally agreed.
> In one case you force everyone in the ecosystem to
> adapt (the libraries, the end user code). In the other you only
> (potentially) affect those that were not following the API correctl
> Users actually care about the latter. If the API accepts 'red' as a
> valid UUID then that is part of the implicit contract.
Yeah, but right now, many of those things are "would fail on postgres
and succeed on mysql" (not uuids necessarily, but others). Since we
can't honor them in all cases, I
> So if we make backwards incompatible changes we really need a major
> version bump. Minor versions don't cut it, because the expectation is
> you have API stability within a major version.
I disagree. If the client declares support for it, I think we can very
reasonably return new stuff.
If we
> So I think once we start returning different response codes, or
> completely different structures (such as the tasks change will be), it
> doesn't matter if we make the change in effect by invoking /v2 prefix
> or /v3 prefix or we look for a header. Its a major api revision. I
> don't think we sh
> Sure, but that's still functionally equivalent to using the /v2 prefix.
> So we could chuck the current /v3 code and do:
>
> /v2: Current thing
> /v3: invalid, not supported
> /v4: added simple task return for server create
> /v5: added the event extension
> /v6: added a new event for cinder to
> I do think client headers instead of urls have some pragmatic
> approach here that is very attractive. Will definitely need a good
> chunk of plumbing to support that in a sane way in the tree that
> keeps the overhead from a review perspective low.
Aside from some helper functions to make this
> So whilst we still have extensions (and that's a separate debate) we
> need versioning on a per extension basis. Otherwise people are forced
> to upgrade their extensions in lockstep with each other.
I think that some people would argue that requiring the extensions to go
together linearly is
> In a chat with Dan Smith on IRC, he was suggesting that the important
> thing was not to use class paths in the config file. I can see that
> internal implementation should not be exposed in the config files -
> that way the implementation can change without impacting the nova
>
> How about using 'unstable' as a component of the entrypoint group?
> E.g., "nova.unstable.events"…
Well, this is a pretty heavy way to ensure that the admin gets the
picture, but maybe appropriate :)
What I don't think we want is the in-tree plugins having to hook into
something called "unstabl
> What I'd like to do next is work through a new proposal that includes
> keeping both v2 and v3, but with a new added focus of minimizing the
> cost. This should include a path away from the dual code bases and to
> something like the "v2.1" proposal.
I think that the most we can hope for is con
> However, it presents a problem when we consider NovaObjects, and
> dependencies between them.
I disagree with this assertion, because:
> For example, take Instance.save(). An
> Instance has relationships with several other object types, one of which
> is InstanceInfoCache. Consider the followin
> * Flavor.save() makes an unbounded number of db calls in separate
> transactions.
This is actually part of the design of the original flavors public API.
Since we can add and remove projects/specs individually, we avoid ending
up with just one or the other group of values, for competing requests
> However, because of nova objects pylint is progressively less and less
> useful. So the fact that no one else looked at it means that people
> didn't seem to care that it was provably broken. I think it's better
> that we just delete the jobs and save a node on every nova patch instead.
Agreed.
> 3. vish brought up one drawback of versioned objects: the difficulty in
> cherry-picking commits for stable branches - is this a show stopper?
After some discussion with some of the interested parties, we're
planning to add a third .z element to the version numbers and use that
to handle backp
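The idea can be sketched like this (helper names are invented for illustration; the actual scheme was still being worked out at the time):

```python
# Hedged sketch: a third ".z" version component marks a stable-branch
# backport without changing the wire format, so compatibility checks
# only compare the x.y part.

def version_tuple(v):
    return tuple(int(p) for p in v.split('.'))

def is_compatible(mine, target):
    # An object at version `mine` can serve a peer expecting `target`
    # if the major.minor parts match; .z is ignored on the wire.
    return version_tuple(mine)[:2] == version_tuple(target)[:2]

print(is_compatible('1.5.1', '1.5'))  # True: the backport is invisible
print(is_compatible('1.6', '1.5'))    # False: a real format change
```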
> I'm going to be honest and say I'm confused here.
>
> We've always said we expect cores to maintain an average of two
> reviews per day. That's not new, nor a rule created by me. Padraig is
> a great guy, but has been working on other things -- he's done 60
> reviews in the last 60 days -- which
> The argument boils down to there is a communications cost to adding
> someone to core, and therefore there is a maximum size before the
> communications burden becomes too great.
I'm definitely of the mindset that the core team is something that has a
maximum effective size. Nova is complicated
> [joehuang] Could you please clarify the deployment mode of cells
> when used for globally distributed DCs with a single API. Do
> you mean cinder/neutron/glance/ceilometer will be shared by all
> cells, and use RPC for inter-DC communication, and only support one
> vendor's OpenStack d
>> 2. There's no way to add an existing server to this "group".
>
> In the original API there was a way to add existing servers to the
> group. This didn't make it into the code that was submitted. It is
> however supported by the instance group db API in nova.
>
>> 3. There's no way to remove
> I'd like to propose the ability to support a pluggable trove conductor
> manager. Currently the trove conductor manager is hard-coded [1][2] and
> thus is always 'trove.conductor.manager.Manager'. I'd like to see this
> conductor manager class be pluggable like nova does [3].
Note that most of u
dn't seem to be an obvious advantage to using RPC rather
> than the rest interface. Lastly, this new interface that nova exposes
> is generic and not neutron specific as it can be used for other type
> of notifications that things want to send nova. I added Dan Smith to
> CC to keep
> I have an additional concern: the API is something that's user-facing,
> so basically Nova is now exposing some internal synchronization
> detail to the outside world.
We have lots of admin-only APIs.
> Does it make sense that the user would now be able to send messages
> to this API?
Potentially.
> It would be nice to have an informal discussion / unconference session
> before the actual summit session on SR-IOV. During the previous IRC
> meeting, we were really close to identifying the different use cases.
> There was a dangling discussion on introducing another level of
> indirection betw
> On a compute manager that is still running the old version of the code
> (i.e using the previous object version), if a method that hasn’t yet
> been converted to objects gets a dict created from the new version of
> the object (e.g. rescue, get_console_output), then object_compat()
> decorator w
I think it'd be OK to move them to the experimental queue and a periodic
nightly job until the v2.1 stuff shakes out. The v3 API is marked
experimental right now so it seems fitting that it'd be running tests in
the experimental queue until at least the spec is approved and
microversioning starts
That's true, though I was suggesting that as v2.1 microversions roll out,
we drop the tests from v3 and move them to v2.1 microversions testing, so
there's no change in capacity required.
Right now we run a full set over /v2 and a full set over /v3. Certainly
as we introduce /v2.1 we'll need full cover
> Why accept it?
>
> * It's low-risk but needed refactoring of code that has
> been a source of occasional bugs.
> * It is very low risk internal refactoring that uses code that has been
> in tree for some time now (BDM objects).
> * It has seen its fair share of reviews
Yeah, th
> However, when attempting to boot an instance, the Nova network service
> fails to retrieve network information from the controller. Adding the
> database keys resolves the problem. I'm using
> the 2014.1~b3-0ubuntu1~cloud0 packages on Ubuntu 12.04.
Can you file a bug with details from the lo
> https://bugs.launchpad.net/nova/+bug/1290568
Thanks. Note that the objects work doesn't really imply that the service
doesn't hit the database. In fact, nova-compute stopped hitting the
database before we started on the objects work.
Anyway, looks like there are still some direct-to-database th
> Hmm... I guess the blueprint summary led me to believe that nova-network
> no longer needs to hit the database.
Yeah, using objects doesn't necessarily mean that the rest of the direct
database accesses go away. However, I quickly cooked up the rest of what
is required to get this done:
https:/
> I'm confused as to why we arrived at the decision to revert the commits
> since Jay's patch was accepted. I'd like some details about this
> decision, and what new steps we need to take to get this back in for Juno.
Jay's fix resolved the immediate problem that was reported by the user.
However,
> Here is the latest marked fail -
> http://logs.openstack.org/28/79628/4/check/check-tempest-dsvm-neutron/11f8293/
So, looking at this a little bit, you can see from the n-cpu log that
it is getting failures when talking to neutron. Specifically,
> Because of where we are in the freeze, I think this should wait
> until Juno opens to fix. Icehouse will only be compatible with
> SQLA 0.8, which I think is fine. I expect the rest of the issues
> can be addressed during Juno 1.
Agreed. I think we
> Just to answer this point, despite the review latency, please don't be
> tempted to think one big change will get in quicker than a series of
> little, easy to review, changes. All changes are not equal. A large
> change often scares me away to easier-to-review patches.
>
> Seems like, for Juno-
> If we managed to break Horizon, its likely we've broken (or will break)
> other people's scripts or SDKs.
>
> The patch was merged in October (just after Icehouse opened) and so has
> been used in clouds that do CD for quite a while. After some discussion
> on IRC I think we'll end up having to
> Any ideas on what might be going on would be appreciated.
This looks like something that should be filed as a bug. I don't have
any ideas offhand, but I will note that the reconnection logic works
fine for us in the upstream upgrade tests. That scenario includes
starting up a full stack, then t
> Where can I obtain more information about this feature?
From the blog post that I've yet to write :D
> Does above imply that database is upgraded along with control
> service update as well?
Yes, but only for the services that interact directly
Hi all,
I would like to run for the OpenStack Compute (Nova) PTL position.
Qualifications
--------------
I have been working almost exclusively on Nova since mid-2012, and have
been on the nova-core team since late 2012. I am also a member of
nova-drivers, where I help to target and prioritize
> At run time there are decorators that behave in an unexpected manner.
> For instance, in nova/compute/manager.py when ComputeManager's
> resize_instance method is called, the "migration" positional argument
> is somehow added to kwargs (paired with the "migration" key) and is
> stripped out of th
>> So I'm a soft -1 on dropping it from hacking.
Me too.
> from testtools import matchers
> ...
>
> Or = matchers.Or
> LessThan = matchers.LessThan
> ...
This is the right way to do it, IMHO, if you have something like
matchers.Or that needs to be treated like part of the syntax. Otherwise,
mod
> If feels like the right sequence is:
>
> - Deploy the new code in Nova and at the same time set
> vif_plugging_is-fatal=False, so that Nova will wait for Neutron, but
> will still continue if the event never turns up (which is kind of like
> the code was before, but with a wait)
Yes, b
> There may be some consistency work needed. I spent some time/text in
> justification around no security impact in a spec. I was guided
> specifically that None was a better statement.
I think you're referring to me. What I said was, you went into a lot of
depth explaining why there was no secu
> I'm not asking for 100% consistency. I'm just raising it since it seems
> to be early in the process change and want to work out these kinds of
> things. If it turns out to be an outlier then great.
Sure, and the spec reviewers are learning in this process as well. It
takes a certain amount of
> Do we really want to -1 for spelling mistake in nova-specs?
I do, yes. These documents are intended to be read by deployers and
future developers. I think it's really important that they're useful in
that regard.
> This is really bad news for non-native speakers like me because I'm
> really n
> and I don't see how https://review.openstack.org/#/c/121663/ is actually
> dependent on https://review.openstack.org/#/c/119521/.
Yeah, agreed. I think that we _need_ the fix patch in Juno. The query
optimization is good, and something we should take, but it makes me
nervous sliding something li
> The value it adds (and that an underscore would add in hvtype ->
> hv_type) is that the name would match the naming style for the vast
> majority of everything else in OpenStack. See, for examples:
Agreed.
> As mentioned in the review, I disagree on this point, since "doing a
> cleanup afterwar
> OK, so in reviewing Dan B's patch series that refactors the virt
> driver's get_available_resource() method [1], I am stuck between two
> concerns. I like (love even) much of the refactoring work involved in
> Dan's patches. They replace a whole bunch of our nested dicts that are
> used in the re
> As there are multiple interfaces using non versioned dicts and as we are
> looking at reducing technical debt by Kilo, there are different
> blueprints which can be worked in parallel.
I don't think I disagree with anything above, but I'm not sure what
you're getting at. I think the parallelism
> The rationale behind two parallel data model hierarchies is that the
> format the virt drivers report data in is not likely to be exactly
> the same as the format that the resource tracker / scheduler wishes to
> use in the database.
Yeah, and in cases where we know where that line is, it makes
> I won’t have much time for OpenStack, but I’m going to continue to
> hang out in the channels.
Nope, sorry, veto.
Some options to explain your way out:
1. Oops, I forgot it wasn't April
2. I have a sick sense of humor; I'm getting help for it
3. I've come to my senses after a brief break from
> While fixing some bugs, I found that some code in
> nova/compute/api.py
> sometimes uses the db and sometimes uses objects. Do we have
> any criteria for this? I know we can't access the db in compute-layer code,
> but how about elsewhere? Prefer objects or direct db access? Thanks.
Prefer
> If we really need that level of arbitrary complexity and future name
> values we should then just:
>
> pci_passthrough_cfg = /etc/nova/pci_pass.yaml
I hate to have to introduce a new thing like that, but I also think that
JSON-encoded config variable strings are a nightmare. They lead to bugs
li
> Are current nova CI platforms configured with nova-networking or with
> neutron networking? Or is networking in general not even a part of the
> nova CI approach?
I think we have several that only run on Neutron, so I think it's fine
to just do that.
--Dan
> An initial inconsistency I have noticed is that some objects refresh
> themselves from the database when calling save(), but others don't.
I agree that it would be ideal for all objects to behave the same in
this regard. I expect that in practice, it's not necessary for all
objects to do this,
> I personally favour having consistent behaviour across the board. How
> about updating them all to auto-refresh by default for consistency,
> but adding an additional option to save() to disable it for particular
> calls?
I think these should be two patches: one to make them all auto-refresh,
an
> I am proposing Wednesday as the meeting day since it's more open than
> Tues/Thurs so finding a meeting room at almost any time should be
> feasible. My opening bid is alternating between 1700 and 2200 UTC.
> That should provide options that aren't too early or too late for most
> people. Is t