As the person who -2'd the review, I'm thankful you raised this issue on
the ML, Jay. Much appreciated.
The metadetails term isn't being invented in this patch, of course. I
originally complained about the difference when this was being added:
Looks reasonable to me.
OpenStack-dev mailing list
I'm not questioning the value of f2f - I'm questioning the idea of
doing f2f meetings sooo many times a year. OpenStack is very much
the outlier here among open source projects - the vast majority of
projects get along very well with much less f2f time and a far
smaller % of their
On 8/13/14 11:20 AM, Mike Bayer wrote:
On Aug 13, 2014, at 1:44 PM, Russell Bryant rbry...@redhat.com
I disagree. IMO, *expecting* people to travel, potentially across
the globe, 4 times a year is an unreasonable expectation, and
quite uncharacteristic of open source projects. If we
You may have noticed that this has merged, along with a further change
that shows the latest results in a table format. (You may need to
force-reload in your browser to see the change.)
Thanks again to Radoslav Gerganov for writing the original change.
Thanks to all
== Move Virt Drivers to use Objects (Juno Work) ==
I couldn't actually find any code out for review for this one apart
from https://review.openstack.org/#/c/94477/, is there more out there?
This was an umbrella one to cover a bunch of virt driver objects work
done early in the cycle. Much of
OS or os is operating system. I am starting to see some people use OS or
os to mean OpenStack. This is confusing and also incorrect.
Except in the nova API code, where 'os' means 'openstack'.
I too think that policing the use of abbreviations is silly. It's a
confusing abbreviation, but
Feature freeze is only a few weeks away (Sept 4). How about we just
leave it in experimental until after that big push? That seems pretty
Joe just proposed dropping a bunch of semi-useless largeops runs. Maybe
that leaves room to sneak this in? If the infra team is okay with it,
The main sr-iov patches have gone through lots of code reviews, manual
rebasing, etc. Now we have some critical refactoring work on the
existing infra to get it ready. All the code for refactoring and sr-iov
is up for review.
I've been doing a lot of work on this recently, and plan to see
All the other patches from this blueprint have merged, the only
remaining patch really just needs a +W as it has been extensively
reviewed and already approved previously. This may be an easy
candidate since Andrew Laski, Jay Pipes and Dan Smith have reviewed
The last few days have been interesting as I watch FFEs come through.
People post explaining their feature, its importance, and the risk
associated with it. Three cores sign on for review. All of the ones
I've looked at have received active review since being posted. Would
it be bonkers to
As far as I understand it, though, that's a patch for a read-only
mode. It seems bizarre, and possibly dangerous, to proxy read
commands, but not write commands. It gives the impression that
everything's fine until it's not fine (because someone tried to use
an existing script to do a
1) Is this tested anywhere? There are no unit tests in the patch and
it's not clear to me that there would be any Tempest coverage of this
code path. Providing this and having it break a couple of months down
the line seems worse than not providing it at all. This is obviously
eg use a 'env_' prefix for glance image attributes
We've got a couple of cases now where we want to overrides these
same things on a per-instance basis. Kernel command line args
is one other example. Other hardware overrides like disk/net device
types are another possibility
I am fine with this, but I will never be attending the 1400 UTC
meetings, as I live in UTC-8
I too will miss the 1400 UTC meetings during the majority of the year.
During PDT I will be able to make them, but will be uncaffeinated.
Using this approach I think we can support live upgrades from N-1 to N
while still being able to drop some backwards compatibility code each
Agreed. We've been kinda slack about bumping the RPC majors for a while,
which means we end up with a lot of cruft and comments like
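The version-cap pattern being described could be sketched roughly like this (class and version numbers are illustrative, not Nova's actual RPC API):

```python
# Illustrative sketch of an RPC client capping the message version it
# sends, so N and N-1 services interoperate during a rolling upgrade;
# the cap is lifted once every node is running N.
class ComputeRPCAPI:
    RPC_API_VERSION = (3, 0)  # what this release speaks natively

    def __init__(self, version_cap=None):
        # 'version_cap' would come from config during the upgrade window
        self.version_cap = version_cap or self.RPC_API_VERSION

    def can_send(self, version):
        return version <= self.version_cap

api = ComputeRPCAPI(version_cap=(2, 5))
api.can_send((3, 0))  # False: hold back new-format messages
api.can_send((2, 4))  # True: old services still understand this
```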
If an object A contains another object or object list (called
sub-object), any change happened in the sub-object can't be detected
by obj_what_changed() in object A.
Well, like the Instance object does, you can override obj_what_changed()
to expose that fact to the caller. However, I think
Sounds good to me. The list base objects don't have methods to make changes
to the list - so it would be a case of iterating looking at each object in
the list. That would be ok.
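Iterating over each contained object, as suggested, can be folded into an override; a minimal sketch with made-up class names, not Nova's actual base classes:

```python
# Minimal sketch of surfacing sub-object modifications through the
# parent's obj_what_changed(); names here are illustrative only.
class FakeObject:
    def __init__(self):
        self._changed_fields = set()

    def obj_what_changed(self):
        return set(self._changed_fields)

class FakeParent(FakeObject):
    def __init__(self, child):
        super(FakeParent, self).__init__()
        self.child = child

    def obj_what_changed(self):
        changed = super(FakeParent, self).obj_what_changed()
        # report the 'child' field as dirty if anything inside it changed
        if self.child.obj_what_changed():
            changed.add('child')
        return changed

child = FakeObject()
parent = FakeParent(child)
child._changed_fields.add('name')
assert 'child' in parent.obj_what_changed()
```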
Hmm? You mean for NovaObjects that are lists? I hesitate to expose lists
as changed when one of the objects
This patch set makes the extra_resources a list of objects, instead of an
opaque JSON string. What do you think about that?
Sounds better to me, I'll go have a look.
However, the compute resource object is different from the current
NovaObject: a) it has no corresponding table, but just a field in
ObjectListBase has a field called objects that is typed
fields.ListOfObjectsField('NovaObject'). I can see methods for count
and index, and I guess you are talking about adding a method for "have
any of your contents changed". I don't see other list operations
(like append, insert, remove,
Hi Dan, are you going to cook a patch to expand the base class? Or we can do
Yeah, I'll try to get to that today.
Effective immediately, I would like to unfreeze nova-network
I fully support this plan, while also agreeing that Neutron is the
future of networking for OpenStack. As we have seen with recent
performance-related gate failures, we cannot continue to ignore
nova-network while the
I was thinking for the upgrade process that we could leverage the port
attach/detach BP done by Dan Smith a while ago. This has libvirt support
and there are patches pending approval for Xen and VMware. Not sure about
the other drivers.
If the guest can deal with the fact that the nova port
If necessary the tasks work could be done solely as an extension, but I
would really prefer to avoid that so I'll get this ball rolling quickly.
I agree that doing it as a bolt-on to v3 would be significantly less
favorable than making it an integrated feature of the API. IMHO, if a
I'm of the opinion that the scheduler should use objects, for all the
reasons that Nova uses objects, but that they should not be Nova
objects. Ultimately what the scheduler needs is a concept of capacity,
allocations, and locality of resources. But the way those are modeled
doesn't need to
Basically, if object A has object B as a child, and deserialization
finds object B to be an unrecognized version, it will try to back
port the object A to the version number of object B.
Right, which is why we rev the version of, say, the InstanceList when we
have to rev Instance itself, and
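The parent/child version coupling could be sketched like this (version numbers and names are illustrative, not Nova's actual mapping):

```python
# Sketch: when a consumer only understands Instance 1.0, the serializer
# backports the containing list to the parent version built against it.
CHILD_VERSIONS = {          # InstanceList version -> Instance version
    '1.0': '1.0',
    '1.1': '1.1',
}

def parent_version_for_child(child_version):
    # newest list version whose embedded Instance is old enough
    candidates = [pv for pv, cv in CHILD_VERSIONS.items()
                  if cv <= child_version]
    return max(candidates)

parent_version_for_child('1.0')  # -> '1.0'
```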
And the error messages, which look like this:
Returning exception Unexpected task state: expecting [u'scheduling',
None] but the actual state is deleting to caller
don't make sense -- at least in the English language.
It's missing some grouping operator to help with order of operations.
The inner exception is a thing and the outer pieces are a thing. The
inner means that some instance update was attempted, but should be
aborted if the instance state is not what we think it is.
And looking a little closer, I think it means that the
messaging.expected_exceptions decorator isn't
What's the underlying problem here? nova notifications aren't
versioned? Nova should try to support ceilometer's use case, so
it sounds like there may be a nova issue in here as well.
Oh you're far from it.
Long story short, the problem is that when an instance is destroyed,
we need to
We don't have to add a new notification, but we have to add some
new data to the nova notifications. At least for the delete
instance notification, to remove the ceilometer nova notifier.
A while ago, I registered a blueprint that explains which
data is missing in the current nova
I would also like to see CI (either third party or in the gate) for
the nova driver before merging it. There's a chicken and egg problem
here if its in the gate, but I'd like to see it at least proposed as a
Yeah, I think that the existing nova-baremetal driver is kinda frozen in
- fat model approach - put the db interaction in objects
If it's just DB interaction, then yes, in the object for sure.
- put the db interactions in the conductor itself
There is a reasonable separation between using conductor for mechanics
(i.e. API deferring a long-running activity to
- We want to make backwards incompatible changes to the API
and whether we do it in-place with V2 or by releasing V3
we'll have some form of dual API support burden.
IMHO, the cost of maintaining both APIs (which are largely duplicated)
for almost any amount of time outweighs the cost of
The API layer is actually quite a thin layer on top of the
rest of Nova. Most of the logic in the API code is really just
checking incoming data, calling the underlying nova logic and then
massaging what is returned in the correct format. So as soon as you
change the format the cost of
So the deprecation message in the patch says:
LOG.warning(_('XML support has been deprecated and will be
removed in the Juno release.'))
perhaps that should be changed :-)
Maybe, but I think we can continue with the plan to rip it out in Juno.
In the past when we've asked,
onSharedStorage = True
on_shared_storage = False
This is a good example. I'm not sure it's worth breaking users _or_
introducing a new microversion for something like this. This is
definitely what I would call a purity concern as opposed to usability.
Things like the twenty different datetime
I thought micro versioning was so we could make backwards compatible changes.
If we make breaking changes we need to support the old and the new for
a little while.
Adding a new field alongside an old one in a structure that we return is
not a breaking change to me, IMHO. We can clean up the
I think we need to find an alternative way to support the new and old
formats, like Accepts Headers, and retro-fitting a version to
extensions so we can easily advertise new attributes, to those parsers
that will break when they encounter those kinds of things.
Now I am tempted to
+1, seems we could explore for another cycle just to find out that
backporting everything to V2 isn't going to be what we want, and now
we've just wasted more time.
If we say it's just deprecated and frozen against new features, then
its maintenance is just limited to bug fixes, right?
Yeah, so objects is the big one here.
Objects, and everything else. With no-db-compute we did it for a couple
cycles, then objects, next it will be retooling flows to conductor, then
dealing with tasks, talking to gantt, etc. It's not going to end any
So what kind of reaction are
This would reduce the amount of duplication which is required (I doubt
we could remove all duplication though), and whether it's worth it for say
the rescue example is debatable. But for those cases you'd only need to make
the modification in one file.
Don't forget the cases where the call
So I was thinking about this and Ken'ichi has basically said pretty
much the same thing in his reply to this thread. I don't think it
makes client moves any easier though - this is all about lowering our
So, in the other fork of this thread, I think you said we can't
We may need to differentiate between breaking the API and breaking
In one case you force everyone in the ecosystem to
adapt (the libraries, the end user code). In the other you only
(potentially) affect those that were not following the API correctly.
Users actually care about the latter. If the API accepts 'red' as a
valid UUID then that is part of the implicit contract.
Yeah, but right now, many of those things would fail on postgres
and succeed on mysql (not uuids necessarily, but others). Since we
can't honor them in all cases, I
So if we make backwards incompatible changes we really need a major
version bump. Minor versions don't cut it, because the expectation is
you have API stability within a major version.
I disagree. If the client declares support for it, I think we can very
reasonably return new stuff.
So I think once we start returning different response codes, or
completely different structures (such as the tasks change will be), it
doesn't matter if we make the change in effect by invoking /v2 prefix
or /v3 prefix or we look for a header. Its a major api revision. I
don't think we should
Sure, but that's still functionally equivalent to using the /v2 prefix.
So we could chuck the current /v3 code and do:
/v2: Current thing
/v3: invalid, not supported
/v4: added simple task return for server create
/v5: added the event extension
/v6: added a new event for cinder to the
I do think client headers instead of urls have some pragmatic
approach here that is very attractive. Will definitely need a good
chunk of plumbing to support that in a sane way in the tree that
keeps the overhead from a review perspective low.
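The header-based approach could look roughly like this (the header name and version numbers are made up for illustration):

```python
# Hypothetical sketch of negotiating an API version from a request
# header instead of a URL prefix; the header name is illustrative.
def negotiate(headers, default=(2, 0), max_supported=(2, 1)):
    raw = headers.get('X-Compute-API-Version')
    if raw is None:
        return default
    requested = tuple(int(p) for p in raw.split('.'))
    # refuse versions newer than this deployment supports
    if requested > max_supported:
        raise ValueError('unsupported API version %s' % raw)
    return requested

negotiate({})                                # -> (2, 0)
negotiate({'X-Compute-API-Version': '2.1'})  # -> (2, 1)
```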
Aside from some helper functions to make this
So whilst we still have extensions (and that's a separate debate) we
need versioning on a per extension basis. Otherwise people are forced
to upgrade their extensions in lockstep with each other.
I think that some people would argue that requiring the extensions to go
together linearly is a
In a chat with Dan Smith on IRC, he was suggesting that the important
thing was not to use class paths in the config file. I can see that
internal implementation should not be exposed in the config files -
that way the implementation can change without impacting the nova
How about using 'unstable' as a component of the entrypoint group?
Well, this is a pretty heavy way to ensure that the admin gets the
picture, but maybe appropriate :)
What I don't think we want is the in-tree plugins having to hook into
something called unstable.
What I'd like to do next is work through a new proposal that includes
keeping both v2 and v3, but with a new added focus of minimizing the
cost. This should include a path away from the dual code bases and to
something like the v2.1 proposal.
I think that the most we can hope for is
Why accept it?
* It's low-risk but needed refactoring that will improve code that has
been a source of occasional bugs.
* It is very low risk internal refactoring that uses code that has been
in tree for some time now (BDM objects).
* It has seen its fair share of reviews
Yeah, this has
However, when attempting to boot an instance, the Nova network service
fails to retrieve network information from the controller. Adding the
database keys resolves the problem. I'm using
the 2014.1~b3-0ubuntu1~cloud0 packages on Ubuntu 12.04.
Can you file a bug with details from the logs?
Thanks. Note that the objects work doesn't really imply that the service
doesn't hit the database. In fact, nova-compute stopped hitting the
database before we started on the objects work.
Anyway, looks like there are still some direct-to-database
Hmm... I guess the blueprint summary led me to believe that nova-network
no longer needs to hit the database.
Yeah, using objects doesn't necessarily mean that the rest of the direct
database accesses go away. However, I quickly cooked up the rest of what
is required to get this done:
I'm confused as to why we arrived at the decision to revert the commits
since Jay's patch was accepted. I'd like some details about this
decision, and what new steps we need to take to get this back in for Juno.
Jay's fix resolved the immediate problem that was reported by the user.
Here is the latest marked fail -
looking at this a little bit, you can see from the n-cpu log that
it is getting failures when talking to neutron. Specifically,
Because of where we are in the freeze, I think this should wait
until Juno opens to fix. Icehouse will only be compatible with
SQLA 0.8, which I think is fine. I expect the rest of the issues
can be addressed during Juno 1.
Agreed. I think we
Just to answer this point, despite the review latency, please don't be
tempted to think one big change will get in quicker than a series of
little, easy to review, changes. All changes are not equal. A large
change often scares me away to easier-to-review patches.
Seems like, for Juno-1, it
If we managed to break Horizon, its likely we've broken (or will break)
other people's scripts or SDKs.
The patch was merged in October (just after Icehouse opened) and so has
been used in clouds that do CD for quite a while. After some discussion
on IRC I think we'll end up having to leave
Any ideas on what might be going on would be appreciated.
This looks like something that should be filed as a bug. I don't have
any ideas off hand, but I will note that the reconnection logic works
fine for us in the upstream upgrade tests. That scenario includes
starting up a full stack, then
Where can I obtain more information about this feature?
- From the blog post that I've yet to write :D
Does above imply that database is upgraded along with control
service update as well?
Yes, but only for the services that interact directly
I would like to run for the OpenStack Compute (Nova) PTL position.
I have been working almost exclusively on Nova since mid-2012, and have
been on the nova-core team since late 2012. I am also a member of
nova-drivers, where I help to target and
At run time there are decorators that behave in an unexpected manner.
For instance, in nova/compute/manager.py when ComputeManager's
resize_instance method is called, the migration positional argument
is somehow added to kwargs (paired with the migration key) and is
stripped out of the args
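The args-to-kwargs shuffle typically happens when a wrapper re-binds arguments by name before calling through; a self-contained illustration with a stub function, not the real manager code:

```python
import inspect

# Stub with the same shape as the method described above.
def resize_instance(self, context, instance, migration):
    return migration

# A decorator that binds arguments by name (e.g. to log or validate
# them) will see 'migration' keyed in a mapping even though the
# caller passed it positionally.
sig = inspect.signature(resize_instance)
bound = sig.bind(None, 'ctx', 'inst', 'mig')
assert bound.arguments['migration'] == 'mig'
```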
4. What is the max amount of time for us to report test results? Dan
didn't seem to think 48 hours would fly. :)
Honestly, I think that 12 hours during peak times is the upper limit of
what could be considered useful. If it's longer than that, many patches
could go into the tree without a
We could either have a single repo:
This would be my preference for sure, just from the standpoint of
additional release complexity otherwise. I know it might complicate how
the core team works, but presumably we could get away with just having
I think that all drivers that are officially supported must be
treated in the same way.
Well, we already have multiple classes of support due to the various
states of testing that the drivers have.
If we are going to split out drivers into a separate but still
official repository then we
My only request here is that we can make sure that new driver
features can land for other drivers without necessarily having them
implemented for libvirt/KVM first.
We've got lots of things supported by the XenAPI drivers that aren't
supported by libvirt, so I don't think this is a problem
If the idea is to gate with nova-extra-drivers this could lead to a
rather painful process to change the virt driver API. When all the
drivers are in the same tree all of them can be updated at the same
time as the infrastructure.
Right, and I think if we split those drivers out, then we do
From the user perspective, splitting off the projects seems to be
focussing on the ease of commit compared to the final user
I think what you describe is specifically the desire that originally
spawned the thread: making the merging of changes to the hyper-v driver
4) Periodically, code from the new project(s) must be merged into Nova.
Obviously, only Nova core reviewers will have +2a rights here.
I propose to do it on scheduled days before every milestone, differentiated
per driver to distribute the review effort (what about also having Nova core
The last thing that OpenStack needs ANY more help with is velocity. I
mean, let's be serious - we land WAY more patches in a day than is
even close to sane.
Thanks for saying this -- it doesn't get said enough. I find it totally
amazing that we're merging 34 changes in a day (yesterday) which
+1 - I think we really want to have a strong preference for a stable
api if we start separating parts out
So, as someone who is about to break the driver API all to hell over the
next six months (er, I mean, make some significant changes), I can tell
you that making it stable is the best way
This system is running tempest against a VMWare deployment and posting
the results publicly. This is really great progress. It will go a long
way in helping reviewers be more confident in changes to this driver.
This is huge progress, congrats and thanks to the VMware team for making
As I don't see how to keep it in the review, I'll copy to openstack-dev.
Just keep making your comments in Gerrit. That way all the discussion
related to a specific patch is preserved with proper linkage in case we
ever need to go back to it.
OK, I think I see what I need to do to not
In the last meeting we discussed an idea that I think is worth trying at
least for icehouse-1 to see if we like it or not. The idea is that
*every* blueprint starts out at a Low priority, which means best
effort, but no promises. For a blueprint to get prioritized higher, it
should have 2
n-conductor log in tempest/devstack -
Total log lines: 84076
Total non DEBUG lines: 61
Question: do we need more than 1 level of
If we had DEBUG and DEBUG2 levels, where one of them would only be seen
at the higher debug level, would that be useful?
I'm fine with not seeing those for devstack runs, yeah.
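A second debug level is easy to register with stdlib logging; the name and numeric value below are illustrative:

```python
import logging

# Sketch of a second, chattier debug level below DEBUG.
DEBUG2 = 5  # DEBUG is 10; a lower value means more verbose
logging.addLevelName(DEBUG2, 'DEBUG2')

log = logging.getLogger('nova.conductor')
log.setLevel(logging.DEBUG)     # devstack default: DEBUG2 stays hidden
assert not log.isEnabledFor(DEBUG2)

log.setLevel(DEBUG2)            # opt in to the extra verbosity
assert log.isEnabledFor(DEBUG2)
```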
1) I agree with Russell that Nova should not try to manage VMs it didn't
Me too. A lot.
This API and these models are what we are trying to avoid exposing to
the rest of nova. By wrapping these in our NovaObject-based
1) Using objects as an abstraction for versioned data:
This seems like a good direction overall, especially from the point-of-view
of backwards compatibility of consumers of the object. However, after
looking through some
of the objects defined in nova/objects/, I am not sure if I
I'd like us to avoid meaningless reviewer churn here:
I'd like us to avoid trivial style guideline churn :)
The case that made me raise this is this:
folder_exists, file_exists, file_size_in_kb, disk_extents = \
self._path_file_exists(ds_browser, folder_path, file_name)
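For reference, the same unpacking can be written with implicit continuation inside parentheses instead of the backslash; `_path_file_exists` is stubbed here so the snippet stands alone:

```python
# Stub standing in for the driver method from the snippet above.
def _path_file_exists(ds_browser, folder_path, file_name):
    return True, True, 512, []

# Parentheses give implicit line continuation; no backslash needed.
(folder_exists, file_exists,
 file_size_in_kb, disk_extents) = _path_file_exists(
    None, '[datastore] folder', 'disk.vmdk')
assert file_size_in_kb == 512
```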
You're right, it's not really achievable without moving to a schemaless
persistence model. I'm fairly certain it was added to be humorous and
should not be considered an outcome of that session.
But we can avoid most data migrations by adding any required
conversion code into the objects
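The kind of conversion hook being described could be sketched like this (field names and versions are illustrative, not Nova's actual object schema):

```python
# Illustrative sketch of downgrading a serialized object so an N-1
# service can read it: fields the old version never had are dropped.
FIELDS_BY_VERSION = {
    '1.0': {'uuid', 'host'},
    '1.1': {'uuid', 'host', 'node'},   # 'node' added in 1.1
}

def make_compatible(primitive, target_version):
    allowed = FIELDS_BY_VERSION[target_version]
    return {k: v for k, v in primitive.items() if k in allowed}

new = {'uuid': 'abc', 'host': 'h1', 'node': 'n1'}
make_compatible(new, '1.0')  # -> {'uuid': 'abc', 'host': 'h1'}
```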
I assume this backwards support also includes objects, and that N means a
major release like H or I, right?
Yes, I meant N=Icehouse, N-1=Havana, for example.
As you know, Nova adopted a plan to require CI testing for all our
in-tree hypervisors by the Icehouse release. At the summit last week, we
determined the actual plan for deprecating non-compliant drivers. I put
together a page detailing the specific requirements we're putting in
Thanks for weighing in, I do hope to keep the conversation going.
Add my name to the list of people that won't be considering the trip as
Another issue that we might want to try and solve with this (and imho
another reason to keep the framework for this in oslo) is that we might
want to make these update aware, and allow for some graceful degradation
or something similar when, for example, an updated service is started with
To be fair, we test only the subset that is set via devstack on the
stable side. That should be a common subset, but it is far from fully
comprehensive. Nova has over 600 config variables, so additional tooling
here would be goodness.
I'm surely not arguing against additional testing of
Sorry for the delay in responding to this...
* Moved the _obj_classes registry magic out of ObjectMetaClass and into
its own method for easier use. Since this is a subclass based
having a separate method feels more appropriate for a factory/registry
Not having been at the summit (maybe the next one), could somebody
give a really short explanation as to why it needs to be a separate
service? It sounds like it should fit within the Nova area. It is,
after all, just another hypervisor type, or so it seems.
But it's not just another
I think the plus of avoiding decorating things isn't really a huge
win, and actually i think takes clarity away.
Hence the (meh) in my list :)
This wasn't really a sticking point when we were getting reviews on the
original base infrastructure, so I'm surprised people are so vehement
Please respond with +1/-1, or any further comments.
+1 from me -- Matt has been helping a lot lately.
So just to clarify: the native driver for another hypervisor (bhyve)
would not be accepted into Nova even if it met testing coverage
criteria? As I said the libvirt route is an option we consider, but we
would like to have the possibility of a native FreeBSD api integration
as well, similar
My overall concern, and I think the other guys doing this for virt
drivers will agree, is trying to scope down the exposure to unrelated
But, if it's not related to your driver, then it also failed in the
upstream gate, right? If it didn't fail the upstream gate, then it is
I think it'd be OK to move them to the experimental queue and a periodic
nightly job until the v2.1 stuff shakes out. The v3 API is marked
experimental right now so it seems fitting that it'd be running tests in
the experimental queue until at least the spec is approved and
That's true, though I was suggesting that as v2.1 microversions rolls out
we drop the test out of v3 and move it to v2.1 microversions testing, so
there's no change in capacity required.
Right now we run a full set over /v2 and a full set over /v3. Certainly
as we introduce /v2.1 we'll need full
What if 3rd Party CI didn't vote in Gerrit? What if it instead
published to some 3rd party test reporting site (a thing that
doesn't yet exist). Gerrit has the facility so that we could inject
the dashboard content for this in Gerrit in a little
There is a similar old bug for that, with a good suggestion for how
it could possibly be done:
This isn't what I'm talking about. What we need is, for each new
patchset on a given change, an
I'm contemplating how to fix
https://bugs.launchpad.net/nova/+bug/1339823 and it seems that a part of
the fix would be to track the state of live migrations in the database,
more or less the same way that cold migrations are tracked. The
thinking is that the logic could retrieve information