i that gets turned into properly-related SQLA
> objects. I think we could do the same for any that we're currently
> cascading separately, even if the db/api update method uses a
> transaction to ensure safety.
As I mention above, the problem wit
i using an ORM other than sqlalchemy, so we should
probably ditch it and promote it to db/api.py.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
is to just conductor is
appropriate and useful. Compare and swap at the object level would be
a useful mechanism for safety across multiple rpc calls.
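The compare-and-swap idea above can be made concrete. A minimal sketch using Python's stdlib sqlite3 (the schema and task states here are invented for illustration, not Nova's actual model): the UPDATE carries the previously-read value in its WHERE clause, so it succeeds only if no other writer has intervened in the meantime.

```python
# Hypothetical sketch of object-level compare-and-swap, not Nova's actual
# implementation: the UPDATE only succeeds if the row still holds the value
# the caller last read, so concurrent writers can't silently clobber each other.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, task_state TEXT)")
conn.execute("INSERT INTO instances VALUES (1, 'scheduling')")

def compare_and_swap(conn, instance_id, expected, new):
    """Atomically move task_state from `expected` to `new`.

    Returns True on success, False if another writer got there first.
    """
    cur = conn.execute(
        "UPDATE instances SET task_state = ? WHERE id = ? AND task_state = ?",
        (new, instance_id, expected))
    conn.commit()
    return cur.rowcount == 1

print(compare_and_swap(conn, 1, "scheduling", "spawning"))  # True: state matched
print(compare_and_swap(conn, 1, "scheduling", "deleting"))  # False: state moved on
```

Across multiple rpc calls the caller would carry the last-read state with it, retrying or erroring out when the swap fails.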
Matt
traditional flush process to flush it, as regular flush is a lot more
> reliable, so I’d agree this method is awkward and should be fixed,
> but I’m not sure there’s a second SELECT there.
Indeed, looks like there's just a single select here. Aggregate does,
however, fetch twice. This
the
> database. We've changed very little of that access pattern.
>
> I think we should push back to Matt to provide a description of why
> he thinks that this is a problem.
I don't think it's a problem. It puts a practical limit on the scope
of an 'api call' whi
where at all.
_validate_instance_group_policy() in compute manager seems to be doing
something else.
Are these undead relics in need of a final stake through the heart, or
is something else going on here?
Thanks,
Matt
, and that the object cannot be referenced from any
other thread? Seems safer just to pass it around.
Matt
s what we’re going with).
Dan,
Note that this model, as I understand it, would conflict with storing
context in NovaObject.
Matt
tually show up, I can’t
> imagine what that would be looking for, unless maybe some large
> amount of operations took up a lot of time between the flush() and
> the refresh().
Given the above constraints, the problem I'm actually trying to solve is
when another process modifies an obj
On 12/11/14 23:23, Mike Bayer wrote:
>
>> On Nov 12, 2014, at 10:56 AM, Matthew Booth wrote:
>>
>> For brevity, I have conflated what happens in object.save() with what
>> happens in db.api. Where the code lives isn't relevant here: I'm only
>> looki
return query.first()
>
> which gets called from object save()
Yes, this is one example, another is Aggregate. I already had a big list
in the post and didn't want a second one.
Matt
nd fix all callers, but
I suspect that's more likely to bite us in the short term unless we're
confident we can identify all the critical callers.
I also suggest a tactical fix to any object which fetches itself twice
on update (e.g. Aggregate).
>
>> Additionally, Instance,
ected.
Additionally, Instance, InstanceGroup, and Flavor perform multiple
updates on save(). I would apply the same rule to the sub-updates, and
also move them into a single transaction such that the updates are atomic.
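A sketch of what folding the sub-updates into one transaction could look like, using stdlib sqlite3 with an invented two-table schema (not Nova's actual model): the connection's context manager commits both updates together, or rolls both back if either fails.

```python
# Illustrative sketch (hypothetical schema, not Nova's): grouping the
# sub-updates of a save() into one transaction so that either every row
# changes or none does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, vm_state TEXT);
    CREATE TABLE instance_extra (instance_id INTEGER, numa_topology TEXT);
    INSERT INTO instances VALUES (1, 'building');
    INSERT INTO instance_extra VALUES (1, NULL);
""")

def save_instance(conn, instance_id, vm_state, numa_topology):
    # "with conn" commits on success and rolls back on any exception,
    # so the two updates are atomic with respect to other readers.
    with conn:
        conn.execute("UPDATE instances SET vm_state = ? WHERE id = ?",
                     (vm_state, instance_id))
        conn.execute("UPDATE instance_extra SET numa_topology = ? "
                     "WHERE instance_id = ?", (numa_topology, instance_id))

save_instance(conn, 1, "active", "node0")
```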
Thanks,
Matt
they're not serving any useful purpose.
Matt
t it doesn't care how the function is implemented.
TL;DR
* Bootstrap hour is awesome
* Don't fake if you don't have to
* However, there are situations where it's a good choice
Thanks for reading :)
Matt
[1] There are other ways to skin this cat, but ultimately if you aren't
play in the sandpit before
mixing with the big boys? If a tortuously slow review process is a
primary cause of technical debt, will adding more steps to it improve
the situation? I hope the answer is obvious. And I'll be honest, I found
the suggestion more than a little patronising.
Matt
On 04/09/14 14:46, Daniel P. Berrange wrote:
> On Thu, Sep 04, 2014 at 02:09:26PM +0100, Matthew Booth wrote:
>> I'd like to request a FFE for the remaining changes from
>> vmware-spawn-refactor. They are:
>>
>> https://review.openstack.org/#/c/109754/
>> h
, and has been given +1
by VMware CI multiple times.
Matt
tt +2
Andrew Laski +2
John Garbutt +2 +A
These patches have been lightly touched to resolve merge conflicts with
the oslo.vmware integration, but no more. If people could take another
quick look I'd be very grateful.
Thanks,
Matt
On 30/08/14 03:45, Steve Gordon wrote:
> - Original Message -
>> From: "Matthew Booth"
>>
>> On 14/08/14 12:41, Steve Gordon wrote:
>>> - Original Message -
>>>> From: "Matthew Booth"
>>>> To: "Ope
eloper time.
* It isn't flexible enough for any conceivable future feature
Let's avoid premature generalisation. We can always generalise as part of
landing the future feature.
Any more of these?
Thanks,
Matt
ng against a 10-patch series:
git rebase -i -x './run_tests.sh -8' -x './run_tests.sh vmwareapi'
This will give you an interactive rebase (-x requires -i) on to .
After applying each patch it will run each of the 2 given commands. If
either fails it will pause. Afte
penstack.org/#/c/87546/
>>
On 14/08/14 12:41, Steve Gordon wrote:
> - Original Message -
>> From: "Matthew Booth"
>> To: "OpenStack Development Mailing List (not for usage questions)"
>>
>>
>> I've just spent the best part of a day tracking down why i
warning.
Does anybody have a canonical list of valid values?
Thanks,
Matt
On 08/08/14 11:04, Matthew Booth wrote:
> On 07/08/14 18:54, Kevin L. Mitchell wrote:
>> On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
>>>> In any case, the operative point is that CONF. must
>>> always be
>>>> evaluated inside run-time code, ne
On 07/08/14 19:02, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 17:41 +0100, Matthew Booth wrote:
>> ... or arg is an object which defines __nonzero__(), or defines
>> __getattr__() and then explodes because of the unexpected lookup of a
>> __nonzero__ attribute. Or it
On 07/08/14 18:54, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
>>> In any case, the operative point is that CONF. must
>> always be
>>> evaluated inside run-time code, never at module load time.
>>
>> ...unless you cal
On 07/08/14 17:39, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 17:27 +0100, Matthew Booth wrote:
>> On 07/08/14 16:27, Kevin L. Mitchell wrote:
>>> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
>>>> A (the?) solution is to register_opts() in foo bef
longer pass in the sentinel.
These are tricky, case-by-case workarounds to a general problem which
can be solved by simply calling register_opts() in a place where it's
guaranteed to be safe. Is there any reason not to call register_opts()
before importing other modules?
Matt
On 07/08/14 17:11, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 10:55 -0500, Matt Riedemann wrote:
>>
>> On 8/7/2014 10:27 AM, Kevin L. Mitchell wrote:
>>> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
>>>> A (the?) solution is to registe
On 07/08/14 16:27, Kevin L. Mitchell wrote:
> On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
>> A (the?) solution is to register_opts() in foo before importing any
>> modules which might also use oslo.config.
>
> Actually, I disagree. The real problem here
On 07/08/14 12:15, Matthew Booth wrote:
> I'm sure this is well known, but I recently encountered this problem for
> the second time.
>
> ---
> foo:
> import oslo.config as cfg
>
> import bar
>
> CONF = cfg.CONF
> CONF.register_opts('foo_opt
import oslo.config as cfg
CONF = cfg.CONF
CONF.import_opt('foo_opt', 'foo')
def bar_func(arg=CONF.foo_opt):
    pass
---
Even if it's old news it's worth a refresher because it was a bit of a
headscratcher.
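The underlying pitfall is that Python evaluates default-argument expressions once, at function definition time, which is why the CONF lookup fires before the option is registered. A self-contained illustration with a hypothetical stand-in for CONF (not the real oslo.config API) shows the sentinel workaround:

```python
class Conf:
    """Hypothetical stand-in for oslo's CONF object, for illustration only."""
    def __getattr__(self, name):
        # Mimics the failure mode: unknown options blow up on access.
        raise AttributeError("option %r is not registered" % name)

    def register(self, name, value):
        # Simulates register_opts(): after this, the attribute resolves.
        setattr(self, name, value)

CONF = Conf()

# def bar_func(arg=CONF.foo_opt):  # would raise here, at import time
#     pass

_unset = object()

def bar_func(arg=_unset):
    # Defer the CONF lookup to call time, after register_opts() has run.
    if arg is _unset:
        arg = CONF.foo_opt
    return arg

CONF.register("foo_opt", 42)
print(bar_func())  # 42
```

The sentinel also distinguishes "caller passed nothing" from "caller passed a falsy value", which a plain `arg=None` default cannot always do.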
Matt
ve results. As a project we need to
understand the importance of CI failures. We need a proper negotiation
with contributors to staff a team dedicated to the problem. We can then
use the review process to ensure that the right people have an incentive
to prioritise bug fixes.
Matt
ever, they've all been well-reviewed already, so should
hopefully be just a quick re-review.
Note to self: When rebasing, make a note of merge conflicts and add a
summary of required changes to a comment in gerrit.
Matt
What's going on here?
Thanks,
Matt
In an ideal world, every person seeing this would diligently check that
the fingerprint match was accurate before submitting a recheck request.
In the real world, how about we just do it automatically?
Matt
cing one type of deployment.
Does anybody recall the detail of why we wanted to remove this? There
was unease over use of instance's node field in the db, but I don't
recall why.
Matt
If you want to move away from that, though, I believe you could create a
generic rescue image which would be good for most/all Linux instances at
the very least. In fact, there are plenty of examples of generic rescue
images out there already.
Matt
On 11/07/14 12:36, Daniel P. Berrange wrote:
> On Fri, Jul 11, 2014 at 12:30:19PM +0100, John Garbutt wrote:
>> On 10 July 2014 16:52, Matthew Booth wrote:
>>> Currently we create a rescue instance by creating a new VM with the
>>> original instance's image, the
level,
anyway. I have a patch for that: https://review.openstack.org/#/c/106082/
242e/console.html
I got this too. Opened: https://bugs.launchpad.net/nova/+bug/1333232
Matt
I'm not giving duff advice.
Thanks,
Matt
aren't proposing any backwards incompatible changes at
the moment there is no current incentive to bring this forward.
Matt
On 19/06/14 13:22, Mark McLoughlin wrote:
> On Thu, 2014-06-19 at 09:34 +0100, Matthew Booth wrote:
>> On 19/06/14 08:32, Mark McLoughlin wrote:
>>> Hi Armando,
>>>
>>> On Tue, 2014-06-17 at 14:51 +0200, Armando M. wrote:
>>>> I wonder what the turn
> changes. Perhaps we could stop servicing these queues at certain
> points in the cycles, or reduce the rate at which they are
> serviced.
>
> - we could include specs and client patches in the same network so
> that they prioritized in the same way.
>
>
large cost,
and it doesn't have all the answers. The answer is not always more
review: there are other tools in the box. Imagine we spent 50% of the
time we spend on review writing tempest tests instead.
Matt
the biggest problem by far is just that
> we need more of the right people reviewing code.
Agreed, but a resource squeeze is often a good time to seek out
optimisations. A small improvement is still an improvement :)
Matt
[1] This series is very nice: https://review.openstack.org/#/c/98604
On 17/06/14 12:36, Sean Dague wrote:
> On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
>> On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
>>> We all know that review can be a bottleneck for Nova
>>> patches
accepted patches. It would be removed for abuse.
Is this practical? Would it help?
Matt
>>> * What do entities that try to acquire a lock do when they can't acquire
>>> it?
>>
>> Typically block, but if a use case emerged for trylock() it would be
>> simple to implement. For example, in the image side-loading case we may
>> decide that if it isn
tesEx
* Non-polling task waiting
It also gives us explicit session transactions, which are a requirement
for locking should that ever come to pass.
Please read and discuss. There are a couple of points in there on which
I'm actively soliciting input.
Matt
//wiki.openstack.org/wiki/StructuredWorkflowLocks
>
> Feel free to move that wiki if u find it useful (its sorta a high-level
> doc on the different strategies and such).
Nice list of implementation pros/cons.
Matt
>
> -Josh
>
> -Original Message-
> From: Matthew
On 13/06/14 05:27, Angus Lees wrote:
> On Thu, 12 Jun 2014 05:06:38 PM Julien Danjou wrote:
>> On Thu, Jun 12 2014, Matthew Booth wrote:
>>> This looks interesting. It doesn't have hooks for fencing, though.
>>>
>>> What's the status of tooz? Would
On 12/06/14 15:35, Julien Danjou wrote:
> On Thu, Jun 12 2014, Matthew Booth wrote:
>
>> We have a need for a distributed lock in the VMware driver, which
>> I suspect isn't unique. Specifically it is possible for a VMware
>
y in the driver?
Matt
[1] Cluster ~= hypervisor
it
wasn't present, but after scouring the api I'm not convinced it's possible.
the nova-compute
> in order to talk to
> hypervisor management API through REST API
It may not be directly relevant to this discussion, but I'm interested
to know what constraint prevents you running nova-compute on the hypervisor.
Matt
NUMA.
I don't think it makes sense for Nova to concern itself with migrating
VMs between hosts in a cluster. Putting a cluster into maintenance mode
would involve the whole cluster, but the vSphere administrator obviously
has other options.
The fact that we can't migrate a VM between 2
er
entry point in to this code, but it might be worth a quick look.
Incidentally, the tests seem to populate service_states in fake, so the
behaviour of the automated tests probably isn't reliable.
Matt
On 12/03/14 18:28, Matt Riedemann wrote:
>
>
> On 2/25/2014 6:36 AM, Matthew Booth wrote:
>> I'm new to Nova. After some frustration with the review process,
>> specifically in the VMware driver, I decided to try to visualise how the
>> review process is workin
every time you touch the code. A little up-front effort will
make a whole class of problems go away.
Matt
resource usage.
However, implicit comparison to None seems to be the default in Nova. Do
I give up mentioning this in reviews, or is this something we care about?
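For anyone unconvinced, a small self-contained example of why implicit comparison to None bites: truthiness conflates None with other falsy values such as an empty list. (The function names here are invented for illustration.)

```python
def first_or_default(items=None):
    if not items:          # BUG: treats [] and None the same
        items = ["default"]
    return items[0]

def first_or_default_fixed(items=None):
    if items is None:      # only substitutes when no argument was given
        items = ["default"]
    return items[0]

print(first_or_default([]))        # the caller's empty list is clobbered
try:
    first_or_default_fixed([])
except IndexError:
    print("empty list preserved")  # the caller's [] is respected
```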
Matt
who regularly looks at it is an
obvious fix. A less obvious fix might involve a process which allows
developers to work on a fork which is periodically merged, rather like
the kernel.
Matt
broken URLs?
Thanks,
Matt
anyway and it will require donkey work
in the driver to match up with it. If we wait until later it becomes a
whole additional task.
Matt
> Thanks
> Gary
>
>
> On 2/6/14 12:43 PM, "Matthew Booth" wrote:
>
>> There's currently an effort to create a common i
place for this kind of discussion.
Matt
[1] Note that the first argument to the current _wait_for_task() isn't
actually used.
[2] PEP8 recommends against this, btw.
On Thu, 2013-12-05 at 11:35 +0200, Roman Prykhodchenko wrote:
> Hi folks,
>
> The OpenStack community grows continuously, bringing more people and so new
> initiatives and new projects. This growing number of people, initiatives
> and projects is increasing the amount of discussion on our mailing