deadlocks is hard enough work. Adding the possibility that they
might not even be there is just evil.
Incidentally, we're currently looking to replace this stuff with some
new code in oslo.db, which is why I'm looking at it.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 30/01/15 19:06, Mike Bayer wrote:
Matthew Booth mbo...@redhat.com wrote:
At some point in the near future, hopefully early in L, we're intending
to update Nova to use the new database transaction management in
oslo.db's enginefacade.
Spec:
http://git.openstack.org/cgit/openstack
comments on the usefulness of slave databases, and the
desirability of making maximum use of them?
Thanks,
Matt
_TransactionContextManager, and moving code directly into
RequestContext would be a very invasive coupling.
Matt
ensure
the same logic applies when we save the info cache directly? It's
certainly achievable, but it's just adding to the mess. My proposal is
safe, efficient, and simple.
Matt
other than sqlalchemy, so we should
probably ditch it and promote it to db/api.py.
transaction on the remote end. I think we agree
on this.
Matt
have
'seen' this pattern in more places than it actually exists.
Matt
at the object level would be
a useful mechanism for safety across multiple rpc calls.
Matt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org
On 12/11/14 23:23, Mike Bayer wrote:
On Nov 12, 2014, at 10:56 AM, Matthew Booth mbo...@redhat.com wrote:
For brevity, I have conflated what happens in object.save() with what
happens in db.api. Where the code lives isn't relevant here: I'm only
looking at what happens.
Specifically
us between multiple,
remote transactions. This is one of the motivations for compare-and-swap
over row locking on read. Another is that the length of some API calls
makes holding a row lock for that long undesirable.
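The compare-and-swap approach can be sketched with sqlite3 standing in for Nova's real database layer (the table, columns, and state values here are illustrative, not Nova's actual schema):

```python
# Compare-and-swap sketch: instead of holding a row lock for the length of
# a long API call, read the row, then update it only if it still holds the
# value we last read, and check the affected row count.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (id INTEGER PRIMARY KEY, '
             'vm_state TEXT, task_state TEXT)')
conn.execute("INSERT INTO instances VALUES (1, 'active', NULL)")

def cas_update(conn, instance_id, expected_vm_state, new_task_state):
    """Set task_state only if vm_state is still what we last read."""
    cur = conn.execute(
        'UPDATE instances SET task_state = ? '
        'WHERE id = ? AND vm_state = ?',
        (new_task_state, instance_id, expected_vm_state))
    return cur.rowcount == 1   # 0 rows updated: someone changed it under us

print(cas_update(conn, 1, 'active', 'rebooting'))   # swap succeeds
print(cas_update(conn, 1, 'stopped', 'rebooting'))  # state moved on: fails
```

The failure case returns False rather than blocking, which is what makes this workable across multiple remote transactions.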
Matt
going with).
Dan,
Note that this model, as I understand it, would conflict with storing
context in NovaObject.
Matt
by an object, long after the context
has been resolved, the call has been remoted, etc.
Can we guarantee that the lifetime of a context object in conductor is
a single rpc call, and that the object cannot be referenced from any
other thread? Seems safer just to pass it around.
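The two models under discussion can be contrasted in plain Python (the names here are made up for illustration, not Nova's actual API):

```python
# Stashing a context in thread-local storage vs. passing it explicitly.
# With thread-local storage the context's lifetime is implicit and tied to
# the thread; with explicit passing it is visible at every call site.
import threading

_local = threading.local()

def handle_rpc_threadlocal(context):
    _local.context = context        # implicit: anything on this thread sees it
    return do_work_threadlocal()

def do_work_threadlocal():
    return _local.context['request_id']

def handle_rpc_explicit(context):
    return do_work_explicit(context)   # explicit: lifetime is obvious

def do_work_explicit(context):
    return context['request_id']

ctx = {'request_id': 'req-1'}
print(handle_rpc_threadlocal(ctx))
print(handle_rpc_explicit(ctx))
```

Both return the same answer here, but the thread-local version silently breaks if the object outlives the rpc call or is touched from another thread.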
Matt
at all.
_validate_instance_group_policy() in compute manager seems to be doing
something else.
Are these undead relics in need of a final stake through the heart, or
is something else going on here?
Thanks,
Matt
, Instance, InstanceGroup, and Flavor perform multiple
updates on save(). I would apply the same rule to the sub-updates, and
also move them into a single transaction such that the updates are atomic.
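Folding the sub-updates into a single transaction can be sketched like this (sqlite3 as a stand-in; the tables and columns are illustrative, not Nova's schema):

```python
# Atomic multi-row save: either both the instance row and its info-cache
# row change, or neither does.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE instances (id INTEGER PRIMARY KEY, display_name TEXT);
    CREATE TABLE info_caches (instance_id INTEGER, network_info TEXT);
    INSERT INTO instances VALUES (1, 'old-name');
    INSERT INTO info_caches VALUES (1, '[]');
''')

def save_instance(conn, instance_id, name, network_info):
    # One transaction: commits on success, rolls back if any update raises.
    with conn:
        conn.execute('UPDATE instances SET display_name = ? WHERE id = ?',
                     (name, instance_id))
        conn.execute('UPDATE info_caches SET network_info = ? '
                     'WHERE instance_id = ?', (network_info, instance_id))

save_instance(conn, 1, 'new-name', '[{"net": "private"}]')
print(conn.execute('SELECT display_name FROM instances').fetchone()[0])
```

A failure between the two updates can then never leave the instance and its info cache disagreeing.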
Thanks,
Matt
these non-atomic updates, of course
:)
Thanks,
Matt
purpose.
Matt
to
* However, there are situations where it's a good choice
Thanks for reading :)
Matt
[1] There are other ways to skin this cat, but ultimately if you aren't
actually spinning up a vSphere server, you're modelling it somehow.
is a
primary cause of technical debt, will adding more steps to it improve
the situation? I hope the answer is obvious. And I'll be honest, I found
the suggestion more than a little patronising.
Matt
, and has been given +1
by VMware CI multiple times.
Matt
On 04/09/14 14:46, Daniel P. Berrange wrote:
On Thu, Sep 04, 2014 at 02:09:26PM +0100, Matthew Booth wrote:
I'd like to request a FFE for the remaining changes from
vmware-spawn-refactor. They are:
https://review.openstack.org/#/c/109754/
https://review.openstack.org/#/c/109755/
https
On 30/08/14 03:45, Steve Gordon wrote:
- Original Message -
From: Matthew Booth mbo...@redhat.com
On 14/08/14 12:41, Steve Gordon wrote:
- Original Message -
From: Matthew Booth mbo...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev
future feature
Lets avoid premature generalisation. We can always generalise as part of
landing the future feature.
Any more of these?
Thanks,
Matt
-i) on to base.
After applying each patch it will run each of the 2 given commands. If
either fails it will pause. After resolving any issues you can continue
with 'git rebase --continue'.
Matt
a canonical list of valid values?
Thanks,
Matt
On 07/08/14 18:54, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
In any case, the operative point is that CONF.attribute must
always be
evaluated inside run-time code, never at module load time.
...unless you call register_opts() safely, which is what I'm
On 07/08/14 19:02, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 17:41 +0100, Matthew Booth wrote:
... or arg is an object which defines __nonzero__(), or defines
__getattr__() and then explodes because of the unexpected lookup of a
__nonzero__ attribute. Or it's False (no quotes when printed
On 08/08/14 11:04, Matthew Booth wrote:
On 07/08/14 18:54, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 17:46 +0100, Matthew Booth wrote:
In any case, the operative point is that CONF.attribute must
always be
evaluated inside run-time code, never at module load time.
...unless you call
CONF = cfg.CONF
CONF.import_opt('foo_opt', 'foo')
def bar_func(arg=CONF.foo_opt):
    pass
---
Even if it's old news it's worth a refresher because it was a bit of a
headscratcher.
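The underlying Python gotcha can be shown without oslo.config at all (the Conf class below is a stand-in, not the real library): default argument values are evaluated exactly once, when the function is defined, i.e. at module load time.

```python
# A default argument captures whatever CONF held when the module was
# imported, not its value at call time.
class Conf:
    foo_opt = 'unregistered'

CONF = Conf()

def bar_func(arg=CONF.foo_opt):    # evaluated at definition time
    return arg

CONF.foo_opt = 'registered'        # too late: the default is already bound

print(bar_func())

def bar_func_safe(arg=None):       # evaluate inside run-time code instead
    if arg is None:
        arg = CONF.foo_opt
    return arg

print(bar_func_safe())
```

The first call returns the stale 'unregistered' value; the sentinel-default version picks up 'registered' because the lookup happens at call time.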
Matt
On 07/08/14 12:15, Matthew Booth wrote:
I'm sure this is well known, but I recently encountered this problem for
the second time.
---
foo:
import oslo.config as cfg
import bar
CONF = cfg.CONF
CONF.register_opts([cfg.StrOpt('foo_opt')])
---
bar:
import oslo.config as cfg
CONF = cfg.CONF
On 07/08/14 16:27, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
A (the?) solution is to register_opts() in foo before importing any
modules which might also use oslo.config.
Actually, I disagree. The real problem here is the definition of
bar_func
On 07/08/14 17:11, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 10:55 -0500, Matt Riedemann wrote:
On 8/7/2014 10:27 AM, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
A (the?) solution is to register_opts() in foo before importing any
modules which might
.
These are tricky, case-by-case workarounds to a general problem which
can be solved by simply calling register_opts() in a place where it's
guaranteed to be safe. Is there any reason not to call register_opts()
before importing other modules?
Matt
On 07/08/14 17:39, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 17:27 +0100, Matthew Booth wrote:
On 07/08/14 16:27, Kevin L. Mitchell wrote:
On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:
A (the?) solution is to register_opts() in foo before importing any
modules which might also
. As a project we need to
understand the importance of CI failures. We need a proper negotiation
with contributors to staff a team dedicated to the problem. We can then
use the review process to ensure that the right people have an incentive
to prioritise bug fixes.
Matt
should
hopefully be just a quick re-review.
Note to self: When rebasing, make a note of merge conflicts and add a
summary of required changes to a comment in gerrit.
Matt
, every person seeing this would diligently check that
the fingerprint match was accurate before submitting a recheck request.
In the real world, how about we just do it automatically?
Matt
going on here?
Thanks,
Matt
On 11/07/14 12:36, Daniel P. Berrange wrote:
On Fri, Jul 11, 2014 at 12:30:19PM +0100, John Garbutt wrote:
On 10 July 2014 16:52, Matthew Booth mbo...@redhat.com wrote:
Currently we create a rescue instance by creating a new VM with the
original instance's image, then adding the original
from that, though, I believe you could create a
generic rescue image which would be good for most/all Linux instances at
the very least. In fact, there are plenty of examples of generic rescue
images out there already.
Matt
://review.openstack.org/#/c/106082/
: https://bugs.launchpad.net/nova/+bug/1333232
Matt
proposing any backwards incompatible changes at
the moment, there is no current incentive to bring this forward.
Matt
advice.
Thanks,
Matt
interest :)
This is all good stuff, but by the sounds of it experimenting in gerrit
isn't likely to be simple.
Remember, though, that the relevant metric is code quality, not review rate.
Matt
On 19/06/14 13:22, Mark McLoughlin wrote:
On Thu, 2014-06-19 at 09:34 +0100, Matthew Booth wrote:
On 19/06/14 08:32, Mark McLoughlin wrote:
Hi Armando,
On Tue, 2014-06-17 at 14:51 +0200, Armando M. wrote:
I wonder what the turnaround of trivial patches actually is, I bet you
it's very very
[1] This series is very nice: https://review.openstack.org/#/c/98604/
[2] In fact, I'm aware of a significant amount of cleanup which hasn't
happened because of this.
. Review has significant benefits, but also a large cost,
and it doesn't have all the answers. The answer is not always more
review: there are other tools in the box. Imagine we spent 50% of the
time we spend on review writing tempest tests instead.
Matt
it should be able to have a fencing race with the
possible lock holder before continuing. This is obviously undesirable,
as you will probably be fencing an otherwise correctly functioning node,
but it will be correct.
Matt
-Original Message-
From: Matthew Booth mbo...@redhat.com
. It would be removed for abuse.
Is this practical? Would it help?
Matt
On 17/06/14 12:36, Sean Dague wrote:
On 06/17/2014 07:23 AM, Daniel P. Berrange wrote:
On Tue, Jun 17, 2014 at 11:04:17AM +0100, Matthew Booth wrote:
We all know that review can be a bottleneck for Nova
patches. Not only that, but a patch lingering
on the different strategies and such).
Nice list of implementation pros/cons.
Matt
-Josh
-Original Message-
From: Matthew Booth mbo...@redhat.com
Organization: Red Hat
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date
~= hypervisor
On 12/06/14 15:35, Julien Danjou wrote:
On Thu, Jun 12 2014, Matthew Booth wrote:
We have a need for a distributed lock in the VMware driver, which
I suspect isn't unique. Specifically it is possible for a VMware
datastore to be accessed via
not convinced it's possible.
obviously
has other options.
The fact that we can't migrate a VM between 2 clusters is clearly a bug.
Whether a single Nova should manage multiple clusters is an open
question, but it should be able to treat them as multiple targets. It's
not fundamentally wrong, though.
Matt
, but it might be worth a quick look.
Incidentally, the tests seem to populate service_states in fake, so the
behaviour of the automated tests probably isn't reliable.
Matt
On 12/03/14 18:28, Matt Riedemann wrote:
On 2/25/2014 6:36 AM, Matthew Booth wrote:
I'm new to Nova. After some frustration with the review process,
specifically in the VMware driver, I decided to try to visualise how the
review process is working across Nova. To that end, I've created 2
will
make a whole class of problems go away.
Matt
fix. A less obvious fix might involve a process which allows
developers to work on a fork which is periodically merged, rather like
the kernel.
Matt
URLs?
Thanks,
Matt
this, btw.
additional task.
Matt
Thanks
Gary
On 2/6/14 12:43 PM, Matthew Booth mbo...@redhat.com wrote:
There's currently an effort to create a common internal API to the
vSphere/ESX API:
https://blueprints.launchpad.net/oslo/+spec/vmware-api
On Thu, 2013-12-05 at 11:35 +0200, Roman Prykhodchenko wrote:
Hi folks,
The OpenStack community grows continuously, bringing more people and with
them new initiatives and new projects. This growing number of people,
initiatives and projects increases the amount of discussion on our mailing