small change in
drivers to support it. Technically I don't see it as an issue.
However, is it a change we'd be willing to accept? Is there any good
reason not to do this? Are there any less esoteric workflows which
might use this feature?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute
On Wed, 22 Aug 2018 at 10:47, Gorka Eguileor wrote:
>
> On 20/08, Matthew Booth wrote:
> > For those who aren't familiar with it, nova's volume-update (also
> > called swap volume by nova devs) is the nova part of the
> > implementation of cinder's live migration (also ca
olume-update directly for all use-cases we're aware of, I'm going to
propose heading off this class of bug by disabling it for non-cinder
callers.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +4420
On Mon, 13 Aug 2018 at 16:56, Chris Friesen wrote:
>
> On 08/13/2018 08:26 AM, Jay Pipes wrote:
> > On 08/13/2018 10:10 AM, Matthew Booth wrote:
>
> >> I suspect I've misunderstood, but I was arguing this is an anti-goal.
> >> There's no reason to do th
ou fix the
bug. The regression would be anything user-facing which queries by
metadata key. What does that?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
On Mon, 13 Aug 2018 at 15:27, Jay Pipes wrote:
>
> On 08/13/2018 10:10 AM, Matthew Booth wrote:
> > On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote:
> >>
> >> On 08/13/2018 06:06 AM, Matthew Booth wrote:
> >>> Thanks mriedem for answering my prev
On Mon, 13 Aug 2018 at 14:05, Jay Pipes wrote:
>
> On 08/13/2018 06:06 AM, Matthew Booth wrote:
> > Thanks mriedem for answering my previous question, and also pointing
> > out the related previous spec around just forcing all metadata to be
> > lowercase:
> >
?
Or should we ask Rajesh to expand his patch into a series covering
other metadata?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
. Can anybody tell me if any of those
jobs ran the included functional test against a MySQL DB?
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
On 6 June 2018 at 13:55, Jay Pipes wrote:
> On 06/06/2018 07:46 AM, Matthew Booth wrote:
>>
>> TL;DR I think we need to entirely disable swap volume for multiattach
>> volumes, and this will be an api breaking change with no immediate
>> workaround.
>>
>>
then be possible if inconvenient.
Regardless of any other changes, though, I think it's urgent that we
disable the ability to swap_volume a multiattach volume because we
don't want users to start using this relatively new, but broken,
feature.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
On 19 April 2018 at 16:46, Chris Friesen <chris.frie...@windriver.com> wrote:
> On 04/19/2018 08:33 AM, Jay Pipes wrote:
>>
>> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>>>
>>> We've had inconsistent naming of recreate/evacuate in Nova for a long
On 19 April 2018 at 15:33, Jay Pipes <jaypi...@gmail.com> wrote:
> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>>
>> We've had inconsistent naming of recreate/evacuate in Nova for a long
>> time, and it will persist in a couple of places for a while more.
>> How
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
configdrive properly.
I was going to ask this. Even if the contents of the disk can't be
transferred in advance... how does ironic do this? There must be a
way.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
ut I for one would oppose this
> alternative.
>
> Cheers,
> Gorka.
>
e server is rebuilt, and the volume is not
deleted. The user will still lose their data, of course, but that's implied
by the rebuild they explicitly requested. The volume id will remain the
same.
[1] I suspect this would require new functionality in cinder to
re-initialize from image.
Matt
--
Matthe
a schedules to host Z
* Nova host Z asks cyborg for a local function Y and blocks
* Cyborg hopefully returns function Y which is already available
* If not, Cyborg reprograms a function Y, then returns it
Can anybody correct me/fill in the gaps?
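For my own benefit, here's the flow above as a purely illustrative sketch; every name in it is hypothetical and doesn't correspond to any real Cyborg or Nova API:

# Purely illustrative pseudocode of the flow described above. Every name
# here is hypothetical; none of it corresponds to a real Cyborg/Nova API.
def provision_function_on_host(cyborg, host, function_y):
    # Nova has scheduled to host Z and now blocks while it asks Cyborg
    # for a local instance of function Y.
    available = cyborg.get_local_function(host, function_y)
    if available is not None:
        # Cyborg already has function Y programmed and simply returns it.
        return available
    # Otherwise Cyborg reprograms a device with function Y, then returns it.
    return cyborg.reprogram_function(host, function_y)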
Matt
--
Matthew Booth
Red Hat OpenStack Eng
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
On 31 January 2018 at 16:32, Matt Riedemann <mriede...@gmail.com> wrote:
> On 1/31/2018 7:30 AM, Matthew Booth wrote:
>
>> Could I please have some eyes on this bugfix:
>> https://review.openstack.org/#/c/462521/ . I addressed an issue raised
>> in August 2017, and
Could I please have some eyes on this bugfix:
https://review.openstack.org/#/c/462521/ . I addressed an issue raised in
August 2017, and it's had no negative feedback since. It would be good to
get this one finished.
Thanks,
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone
s to
be correctly installed on your compute hosts.
But to reiterate, ideally your rescue image would support cloud-init and
you would use a config disk.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
ve luks-encrypted volume.
>
In the context of the above, I don't think this is a priority as clearly
nobody is currently doing it. There's already a bug to track the problem in
libvirt, which is linked in a code comment. Admittedly that BZ is
unnecessarily private, which I noted in review,
On 9 January 2018 at 15:28, Matthew Booth <mbo...@redhat.com> wrote:
> In summary, the patch series is here:
>
> https://review.openstack.org/#/q/status:open+project:opensta
> ck/nova+branch:master+topic:bp/local-disk-serial-numbers
>
> The bottom 3 patches, which
independent of libvirt config
https://review.openstack.org/#/c/530786/
Don't generate fake disk_info in swap_volume
https://review.openstack.org/#/c/530787/
Local disk serial numbers for the libvirt driver
https://review.openstack.org/#/c/529380/
Thanks,
Matt
--
Matthew Booth
Red Hat OpenStack
disk_info dict to libvirt_info
https://review.openstack.org/529380 Local disk serial numbers for the
libvirt driver
Here we finally make the libvirt driver-specific changes to expose BDM uuid
as a serial number for local disks.
Matt
--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone
looks very complex for both developers and
operators, and very fragile. I think we'd be better off going with a relatively
simple solution like this one first, and only going a couple of orders of
magnitude more complex if it turns out to be absolutely essential.
Matt
--
Matthew Booth
Red Hat Engineering
the migration was
complete, and we'd at least have an opportunity to do something explicit
with migrations in an error state.
In the meantime I'm going to look for more backportable avenues to fix
this. Perhaps not updating instance.host until after finish_migration.
Matt
--
Matthew Booth
Red Hat
as fixing up tests which assumed
sub-second timestamp granularity which MySQL did not support at the time
(but may now).
IIRC the series died because we killed the fixture I was using in oslo.db
without replacement before my series finished landing. Fundamentally it wasn't
that hard, though.
Matt
through some of the above and add a second +2
I'd be very grateful. There's plenty more in the queue after those!
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
y this one with
limited risk and initial up-front effort.
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
, and how large they should be. Where to put them is down to the
driver. If we're modelling this outside the driver and at least 2 drivers
are implementing it, I wonder if we shouldn't be implementing storage
policy at a higher level than the driver.
Thoughts?
Matt
--
Matthew Booth
Red Hat
or probably at least a couple of years.
The hypervisor is a (the?) critical component of any cloud deployment.
Objectively, it's bizarre that we expect people to deploy our brand new
code to work round things that were fixed in the hypervisor 2 years ago.
Give
On Fri, Sep 30, 2016 at 4:38 PM, Murray, Paul (HP Cloud) <pmur...@hpe.com>
wrote:
> On 27/09/2016, 18:12, "Daniel P. Berrange" <berra...@redhat.com> wrote:
>
> >On Tue, Sep 27, 2016 at 10:40:34AM -0600, Chris Friesen wrote:
> >>
mplemented/supported?
>
> -Viktor
>
response was understandably conservative. I think this solves more problems
than it creates, though, and it would result in Nova's libvirt driver
getting a bit smaller and a bit simpler. That's a big win in my book.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448
a maintainer only requires a single
+2 from a core.
We could implement this incrementally by defining a couple of pilot
subsystem maintainer domains.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK
198298/
>
> [2] http://paste.openstack.org/show/568983/
>
mashed all the links I could find seemingly related to gerrit
settings and I couldn't find anything which looked promising.
Thanks again,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
way to achieve what I'm looking
for which doesn't involve maintaining my own bot list? If not, would it be
feasible to add something?
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK
[2] Flat, Qcow2, Lvm, Rbd, Ploop
[3] For recent examples see stable libvirt rescue, and device tagging.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
rg/#/c/323761/2/nova/virt/libvirt/driver.py@4190
>
> --
> Regards, Markus Zoeller (markus_z)
>
> > It contains:
> >
> > {
> > 'disk_bus': the default bus used by disks
> > 'cdrom_bus': the default bus used by cdrom drives
> > 'mapping': defined below
> > }
> >
> > 'mapping' is a dict which maps disk names to a dict describ
://review.openstack.org/329927 Remove virt.block_device._NoLegacy
exception
https://review.openstack.org/329928 Remove unused context argument to
_default_block_device_names()
https://review.openstack.org/329930 Rename compute manager _check_dev_name
to _add_missing_dev_names
--
Matthew Booth
Red Hat Engineering
> effects would be that the resources of the migrating instance would be
> "lost", allowing a newly-scheduled instance to claim the same resources
> (PCI devices, pinned CPUs, etc.)
>
> Chris
>
are right yet, but I do
think this is the way to go. I also think we need to entirely divorce this
functionality from the image cache.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
On Tue, May 24, 2016 at 11:06 AM, John Garbutt <j...@johngarbutt.com> wrote:
> On 24 May 2016 at 10:16, Matthew Booth <mbo...@redhat.com> wrote:
> > During its periodic task, ImageCacheManager does a checksum of every
> image
> > in the cache. It verifies this checks
, there also seems to be a bug in this implementation, in
that it doesn't hold the lock on the image itself at any point during the
hashing process, meaning that it cannot guarantee that the image has
finished downloading yet.
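To illustrate what I mean, a minimal sketch of hashing while holding a per-image lock; the lock object here is hypothetical and isn't something ImageCacheManager provides today:

import hashlib

def checksum_image(image_path, image_lock):
    # Hash the image while holding its (hypothetical) lock, so we can't
    # race against an in-progress download of the same image.
    sha = hashlib.sha256()
    with image_lock:
        with open(image_path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                sha.update(chunk)
    return sha.hexdigest()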
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone
these backend by backend. I'll provide a weekly
progress update in the live migration meeting.
TL;DR Core reviewers: please review the first 5 patches listed above. There
will be cake.
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK
On Fri, Apr 29, 2016 at 2:47 AM, Eli Qiao <liyong.q...@intel.com> wrote:
> hi team,
>
> Is there any requirement that all compute nodes' instance_dir should be the same?
>
Yes. This is assumed in many places, certainly in cold migration/resize.
Matt
--
Matthew Booth
Red Hat Engineeri
to spend too long on the spec. The only thing worthy of
discussion is the image cache, I guess.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
On Tue, Jan 19, 2016 at 8:47 PM, Fox, Kevin M wrote:
> One feature I think we would like to see that could benefit from LVM is
> some kind of multidisk support with better fault tolerance
>
> For example:
> Say you have a node, and there are 20 VMs on it, and that's all
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
on master and
liberty after some delays in the gate. Given the importance of the fix I
suspect that most/all downstream distributions will have already patched
(certainly Red Hat has), but it would be good to have them in upstream
stable.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
storage-related features to the libvirt driver.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
I wrote this a while back, which implements 'migrate everything off this
compute host' in the most robust manner I could come up with using only the
external api:
https://gist.github.com/mdbooth/163f5fdf47ab45d7addd
It obviously overlaps considerably with host-servers-migrate, which is
supposed
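As a very rough outline of the approach (not the gist itself, and with all of the polling and error handling that makes the real script robust omitted), it amounts to something like this with python-novaclient, assuming an already-authenticated client:

# Rough sketch only: 'nova' is assumed to be an authenticated
# python-novaclient Client; all robustness/error handling is omitted.
def drain_host(nova, host):
    servers = nova.servers.list(search_opts={'host': host, 'all_tenants': 1})
    for server in servers:
        if server.status == 'ACTIVE':
            # Let the scheduler pick a destination for running instances.
            server.live_migrate()
        else:
            # Fall back to cold migration for anything not running.
            server.migrate()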
On Tue, Nov 10, 2015 at 6:46 PM, Joshua Harlow <harlo...@fastmail.com>
wrote:
> Matthew Booth wrote:
>
>> My patch to MessageHandlingServer is currently being reverted because it
>> broke Nova tests:
>>
>> https://review.openstack.org/#/c/235347/
>>
My patch to MessageHandlingServer is currently being reverted because it
broke Nova tests:
https://review.openstack.org/#/c/235347/
Specifically it causes a number of tests to take a very long time to
execute, which ultimately results in the total build time limit being
exceeded. This is very
Accidentally sent this privately.
-- Forwarded message --
From: Matthew Booth <mbo...@redhat.com>
Date: Fri, Oct 9, 2015 at 6:14 PM
Subject: Re: [openstack-dev] [nova][mistral] Automatic evacuation as a long
running task
To: "Deja, Dawid" <dawid.d...@intel.c
On Fri, Sep 25, 2015 at 3:44 PM, Ihar Hrachyshka
wrote:
> Hi all,
>
> releases are approaching, so it’s the right time to start some bike
> shedding on the mailing list.
>
> Recently I got pointed out several times [1][2] that I violate our commit
> message requirement [3]
Hi, Roman,
Evacuated has been on my radar for a while and this post has prodded me to
take a look at the code. I think it's worth starting by explaining the
problems in the current solution. Nova client is currently responsible for
doing this evacuate. It does:
1. List all instances on the
contract around more fine-grained error reporting?
Thanks,
Matt
[1] Incidentally, this suggests to me that live migrate should just do
this anyway.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441
On 11/09/15 12:19, Sean Dague wrote:
> On 09/11/2015 05:41 AM, Matthew Booth wrote:
>> I've recently been writing a tool which uses Nova's external API. This
>> is my first time consuming this API, so it has involved a certain amount
>> of discovery. The tool is here for the c
I expect there are several existing solutions to this problem, but
here's mine (attached).
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
log_merge.sh
I wrote this:
https://review.openstack.org/#/c/195983/1/tools/de-pbr.py,cm
Ideally we'd fix PBR, but this seems to be expected behaviour. Thoughts?
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600
and commit_top arguments to virt_dom.blockCommit() are unvalidated.
Does python have anything like perl's taint mode? If so, it might be
worth investigating its use.
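For what it's worth, the closest idiom I know of in Python is explicit whitelisting before the privileged call; a trivial, purely illustrative sketch (not the actual driver code):

# Purely illustrative: Python has no taint mode, so the usual substitute
# is to whitelist untrusted values before they reach a privileged call.
def validate_commit_path(path, known_disk_paths):
    # Accept 'path' only if it names a disk we already know about.
    if path not in known_disk_paths:
        raise ValueError("refusing to operate on unknown disk: %r" % path)
    return path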
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A
on how to improve core reviewer
throughput in the next cycle?
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
://review.openstack.org/#/c/159481/
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 25/02/15 20:18, Joe Gordon wrote:
On Fri, Feb 20, 2015 at 3:48 AM, Matthew Booth mbo...@redhat.com wrote:
Gary Kotton came across a doozy of a bug recently:
https://bugs.launchpad.net/nova/+bug/1419785
In short, when you start a Nova compute
On 25/02/15 11:51, Radoslav Gerganov wrote:
On 02/23/2015 03:18 PM, Matthew Booth wrote:
On 23/02/15 12:13, Gary Kotton wrote:
On 2/23/15, 2:05 PM, Matthew Booth mbo...@redhat.com wrote:
On 20/02/15 11:48, Matthew Booth wrote:
Gary Kotton came across a doozy of a bug recently:
https
On 20/02/15 11:48, Matthew Booth wrote:
Gary Kotton came across a doozy of a bug recently:
https://bugs.launchpad.net/nova/+bug/1419785
In short, when you start a Nova compute, it will query the driver for
instances and compare that against the expected host of the instance
according
On 23/02/15 12:13, Gary Kotton wrote:
On 2/23/15, 2:05 PM, Matthew Booth mbo...@redhat.com wrote:
On 20/02/15 11:48, Matthew Booth wrote:
Gary Kotton came across a doozy of a bug recently:
https://bugs.launchpad.net/nova/+bug/1419785
In short, when you start a Nova compute
get the online schema changes, but
for the moment it seems like a lot of complication for a relatively
small problem.
Do you use the global or project scope, btw?
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05
writer in certain circumstances. That
problem would have to be handled separately, perhaps at the messaging layer.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 19/02/15 18:57, Jay Pipes wrote:
On 02/19/2015 05:18 AM, Matthew Booth wrote:
Nova contains a config variable osapi_compute_unique_server_name_scope.
Its help text describes it pretty well:
When set, compute API will consider duplicate hostnames invalid within
the specified scope
on MySQL.
See this series if you're interested:
https://review.openstack.org/#/c/156299/
[2] For specifics, see my ramblings here:
https://review.openstack.org/#/c/141115/7/nova/db/sqlalchemy/api.py,cm
line 2547
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK
to have the same 'host'.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
, which is what I've proposed.
Matt
Thanks
Gary
On 2/11/15, 5:31 PM, Matthew Booth mbo...@redhat.com wrote:
I just posted this:
https://review.openstack.org/#/c/154907/
as an alternative fix for critical bug:
https://bugs.launchpad.net/nova/+bug/1419785
I've just knocked this up
.
No. There are no duplicates.
-Sylvain
On 2/11/15, 5:55 PM, Matthew Booth mbo...@redhat.com wrote:
On 11/02/15 15:49, Gary Kotton wrote:
Hi,
I do not think that that is a healthy solution. That effectively would
render a cluster down if the compute node goes down. That would be a
real
disaster
, leaving inconsistent state in its wake
as it runs.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 10/02/15 18:29, Jay Pipes wrote:
On 02/10/2015 09:47 AM, Matthew Booth wrote:
On 09/02/15 18:15, Jay Pipes wrote:
On 02/09/2015 01:02 PM, Attila Fazekas wrote:
I do not see why not to use `FOR UPDATE` even with multi-writer or
Does the retry/swap way really solve anything here?
snip
Am I
to say the opposite.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
guidelines for comments on all reviews.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
to dramatically increase. We should
standardise on read committed.
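For reference, if we did standardise, pinning the isolation level is a one-liner at engine creation time in SQLAlchemy (sketch only; the connection URL is a placeholder):

from sqlalchemy import create_engine

# Sketch only: the URL is a placeholder. SQLAlchemy lets us pin the
# transaction isolation level when the engine is created.
engine = create_engine(
    'mysql+pymysql://nova:secret@localhost/nova',
    isolation_level='READ COMMITTED',
)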
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 04/02/15 19:04, Jay Pipes wrote:
On 02/04/2015 12:05 PM, Sahid Orentino Ferdjaoui wrote:
On Wed, Feb 04, 2015 at 04:30:32PM +, Matthew Booth wrote:
I've spent a few hours today reading about Galera, a clustering solution
for MySQL. Galera provides multi-master 'virtually synchronous
contention even in single master.
I don't think so, but you can certainly still have real deadlocks.
They're bugs, though.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
fastest but ask people not to interrupt.
+1
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
safer to fail.
Matt
[1] Standard caveats apply.
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
:1 request:transaction relationship. We're moving towards it, but
potentially long running requests will always have to use multiple
transactions.
However, I take your point. I think retry on transaction failure is
something which would benefit from standard handling in a library.
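To illustrate the kind of standard handling I mean, a minimal retry decorator might look something like this (a sketch only, with a placeholder exception; whatever lands in oslo.db may look quite different):

import functools
import time

class TransactionFailure(Exception):
    # Placeholder for whatever exception the backend actually raises.
    pass

def retry_on_txn_failure(attempts=3, delay=0.1):
    # Retry the wrapped callable a few times on transaction failure,
    # re-raising once the attempts are exhausted.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except TransactionFailure:
                    if attempt == attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator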
Matt
--
Matthew
deadlocks is hard enough work. Adding the possibility that they
might not even be there is just evil.
Incidentally, we're currently looking to replace this stuff with some
new code in oslo.db, which is why I'm looking at it.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone
-cluster-and-galera/
[3]
http://www.percona.com/blog/2013/03/03/investigating-replication-latency-in-percona-xtradb-cluster/
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
On 30/01/15 19:06, Mike Bayer wrote:
Matthew Booth mbo...@redhat.com wrote:
At some point in the near future, hopefully early in L, we're intending
to update Nova to use the new database transaction management in
oslo.db's enginefacade.
Spec:
http://git.openstack.org/cgit/openstack
comments on the usefulness of slave databases, and the
desirability of making maximum use of them?
Thanks,
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
_TransactionContextManager, and moving code directly into
RequestContext would be a very invasive coupling.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
ensure
the same logic applies when we save the info cache directly? It's
certainly achievable, but it's just adding to the mess. My proposal is
safe, efficient, and simple.
Matt
--
Matthew Booth
Red Hat Engineering, Virtualisation Team
Phone: +442070094448 (UK)
GPG ID: D33C3490
GPG FPR: 3733