On 05/19/2018 05:58 PM, Blair Bethwaite wrote:
> > G'day Jay,
> > On 20 May 2018 at 08:37, Jay Pipes <jaypi...@gmail.com> wrote:
> >> If it's not the VM or baremetal machine that is using the accelerator, what
> >> is?
> > It will
On 20 May 2018 at 08:37, Jay Pipes wrote:
> If it's not the VM or baremetal machine that is using the accelerator, what
> is?
It will be a VM or BM, but I don't think accelerators should be tied
to the life of a single instance if that isn't technically necessary.
Relatively Cyborg-naive question here...
I thought Cyborg was going to support a hot-plug model. So I certainly
hope it is not the expectation that accelerators will be encoded into
Nova flavors? That will severely limit its usefulness.
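For anyone not across it, "encoded into Nova flavors" here refers to the
existing PCI-alias style of scheduling - an illustrative sketch only, with
made-up names and device IDs:

    # nova.conf - define a PCI alias for the device (IDs illustrative)
    [pci]
    alias = { "vendor_id": "10de", "product_id": "1db4", "device_type": "type-PCI", "name": "gpu" }

    # then bake it into a flavor
    openstack flavor set gpu.small --property "pci_passthrough:alias"="gpu:1"

That static coupling between device and flavor (and hence instance lifetime)
is exactly what a hot-plug model would avoid.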
On 19 May 2018 at 23:30, Jay Pipes wrote:
Please do not default to deleting it; otherwise someone will eventually be
back here asking why an irate user has just lost data. The better scenario
is that the rebuild will fail (early - before impact to the running
instance) with a quota error.
On Thu., 15 Mar. 2018, 00:46 Matt
It may also be worth testing a step where Nova & Neutron remain at N-1.
On 20 December 2017 at 04:58, Matt Riedemann wrote:
> During discussion in the TC channel today, we got talking about how
> there is a perception that you must upgrade all of the services
The former - we're running Cells so only have a single region currently
(except for Swift where we have multiple proxy endpoints around the
country, all backed by a global cluster, but they have to be different
regions to put them all in the service catalog). See
On 14 December 2017 at 17:36, Clint Byrum wrote:
> The batch size for "upgrade the whole cloud" is too big. Let's help our
> users advance components one at a time, and then we won't have to worry
> so much about doing the whole integrated release dance so often.
Is there any
Hi all - please note this conversation has been split variously across
-dev and -operators.
One small observation from the discussion so far is that it seems as
though there are two issues being discussed under the one banner:
1) maintain old releases for longer
2) do stable releases less often
I missed this session but the discussion strikes a chord as this is
something I've been saying in my user survey every 6 months.
On 11 November 2017 at 09:51, John Dickinson wrote:
> What I heard from ops in the room is that they want (to start) one release a
> year whose branch
A related bug that hasn't seen any love for some time:
On 6 October 2017 at 07:47, James Penick wrote:
> Hey Pino,
> mriedem pointed me to the vendordata code which shows some fields are
> passed (such as project ID) and that
On 29 September 2017 at 22:26, Bob Ball wrote:
> The concepts of PCI and SR-IOV are, of course, generic, but I think out of
> principle we should avoid a hypervisor-specific integration for vGPU (indeed
> Citrix has been clear from the beginning that the vGPU integration we
On 28 September 2017 at 07:10, Premysl Kouril wrote:
> Hi, I work with Jakub (the OP of this thread) and here are my two
> cents: I think what is critical to realize is that KVM virtual
> machines can have substantial memory overhead of up to 25% of memory,
On 27 September 2017 at 23:19, Jakub Jursa wrote:
> 'hw:cpu_policy=dedicated' (while NOT setting 'hw:numa_nodes') results in
> libvirt pinning CPU in 'strict' memory mode
> (from libvirt xml for given instance)
> So yeah, the
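For illustration only, the flavor setting being discussed is along these
lines (flavor name made up):

    openstack flavor set pinned.small --property hw:cpu_policy=dedicated

and the resulting libvirt domain XML (with no hw:numa_nodes set) then carries
the strict numatune mentioned above, roughly:

    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>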
On 27 September 2017 at 22:40, Belmiro Moreira wrote:
> In the past we used the tabs but latest Horizon versions use the visibility
> column/search instead.
> The issue is that we would like the old images to continue to be
> discoverable by everyone and
Also CC-ing os-ops as someone else may have encountered this before
and have further/better advice...
On 27 September 2017 at 18:40, Blair Bethwaite wrote:
> On 27 September 2017 at 18:14, Stephen Finucane <sfinu...@redhat.com> wrote:
>> What yo
On 27 September 2017 at 18:14, Stephen Finucane wrote:
> What you're probably looking for is the 'reserved_host_memory_mb' option. This
> defaults to 512 (at least in the latest master) so if you up this to 4192 or
> similar you should resolve the issue.
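For reference, that option lives in nova.conf on the hypervisor; a minimal
sketch using the value suggested above:

    # /etc/nova/nova.conf on the compute node
    [DEFAULT]
    reserved_host_memory_mb = 4192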
I don't see how this
On 20 Sep. 2017 7:58 pm, "Belmiro Moreira" <
> Discovering the latest image release is hard. So we added an image
> that we update when a new image release is available. Also, we patched
> horizon to show
I've been watching this thread and I think we've already seen an
excellent and uncontroversial suggestion towards simplifying initial
deployment of OpenStack - that was to push towards encoding Constellations
into the deployment and/or config management projects.
On 26 September 2017 at 15:44, Adam
Please don't make these 400s - it should not be a client error to be
unaware of the service status ahead of time.
On 12 July 2017 at 11:18, Matt Riedemann wrote:
> I'm looking for some broader input on something being discussed in this
On 27 June 2017 at 23:47, Sean Dague wrote:
> I still think I've missed, or not grasped, during this thread how a SIG
> functions differently than a WG, besides name. Both in theory and practice.
I think for the most part SIG is just a more fitting moniker for some
On 27 June 2017 at 23:47, Thierry Carrez wrote:
> Setting up a common ML for common discussions (openstack-sigs) will
> really help, even if there will be some pain setting them up and getting
> the right readership to them :)
It's worth a try! I agree it will probably
Resend for openstack-dev with proper list perms...
-- Forwarded message --
From: Blair Bethwaite <blair.bethwa...@monash.edu>
Date: 27 June 2017 at 23:24
Subject: [scientific] IRC Meeting (Tues 2100 UTC): Science app catalogues,
network security of research computing on Ope
There is a not insignificant degree of irony in the fact that this
conversation has splintered so that anyone only reading openstack-operators
and/or user-committee is missing 90% of the picture. Maybe I just need a
new ML management strategy.
I'd like to add a +1 to Sean's suggestion about
On 2 June 2017 at 23:13, Alexandra Settle wrote:
> Oh I like your thinking – I’m a pandoc fan, so, I’d be interested in
> moving this along using any tools to make it easier.
I can't realistically offer much time on this but I would be happy to
Likewise for option 3. If I recall correctly from the summit session
that was also the main preference in the room?
On 2 June 2017 at 11:15, George Mihaiescu wrote:
> +1 for option 3
> On Jun 1, 2017, at 11:06, Alexandra Settle wrote:
Hopefully you've all noticed this by now - the timing of the WG
sessions (lightning talks, meeting, BoF) has changed a little since
first published. I've just updated the etherpad to reflect that now:
Tues 11:15am - 11:55am -
Morning all -
Apologies for the shotgun email. But it looks like we still have one or
two spots available for lightning talks if anyone has work they want
to share and/or discuss:
On 28 April 2017 at 06:19, George
-- Forwarded message --
From: Blair Bethwaite <blair.bethwa...@gmail.com>
Date: 6 May 2017 at 17:55
Subject: GPU passthrough success and failure records
To: "openstack-oper." <openstack-operat...@lists.openstack.org>
I've been (very slowly) working
On 2 May 2017 at 05:50, Jay Pipes wrote:
> Masahito Muroi is currently marked as the moderator, but I will indeed be
> there and happy to assist Masahito in moderating, no problem.
The more the merrier :-).
There is a rather unfortunate clash here with the Scientific-WG BoF
> ... a temporal aspect to them (i.e.
> allocations in the future).
> A separate system (hopefully Blazar) is needed to manage the time-based
> associations to inventories of resources over a period in the future.
>>> I'm not sure how the above i
Thanks Rochelle. I encourage everyone to dump thoughts into the etherpad
(feel free to garden it as you go!) so we can have some chance of
organising a coherent session. In particular it would be useful to
know what is going to be most
On 29 April 2017 at 01:46, Mike Dorman wrote:
> I don’t disagree with you that the client side choose-a-server-at-random is
> not a great load balancer. (But isn’t this roughly the same thing that
> oslo-messaging does when we give it a list of RMQ servers?) For us it’s
We at Nectar are in the same boat as Mike. Our use-case is a little
bit more about geo-distributed operations though - our Cells are in
different States around the country, so the local glance-apis are
particularly important for caching popular images close to the
nova-computes. We consider these
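For anyone wondering what the per-cell list looks like in practice, it is
just the glance endpoint list in each compute node's nova.conf - hostnames
below are illustrative only:

    [glance]
    api_servers = http://glance-cell1-a.example.org:9292,http://glance-cell1-b.example.org:9292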
A quick FYI that this session on special hardware
(https://etherpad.openstack.org/p/BOS-forum-special-hardware) is a
thing at this Forum.
It would be great to see a good representation from both the Nova
Devil's advocate - what is "full enough"? Surely another channel is
essentially free and having flexibility in available timing is of utmost
On 8 Nov 2016 5:37 PM, "Tony Breeds" wrote:
> On Mon, Nov 07, 2016 at 05:52:43PM +0100, lebre.adr...@free.fr wrote:
2000UTC might catch a few kiwis, but it's 6am everywhere on the east
coast of Australia, and even earlier out west. 0800UTC, on the other
hand, would be more sociable.
On 26 May 2016 at 15:30, Nikhil Komawar wrote:
> Thanks Sam. We purposefully chose that time
I've just been doing some user consultation and pondering a case for
use of the Qemu Guest Agent in order to get quiesced backups.
In doing so I found myself wondering why on earth I need to set an
image property in Glance (hw_qemu_guest_agent) to toggle such a thing
for any particular
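For context, the toggle in question is a per-image property set along these
lines (the guest also needs the qemu-guest-agent service running for quiesce
to actually work):

    openstack image set --property hw_qemu_guest_agent=yes <image-uuid>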
On 20 November 2014 05:25, openstack-dev-requ...@lists.openstack.org wrote:
Date: Wed, 19 Nov 2014 10:57:17 -0500
From: Doug Hellmann d...@doughellmann.com
To: OpenStack Development Mailing List (not for usage questions)
We've been investigating some guest filesystem issues recently and noticed
what looks like a slight inconsistency in base image handling in
block-migration. We're on Grizzly from the associated Ubuntu cloud archive
and using qcow on local storage.
What we've noticed is that after
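For anyone wanting to check the same thing on their hypervisors, the backing
chain can be inspected with qemu-img (path below assumes the default
local-storage instance layout):

    qemu-img info /var/lib/nova/instances/<instance-uuid>/disk
    # the "backing file:" line shows which _base image the qcow overlay points at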