Hi,
We had a session at the Austin summit on maintenance:
https://etherpad.openstack.org/p/AUS-ops-Nova-maint
Now the discussion has reached the point where we should start prototyping a
service to host the maintenance. For maintenance, Nova could have a link to this
new service, but no functionali
Hi all,
Does anyone know whether there is a way to disable the novnc console on a
per instance basis?
Cheers,
Blair
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac
Excerpts from Adam Kijak's message of 2016-10-12 12:23:41 +:
> >
> > From: Xav Paice
> > Sent: Monday, October 10, 2016 8:41 PM
> > To: openstack-operators@lists.openstack.org
> > Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How
If the fault domain is a concern, you can always split the cloud up into 3
regions, each with a dedicated Ceph cluster. It isn't necessarily going to
mean more hardware, just logical splits. This is kind of assuming that the
network doesn't share the same fault domain, though.
Alternatively, you can s
I highly recommend looking into Giftwrap for that, until there are UCA
packages.
The things missing from the packages that Giftwrap produces are init
scripts, example config files, and the various user and directory setup
steps. That's easy enough to put into config management or a separate
pack
On Wed, Oct 12, 2016 at 12:34 PM, James Penick wrote:
> Are you backing both glance and nova-compute with NFS? If you're only
> putting the glance store on NFS you don't need any special changes. It'll
> Just Work.
I've got both glance and nova backed by NFS. Haven't put up cinder
yet, but that w
Are you backing both glance and nova-compute with NFS? If you're only
putting the glance store on NFS you don't need any special changes. It'll
Just Work.
On Wed, Oct 12, 2016 at 11:18 AM, Curtis wrote:
> On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
> wrote:
> > We don’t use shared storag
On Wed, Oct 12, 2016 at 11:58 AM, Kris G. Lindgren
wrote:
> We don’t use shared storage at all. But I do remember what you are talking
> about. The issue is that compute nodes weren’t aware they were on shared
> storage, and would nuke the backing image from shared storage, after all vm’s
> on
Tobias does bring up something that we have ran into before.
With NFSv3, user mapping is done by numeric ID, so you need to ensure that all
of your servers use the same UID for nova/glance. If you are using
packages/automation that do useradd’s without pinning the same UID, it’s *VERY*
easy to have mismatched
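The UID check described above can be sketched as a quick script (a hypothetical
helper, not an official tool): given the UID each host assigned to the nova
user, report any host that disagrees with the majority.

```python
# Hypothetical sketch: detect hosts whose 'nova' UID differs from the
# majority, since NFSv3 maps users across machines by numeric ID, not name.
from collections import Counter

def find_uid_mismatches(host_uids):
    """host_uids: dict of hostname -> numeric UID of the 'nova' user.

    Returns the hosts whose UID differs from the most common one.
    """
    if not host_uids:
        return {}
    majority_uid, _ = Counter(host_uids.values()).most_common(1)[0]
    return {h: uid for h, uid in host_uids.items() if uid != majority_uid}

# Example: compute2 was built from a different image and got a different UID.
hosts = {"compute1": 162, "compute2": 998, "compute3": 162}
print(find_uid_mismatches(hosts))  # {'compute2': 998}
```

In practice you would collect the real UIDs with something like
`getent passwd nova` on each host; the point is only that the comparison must
be on the numeric ID, not the user name.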
Hi,
We have an environment with glance and cinder using NFS.
It's important that they have the correct permissions. The shares should be
owned by nova on the compute nodes if mounted at /var/lib/nova/instances,
and the same for nova and glance on the controller.
It's important that you map the glance and no
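As a concrete sketch of the compute-node side (the NFS server name and export
path below are assumptions, not from the original message):

```
# /etc/fstab on a compute node -- hypothetical server and export names
nfs-server:/export/nova-instances  /var/lib/nova/instances  nfs  defaults  0  0

# after mounting, ensure nova owns the tree:
#   chown -R nova:nova /var/lib/nova/instances
```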
We don’t use shared storage at all. But I do remember what you are talking
about. The issue is that compute nodes weren’t aware they were on shared
storage, and would nuke the backing image from shared storage, after all vm’s on
*that* compute node had stopped using it. Not after all vm’s had s
Hi All,
I've never used NFS with OpenStack before. But I am now with a small
lab deployment with a few compute nodes.
Is there anything special I should do with NFS and glance and nova? I
remember there was an issue way back when of images being deleted b/c
certain components weren't aware they a
On 10/12/2016 10:17 AM, Ulrich Kleber wrote:
Hi,
I didn’t see an official announcement, so I’d like to point you to the new
release of OPNFV.
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0
OPNFV is an open source project
Hi Matt, Tim,
Thanks for asking. We’ve used the API in the past as a way of getting the
usage data out of Nova. We had problems running ceilometer at scale and
this was a way of retrieving the data for our accounting reports. We
created a special policy configuration to allow authorised user
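A policy entry of roughly this shape could grant a dedicated reporting role
read access to the usage API (the role name and exact rule below are
assumptions; the os-simple-tenant-usage policy targets do exist in Nova):

```json
{
    "os_compute_api:os-simple-tenant-usage:list": "role:usage_reporter or rule:admin_api",
    "os_compute_api:os-simple-tenant-usage:show": "role:usage_reporter or rule:admin_api"
}
```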
> On 12 Oct 2016, at 07:00, Matt Riedemann wrote:
>
> The current form of the nova os-diagnostics API is hypervisor-specific, which
> makes it pretty unusable in any generic way, which is why Tempest doesn't
> test it.
>
> Way back when the v3 API was a thing for 2 minutes there was work done
Hi,
I didn't see an official announcement, so I'd like to point you to the new
release of OPNFV.
https://www.opnfv.org/news-faq/press-release/2016/09/open-source-nfv-project-delivers-third-platform-release-introduces-0
OPNFV is an open source project and one of the most important users of
OpenStack
The current form of the nova os-diagnostics API is hypervisor-specific,
which makes it pretty unusable in any generic way, which is why Tempest
doesn't test it.
Way back when the v3 API was a thing for 2 minutes there was work done
to standardize the diagnostics information across virt drivers
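To illustrate the problem: because each virt driver returns its own key names,
any generic consumer of os-diagnostics ends up writing a per-driver translation
layer. A hypothetical sketch (the key names below are illustrative of
libvirt-style vs. other-driver-style payloads, not an exact API contract):

```python
# Hypothetical sketch of why hypervisor-specific diagnostics are hard to
# consume generically: each driver uses its own key names and units, so a
# caller must maintain a translation table per driver.
LIBVIRT_STYLE = {"memory": 524288, "cpu0_time": 17300000000, "vda_read": 262144}
OTHER_STYLE = {"memory_mb": 512, "disk_read_bytes": 262144}

def normalize(diag):
    """Map driver-specific keys onto one schema (illustrative only)."""
    if "memory" in diag:        # libvirt-style payload, memory in KB
        return {"memory_kb": diag["memory"],
                "disk_read_bytes": diag.get("vda_read")}
    if "memory_mb" in diag:     # another driver's payload, memory in MB
        return {"memory_kb": diag["memory_mb"] * 1024,
                "disk_read_bytes": diag.get("disk_read_bytes")}
    raise ValueError("unknown diagnostics format")

print(normalize(LIBVIRT_STYLE)["memory_kb"])  # 524288
print(normalize(OTHER_STYLE)["memory_kb"])    # 524288
```

The standardization work mentioned above would make this translation layer
unnecessary by defining one schema across all drivers.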
> ___
> From: Abel Lopez
> Sent: Monday, October 10, 2016 9:57 PM
> To: Adam Kijak
> Cc: openstack-operators
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
> you handle Nova on Ceph?
>
> Have you thought about dedicated pools for
Has anyone seen Ubuntu packages for Octavia yet?
We’re running Ubuntu 16.04 with Newton, but for whatever reason I cannot find
any Octavia packages…
So far I’ve only found in
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun the following:
Ubuntu Packages Setup: Install octavia with
>
> From: Xav Paice
> Sent: Monday, October 10, 2016 8:41 PM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do
> you handle Nova on Ceph?
>
> On Mon, 2016-10-10 at 13:29 +, Adam