A fair bit of my setup uses service names for automatic naming
elsewhere. For example, we have NFS mounts like /path/to/
and automatic DNS like .example.org
Of course, if a user is determined to create a less useful
instance, they can create one with a name like "" or
During our migration from nova-network to neutron, we'll be running
two nova regions in parallel in our cloud. I have designate-sink
working just fine in our existing (nova-network) region, but since sink
is only listening to the rabbit queue of that region, it's oblivious
to events
Sometime soon I'm going to need to migrate a few hundred VMs from one
(nova-network-using) cloud to a newer (neutron-having) cloud. Google
searches for suggestions about inter-cloud migration get me a whole lot
of pages that suggest I do this by taking snapshots in the old
cloud and
I just now upgraded my test install (nova, keystone and glance) from
Liberty to Mitaka. Immediately after the upgrade, every compute query
in the openstack client or Horizon started returning a 404.
I resolved this problem by changing all of my nova endpoints in Keystone
that looked like
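(For future readers: the general shape of that fix can be sketched with the openstack CLI. All IDs, hosts, and URLs below are placeholders, not the actual values from this deployment; the gist is that Mitaka's nova serves the v2.1 API by default, so stale catalog entries pointing at the old compute API path start returning 404s.)

```shell
# Sketch only: endpoint IDs, region, and host are illustrative.
openstack endpoint list --service nova        # find the stale endpoints
openstack endpoint delete <old-endpoint-id>
# Recreate the endpoint against the v2.1 compute API:
openstack endpoint create --region RegionOne \
    compute public http://controller:8774/v2.1
```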
I've just noticed that, as predicted, Mirantis has switched off
their Openstack package repos (e.g.
http://liberty-jessie.pkgs.mirantis.com/). Are there any updates about
a replacement repo, or a newly-reconstituted packaging team?
I remember being convinced back in February that
I use keystone tokens for two things:
1) To authorize a Horizon session. I like these to live a nice, long
time so I don't have to re-auth with the web UI over and over.
2) To authorize service users running cron jobs and other maintenance
scripts. These don't need to last long at all;
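(The tension between those two cases comes down to a single knob: assuming a stock keystone.conf, token lifetime is set globally in seconds, so both kinds of token get the same expiry. The value below is illustrative.)

```ini
[token]
# Token lifetime in seconds (keystone's default is 3600).
# Long enough for comfortable Horizon sessions (case 1); cron-job
# tokens (case 2) unavoidably get the same lifetime.
expiration = 604800
```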
Googling for nova-network migration advice gets me a lot of hits
but many are fragmentary and/or incomplete[1][2]. I know that lots of
people have gone through this process, though, and that there are
probably as many different solutions as there are migration stories.
So: If you
On 2/15/17 8:22 PM, gustavo panizzo wrote:
On Wed, Feb 15, 2017 at 01:42:51PM +0100, Thomas Goirand wrote:
Hi there,
It's really sad news, for both you and OpenStack & Debian.
Thank you for your work, Thomas -- I hope you are able to continue it.
I'm slow on the uptake here, for which I
On 12/15/16 12:50 PM, Andrew Bogott wrote:
I'm trying to set up an unprivileged keystone account that can query
(but not modify) various openstack services.
Many thanks to Steve Martinelli who helped me sort this out on IRC
today. As I understand it, with the v3 keystone api there's no longer
I'm trying to set up an unprivileged keystone account that can query
(but not modify) various openstack services.
This is generally going pretty well. I've added an account with a no-op
role ('observer') and then modified a bunch of my policy files to permit
read-only queries:
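(As a concrete illustration of that kind of policy edit — the rule names here are examples in the style of a nova policy.json, not the poster's actual files, and they vary by service and release:)

```json
{
    "observer_or_owner": "role:observer or rule:admin_or_owner",
    "compute:get": "rule:observer_or_owner",
    "compute:get_all": "rule:observer_or_owner"
}
```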
This question has mostly been answered in this thread already, but I'm
recapping to highlight a potential pitfall.
On 11/22/16 11:10 PM, Kevin Wilson wrote:
Hello,
When there are periods of idle activity in the Horizon WebUI of
OpenStack, I am logged out of it and redirected to the login
I've just read
http://developer.openstack.org/api-ref/identity/v3-ext/inherit.html
and I think I understand it, but can't put it into practice. I have a
user with a role on a domain, and a project in that domain, but I see no
evidence that the role assignment is inherited by the project. Am
Since upgrading to liberty, we've noticed some very dramatic lags in the
application of security group updates. Experience shows that it takes
somewhere between 15 minutes and forever for changes to take effect.
For example, I just now added a source group rule to a project:
Ingress -
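(For reference, a source-group rule of that era could be added from the nova CLI; the group names and port range below are made up, not the ones from the original post:)

```shell
# Allow ingress on all TCP ports from instances in the 'web' group
# into instances in the 'default' group (names are placeholders).
nova secgroup-add-group-rule default web tcp 1 65535
```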
Short version:
I need a way to provide a dynamically-updating .yaml file to a nova
instance. Is there an existing solution for this in nova that I'm
overlooking?
Long version:
In my current setup, each Nova instance has a puppet node
definition in ldap, which is loaded via the
I have SESSION_TIMEOUT in Horizon set to 7 days, and 'expiration'
in keystone set to slightly more than 7 days. I'm still having to log
in a bunch, though, and now I've finally read the code, here:
https://github.com/openstack/horizon/commit/dc7668177a2ef638d9a86e7f6c7f62b075b9592c
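(As I read it, Horizon ends the session at whichever comes first: SESSION_TIMEOUT or the expiry of the keystone token obtained at login — so both have to cover the desired window. A minimal local_settings.py sketch; the value is illustrative:)

```python
# Sketch of the relevant Horizon setting (local_settings.py).
# Horizon logs the user out at whichever comes first: this timeout
# or the expiry of the keystone token obtained at login, so the
# keystone [token] expiration must also exceed this value.
SESSION_TIMEOUT = 7 * 24 * 3600  # seconds; 604800 = 7 days
```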
On 3/9/16 8:57 AM, Kiall Mac Innes wrote:
On 09/03/16 03:48, Andrew Bogott wrote:
Due to the weird public/private hybrid nature of my cloud, I'm
frequently needing to abuse policy.conf files in unexpected ways.
Today's challenge is the designate policy. Right now we're running a
custom
Due to the weird public/private hybrid nature of my cloud, I'm
frequently needing to abuse policy.conf files in unexpected ways.
Today's challenge is the designate policy. Right now we're running a
custom solution that maintains all public dns entries under a single
domain: wmflabs.org.
/?p=2356
All are welcome to reuse our code; I'm also happy to hear from anyone
about how I should have done it instead.
-Andrew
On 2/29/16 10:23 AM, Andrew Bogott wrote:
I require two-factor authentication for users who have permissions
to create and delete instances in Nova. Since we're
I require two-factor authentication for users who have permissions
to create and delete instances in Nova. Since we're in the process of
migrating from our custom webUI to Horizon, I need to add an additional
field (totp token) to the Horizon login screen and get that value passed
to
I'm just about ready to upgrade to Liberty. I've started a dry run by
setting up a trusty machine with the ubuntu kilo cloud archive in my
sources.list.d, and installed nova-compute, version
2015.1.2-0ubuntu2~cloud0.
Next I moved my cloud-archive reference to liberty, and ran 'apt-get
Just a quick followup for future readers of this thread...
On 2/19/16 10:24 PM, John Belamaric wrote:
On Feb 19, 2016, at 6:03 PM, Andrew Bogott <abog...@wikimedia.org> wrote:
I'm running designate kilo with a pdns backend. I originally set things up
in Juno, with designate-c
On 2/19/16 10:24 PM, John Belamaric wrote:
Thanks for the quick response!
On Feb 19, 2016, at 6:03 PM, Andrew Bogott <abog...@wikimedia.org> wrote:
I'm running designate kilo with a pdns backend. I originally set things up
in Juno, with designate-central syncing directly to th
I'm running designate kilo with a pdns backend. I originally set
things up in Juno, with designate-central syncing directly to the pdns
mysql backend. Everything was stable until my upgrade to Kilo. During
the upgrade to Kilo I was advised to add the mdns service, which I did,
and I got
I would like to be able to create some accounts with cloud-wide
permissions in my OpenStack install. Specifically:
'observer' permissions:
This would be an account (or type of account) that has 'read-only
access' to all tenants. This would be used to provide a public view
onto cloud
On 9/16/15 4:03 PM, Clint Byrum wrote:
Excerpts from Andrew Bogott's message of 2015-09-16 10:03:48 -0700:
My users are mostly happy with VMs, but I get occasional requests for
physical hardware in order to host databases, run performance tests,
etc. I'd love to rack a dozen small servers and
On 9/17/15 11:31 PM, Clint Byrum wrote:
Excerpts from Jeff Peeler's message of 2015-09-17 20:07:00 -0700:
On Wed, Sep 16, 2015 at 5:03 PM, Clint Byrum wrote:
Excerpts from Andrew Bogott's message of 2015-09-16 10:03:48 -0700:
My users are mostly happy with VMs, but I get
My users are mostly happy with VMs, but I get occasional requests for
physical hardware in order to host databases, run performance tests,
etc. I'd love to rack a dozen small servers and graft the ironic
service onto my existing cloud in order to fulfill these sporadic
needs. I'm given
On 7/28/15 1:32 PM, Andrew Bogott wrote:
I'm running Icehouse with the 'legacy' nova-network service and a
single network node.
Everything in my cluster is running Ubuntu Trusty and ready for an
upgrade to Juno, except for my network node, which is still running
Precise.
snip
It turns out
Most of my virt nodes run the standard Trusty kernel, 3.13.0-52-generic
or similar. Recently I had cause to shut down one of them, so I started
by running a scripted 'nova suspend' of all instances. A couple of
instances into the script, the kernel locked up and the whole system
died.
suggestion :) Neutron
doesn't support my current network topology so I'm not really interested
in switching... I just want to move from one nova-network node to
another. (It doesn't help that block migration is sort of broken in
Icehouse.)
-A
On Wed, Jul 29, 2015 at 12:02 AM, Andrew
I'm running Icehouse with the 'legacy' nova-network service and a single
network node.
Everything in my cluster is running Ubuntu Trusty and ready for an
upgrade to Juno, except for my network node, which is still running
Precise. How do I upgrade it without causing cloud-wide downtime? I
On 6/18/15 1:43 AM, Morgan Fainberg wrote:
On Jun 17, 2015, at 23:14, Tim Bell tim.b...@cern.ch wrote:
-Original Message-
From: Jan van Eldik [mailto:jan.van.el...@cern.ch]
Sent: 17 June 2015 20:54
To: openstack@lists.openstack.org
Subject: Re: [Openstack] How should an instance learn
-Original Message-
From: Andrew Bogott [mailto:abog...@wikimedia.org]
Sent: Tuesday, June 16, 2015 6:46 PM
To: openstack@lists.openstack.org
Subject: [Openstack] How should an instance learn what tenant it is in?
I have many uses cases in which an instance needs to know what project
I have many uses cases in which an instance needs to know what
project it is in. Right now I accomplish this through an intricate hack
which involves hooking instance creation and writing the tenant name to
an ldap record.
I'm considering rewriting this hack to write the tenant name
On 5/7/15 1:23 PM, Antonio Messina wrote:
On Thu, May 7, 2015 at 7:30 PM, Andrew Bogott abog...@wikimedia.org wrote:
On 5/7/15 2:34 AM, Antonio Messina wrote:
On Wed, May 6, 2015 at 10:56 PM, Andrew Bogott abog...@wikimedia.org
wrote:
Since time immemorial, I've accepted as a fact
On 5/14/15 12:55 PM, Antonio Messina wrote:
On Thu, May 14, 2015 at 6:19 PM, Andrew Bogott abog...@wikimedia.org wrote:
OK, we've made some progress with this -- the solution seems to involve
changing my dmz_cidr setting and switching our bridge to promiscuous mode.
I don't have any dmz_cidr
On 5/7/15 2:34 AM, Antonio Messina wrote:
On Wed, May 6, 2015 at 10:56 PM, Andrew Bogott abog...@wikimedia.org wrote:
Since time immemorial, I've accepted as a fact of life that routing from
a nova instance to another instance via floating ip is impossible. We've
coped with this via
Since time immemorial, I've accepted as a fact of life that routing
from a nova instance to another instance via floating ip is impossible.
We've coped with this via a hack in dnsmasq, setting an alias to
rewrite public IPs to the corresponding internal IP.
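(For anyone curious what that hack looks like: dnsmasq's alias option rewrites addresses in DNS answers. A made-up example, mapping a public /24 onto a private one — these are not the actual ranges in use:)

```
# dnsmasq.conf sketch (ranges are illustrative): rewrite answers in
# the public range to the matching address in the private range.
alias=203.0.113.0,10.68.16.0,255.255.255.0
```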
Right now I'm trying to
On 2/20/15 9:06 AM, Mike Dorman wrote:
I can report that we do use this option ('global' setting.) We have to
enforce name uniqueness for instances’ integration with some external
systems (namely AD and Spacewalk) which require unique naming.
However, we also do some external name validation
, release, etc.) and allows us to shuffle around a
small number of public IPs amongst a much larger number of instances.
Your suggestion doesn't address that, does it? Short of my implementing
a bunch of custom stuff on my own?
-A
On Sun, Dec 21, 2014 at 7:00 PM, Andrew Bogott abog
--
On Mon, Dec 22, 2014 at 1:46 AM, Andrew Bogott abog...@wikimedia.org wrote:
On 12/22/14 2:08 PM, Kevin Benton wrote:
Can't you simulate the same topology as the FlatDHCPManager + Floating IPs
with a shared network attached to a router which is then attached to an
external network?
Mmmmaybe
Greetings!
I'm about to set up a new cloud, so for the second time this year I'm
facing the question of Neutron vs. nova-network. In our current setup
we're using nova.network.manager.FlatDHCPManager with floating IPs.
This config has been working fine, and would probably be our first