Hello Operators,
Was wondering what you are using to gather OpenStack telemetry metrics?
I was looking at things like OpenStack service API requests/s, response times
(if possible), errors/s, RabbitMQ metrics, and, if possible, pending or
in-progress tasks, etc. Basically your more
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
From: matt <m...@nycresistor.com>
Date: Monday, November 10, 2014 at 2:36 PM
To: Kris G. Lindgren <klindg...@godaddy.com>
Cc: openstack-operators
I am also interested in the packaging discussion.
As many of you already know, we use anvil
https://github.com/stackforge/anvil for building our packages. The tool is
currently geared toward Red Hat; however, it will also build all of the required
pip deps as packages as well. It
-08 11:01 PM, Kris G. Lindgren wrote:
I don't think it's too much to ask for each project to include a script
that will build a venv that includes tox and the other relevant deps to
build the sample configuration.
This is already the case. Back then, I did the work of documenting how
you could
On 12/13/14, 8:09 AM, Thomas Goirand z...@debian.org wrote:
On 12/12/2014 02:17 AM, Kris G. Lindgren wrote:
Why do you think it's a good idea to restart doing the work of
distributions by yourself? Why not joining a common effort?
Well for a lot of reasons. Some of us carry our own patch
node statistics along with per
vm stats running on each compute node.
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On Mon, Nov 10, 2014 at 10:23 PM, Kris G. Lindgren
klindg...@godaddy.com wrote:
Hello
The last few times I have used it - I believe it checks to see which ones
belong to active VMs and only does stuff with those.
However, I have pretty much always restarted the ovs agent at the same
time as well.
That's what I was thinking as well. I looked at the install guide and it
says to set the stamp using your existing config, then run the db upgrade
using the new config - then do a db migration after the db upgrade. I thought
I had read somewhere that you had to migrate first, then upgrade. We were
In the case of a raw-backed qcow2 image (pretty sure that's the default),
the instance's root disk as seen inside the VM is made up of changes made
on the instance disk (qcow2 layer) + the base image (raw). Also, remember
that as currently coded a resize migration will almost always be a
migrate.
From: Joe Topjian <j...@topjian.net>
Date: Thursday, January 15, 2015 at 9:29 AM
To: Kris G. Lindgren <klindg...@godaddy.com>
Cc: "openstack-operators" <openstack-operators@lists.openstack.org>
, 2015 at 1:45 PM, Kris G. Lindgren
klindg...@godaddy.com wrote:
We did have an issue using celery on an internal application that we wrote -
but I believe it was fixed after much failover testing and code changes. We
also use logstash via rabbitmq and haven't noticed
Is the fact that neutron security groups don’t provide the same level of
isolation as nova security groups on your radar?
Specifically talking about: https://bugs.launchpad.net/neutron/+bug/1274034
I am sure there are a few other things that nova is doing that neutron is
currently not.
After our icehouse - juno upgrade we are noticing sporadic but frequent errors
from nova-metadata when trying to serve metadata requests. The error is the
following:
[req-594325c6-44ed-465c-a8e4-bd5a8e5dbdcb None] Failed to get metadata for ip:
x.x.x.x 2015-02-19 12:16:45.903 25007 TRACE
I can't help as we use config-drive to set networking and are just starting to
roll out Cent7 vm's. However, a huge change from Cent6 to Cent7 was the switch
from upstart/dhclient to systemd/systemd-dhcp.
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 1/8/15, 9:50 AM, Kris G. Lindgren klindg...@godaddy.com wrote:
On 1/8/15, 4:34 AM, Antonio Messina antonio.s.mess...@gmail.com wrote:
On Thu, Jan 8, 2015 at 12:12 PM, gustavo panizzo (gfa) g...@zumbi.com.ar
wrote:
From my past experience with a very similar issue, a stuck image download/conversion
will never transition to error, and will hold up all other VMs trying to be
deployed to the compute nodes using the same image.
Kris Lindgren
Senior Linux Systems Engineer
Can you post your haproxy config file?
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
From: Gui Maluf <guimal...@gmail.com>
Date: Tuesday, February 10, 2015 at 3:25 PM
To:
Event-based Monitoring Billing solution for OpenStack
Unsure what it's checking out for billing, though.
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 2/12/15, 9:17 AM, Matt Joyce m...@nycresistor.com wrote:
I thought stacktach was
:33 PM, Kris G. Lindgren
klindg...@godaddy.com wrote:
Can you post your haproxy config file?
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
From: Gui Maluf <guimal...@gmail.com>
Date
a message interchange between rabbitmq and nova-compute
and a
way of checking the result it would be great.
On Thu, Jan 15, 2015 at 1:45 PM, Kris G. Lindgren
klindg...@godaddy.com
wrote:
We did have an issue using celery on an internal application that we
wrote - but I believe it was fixed
On 1/8/15, 4:34 AM, Antonio Messina antonio.s.mess...@gmail.com wrote:
On Thu, Jan 8, 2015 at 12:12 PM, gustavo panizzo (gfa) g...@zumbi.com.ar
wrote:
On 01/08/2015 07:01 PM, Antonio Messina wrote:
On Thu, Jan 8, 2015 at 11:53 AM, gustavo panizzo (gfa)
g...@zumbi.com.ar
wrote:
i may
Can you post the cronjob/script that you use to correct the quotas?
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 3/20/15, 4:18 AM, Sam Morrison sorri...@gmail.com wrote:
We've had the following for a year or so but it doesn't help
Daniel,
2014.1.4 will have everything that was included in 2014.1.1, .2, and .3. No
need to do 3 updates - just update to the latest one. You might need to make
sure you don't need to do some db schema updates or something. But we have
never had an issue going to the latest maintenance
What does the [database] section of the configs look like?
Not only was the string changed but it was moved from the Default section to
[database]:
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated
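To illustrate the move described above, a sketch of the before/after in the config file (connection string and host are placeholders, not the poster's values):

```ini
# Old location (deprecated):
# [DEFAULT]
# sql_connection = mysql://nova:secret@db-host/nova

# New location:
[database]
connection = mysql://nova:secret@db-host/nova
```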
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 2/20/15, 9:29 AM, Kris G. Lindgren klindg...@godaddy.com wrote:
We have memcache enabled on the metadata servers. Part of our load is
because we have a cron job that pulls the metadata and does some stuff on
the server every
I have been working with dism and sileht on testing this patch in one of
our pre-prod environments. There are still issues with rabbitmq behind
haproxy that we are working through. However, in testing if you are using
a list of hosts you should see significantly better catching/fixing of
faults.
We are running:
kombu 3.0.24
amqp 1.4.6
rabbitmq 3.4.0
erlang R16B-03.10
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 5/1/15, 9:41 AM, Davanum Srinivas dava...@gmail.com wrote:
may i request folks post the versions of rabbitmq
I always thought that ebtables sat below iptables in the stack - but
still part of netfilter - and thus should be reasonably fast (I would argue faster
than a user-space lookup to ovs-vswitchd). Considering the rules being added
are small in number and trivial (on this port allow
want here. It sounds like you
want ARP filtering support in the Linux bridge driver. Is that
correct?
On Mon, May 18, 2015 at 12:22 AM, Kris G. Lindgren
klindg...@godaddy.com wrote:
I always thought that ebtables was below the stack in the iptables
schema -
but still part of netfilter - thus
Why wouldn't you separate your dev/test/production via tenants as well? That's
what we encourage our users to do. This would let you create flavors that give
dev/test fewer resources under exhaustion conditions and production more
resources. You could even pin dev/test to specific
We run this exact configuration, with the exception that we are using OVS
instead of the linux bridge agent. On your network nodes (those running
metadata/dhcp) you need to configure them exactly like you do your compute
services from the standpoint of the L2 agent. Once we did that, when the l2
Mike is talking about our specific way of doing floating ips - which is not the
default for neutron, so you do *NOT* have to add an allowed-address pair for
the floating ip to work.
You will however have to add to the security group rules to allow traffic from
whatever networks are connecting
Also,
If you are running oslo.messaging 1.8.1 or higher and are wondering why
you are no longer seeing notifications from nova. Change
notification_driver=nova.openstack.common.notifier.rpc_notifier to
notification_driver=messaging and you will start seeing notifications for
nova events again.
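In nova.conf terms, the change described above looks like this (a minimal before/after sketch):

```ini
[DEFAULT]
# Before (no longer emits notifications with oslo.messaging >= 1.8.1):
# notification_driver = nova.openstack.common.notifier.rpc_notifier
# After:
notification_driver = messaging
```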
Mike added our use case to the etherpad [1] today. I talked it over with Carl
Baldwin and he seemed OK with the format. If you guys want to add your use
cases to the etherpad - please do, as now is the time to make your voice heard.
Mike and I will be at the Neutron mid-cycle to discuss
case is represented.
Even if your use case matches whats been described - please add a +1 next to
it.
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
From: Kris G. Lindgren <klindg...@godaddy.com>
Date: Friday, June
br-int (and br-vlan and br-tun, depending on what modes you are using) should
get created automatically by openvswitch-agent on startup. br-ext you
should create manually via your OS's init scripts.
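A minimal sketch of creating br-ext by hand (assuming Open vSwitch; the uplink NIC name eth1 is a placeholder):

```shell
# Create the external bridge once at boot; --may-exist makes it idempotent
ovs-vsctl --may-exist add-br br-ext
# Plug the physical uplink (placeholder device name) into the bridge
ovs-vsctl --may-exist add-port br-ext eth1
```

In practice you would put the equivalent into your distro's init/network scripts so the bridge survives reboots.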
Kris Lindgren
Senior Linux Systems Engineer
On 6/17/15, 10:59 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
On 17/06/15 16:17, Kris G. Lindgren wrote:
See inline.
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 6/17/15, 5:12 AM, Neil Jerram neil.jer
[nova] and [neutron] subject markers.
Comments inline, Kris.
On 05/22/2015 09:28 PM, Kris G. Lindgren wrote:
During the Openstack summit this week I got to talk to a number of other
operators of large Openstack deployments about how they do networking.
I was happy, surprised even, to find that a number
+1 (we had 2 people at the mid-cycle last time, so we would not have been
impacted by this)
When there are multiple 4+ breakout sessions going on at the same time and they
are all (hopefully) relevant to you/your company? I would agree that if
someone had 20+ people from a single company
I believe this can be possible by setting allow_resize_same_host. More
details can be found here: http://www.madorn.com/resize-on-single-compute.html
. I believe the issue Dave is talking about is specifically targeting the same
host for a resize, assuming the host can support it in a
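The nova.conf option mentioned above, set on the compute nodes (a sketch):

```ini
[DEFAULT]
# Allow the scheduler to pick the instance's current host for a resize
allow_resize_same_host = True
```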
We are packaging nova in a venv so that we can run some kilo code on top of
some cent6 nodes (default python install is 2.6) (additionally we are working
on replacing the cent6 nodes with a newer os, but when you have a large number
of machines - things take time). We are using python27
We ran into this as well.
What we did is create an API, external to keystone, that we expose to our
end users via a UI. The API lets users create projects (with a
specific defined quota) and also add users with the project admin role
to the project. Those admins can add/remove users from
Do you mean outside of the standard supplying user_data when the VM boots? Or
do you mean that you (as the cloud provider) want every vm to always do x,y,z
and to leave user_data open to your end users?
Kris Lindgren
Senior Linux Systems Engineer
talked to Tom too and he said there may be a room we can use
else there is plenty of space around the dev lounge to use.
See you tomorrow.
Sam
On 29 Oct 2015, at 6:02 PM, Xav Paice <xavpa...@gmail.com> wrote:
Suits me :)
On 29 October 2015 at 16:39, Kr
I also installed magnum - but ran into problems under kilo. Also, don't use
CoreOS as it won't work as well. I am trying to get magnum working against our
OpenStack install under liberty, but am running into problems with assumptions
around what services/features Magnum expects your
The issue with ospurge is it only cleans up resources on a project that hasn't
been deleted. It doesn't detect/clean up resources tied to already-deleted
projects.
As I understand it, it is supposed to be run against a project before the project
is removed.
I wonder how many people forgot to update their cloud in the user survey. I
almost did this, I noticed it had my cloud pre-defined and almost clicked next.
Versus going in and editing the cloud to make sure the details were correct
(they weren't). If I forgot to do this – I would have been
If you are doing this on the same server you are going to have many, many issues
with oslo.* libs being incompatible between releases (not just juno -> kilo but
all releases). I don't have specific knowledge around cinder; however, on
separate machines/VMs we have run mismatched versions of
Hello all,
I noticed the other day that in our OpenStack install (Kilo), Neutron seems to be
the only project that does not log the username/tenant information on every
wsgi request. Nova/Glance/Heat all log a username and/or project on each
request. Our wsgi logs from neutron look like the
Please see inline.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
On 10/7/15, 6:12 AM, "Tim Bell" wrote:
>
>
>> -Original Message-
>> From: Daniel P. Berrange [mailto:berra...@redhat.com]
+1 on RHEL support. I have some interest in moving away from packages and
am interested in the OSAD tooling as well.
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
On 7/7/15, 3:38 PM, Abel Lopez alopg...@gmail.com wrote:
Hey
other people in the community are either already doing or
moving towards.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.
From: John Dewey <j...@dewey.ws>
Date: Wednesday, July 8, 2015 at 11:43 PM
To: Kris G. Lindgren klindg
Jeff,
I was just talking to Yahoo! about this exact same thing. We both have many
regions that we would like to manage from a single pane of glass. From
GoDaddy's side it is mainly around managing quota for projects across multiple
regions. I.e., we would like to define a high-level quota for
Hello,
As you know, much discussion has been around the naming and the URL pathing for
the ip-usages extension. We also discussed this at the neutron mid-cycle as
well. Since we are the ones that made the extension, we use the extension to
help with scheduling in our layer 3 network design.
>[1] https://review.openstack.org/#/c/187483/
>[2] https://review.openstack.org/190991
>[3] https://review.openstack.org/#/c/187433/
>
>
>
>Kris G. Lindgren wrote:
>>
>> We have been using ipsets since juno. Twice now since our kilo
>> upgrade we have had issue
Sha,
As you noticed, the vif_plug_notification does not work with cells under the
stock configuration. In icehouse/juno we simply ran with
vif_plugging_is_fatal set to false and set the timeout value to 5 or 10 seconds,
iirc.
Sam Morrison made a patch and Mathieu Gagné helped update it, to
For us, on boot we configure the system's init scripts to bring up br-ext and
plug the ethernet (or in our case bond) device into the external bridge.
You should look at your specific distro for guidance here. Redhat-based
(RHEL/CentOS/Fedora) use:
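A sketch of what the Red Hat style network-scripts for this can look like (assuming the openvswitch package's ifup/ifdown integration; device names are placeholders, not the poster's actual config):

```ini
# /etc/sysconfig/network-scripts/ifcfg-br-ext
DEVICE=br-ext
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0 (the uplink plugged into the bridge)
DEVICE=bond0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ext
ONBOOT=yes
BOOTPROTO=none
```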
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: "Kris G. Lindgren"
Date: Tuesday, September 22, 2015 at 4:21 PM
To: openstack-operators
Subject: Re: Operator Local Patches
Hello all,
Friendly reminder: If you have local patches and haven't y
If we are going to be stringent on formatting, I would also like to see us be
relatively consistent on the arguments/env variables that are needed to make a
script run. Some pull in env vars, some source an rc file, some just assume you
already sourced your rc file to start with, others accept command
We run nova-metadata on all the compute nodes, then bind 169.254.169.254 to lo
on each HV. This usually works with the standard iptables rule that
nova-metadata adds. Worst case, you just add it to the default rule set
for the compute node. Inside the images I think all you need to do
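A sketch of the per-hypervisor setup described above (the iptables rule is illustrative of the kind nova-metadata manages; port 8775 is the standard nova metadata port):

```shell
# Bind the metadata address to loopback on each hypervisor
ip addr add 169.254.169.254/32 dev lo
# Redirect metadata traffic from instances to the local nova-api-metadata port
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
    -j REDIRECT --to-ports 8775
```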
Hello all,
The LDT working group is currently trying to collect a list of patches that
people are carrying to better support Cells V1. We currently have a list of
~30 patches[1] that operators who are using cells are running to fix bugs or
fix broken functionality under cells v1. If you are
Dina,
Do we have a place to put things (an etherpad) that we are seeing performance
issues with? I know we are seeing issues with CPU load under nova-conductor as
well as some stuff with the neutron API timing out (it seems like it never
responds to the request; no log entry on the neutron side).
I believe TWC - (medberry on irc) was lamenting to me about cpusets, different
hypervisors HW configs, and unassigned vcpu's in numa nodes.
The problem is the migration does not re-define the domain.xml, specifically,
the vcpu mapping to match what makes sense on the new host. I believe the
Hello,
I was wondering if someone has a set of tools/code that allow admins to move
VMs from one tenant to another? We get asked this fairly frequently in our
internal cloud (at least once a week, more when we start going through and
cleaning up resources for people who are no longer with
Doesn't this script only solve the case of going from flatdhcp networks in
nova-network to the same dhcp/provider networks in neutron? Did anyone test to see
if it also works for doing more advanced nova-network configs?
___
Kris
In other projects the policy.json file is read on each API request, so
changes to the file take effect immediately. I was 90% sure keystone was the
same way?
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
On
<mrie...@linux.vnet.ibm.com> wrote:
>
>
>On 12/2/2015 2:52 PM, Kris G. Lindgren wrote:
>> Hello,
>>
>> I was wondering if someone has a set of tools/code to work allow admins
>> to move vm's from one tenant to another? We get asked this fairly
>>
Not sure what you can do on your vmware-backed boxes, but on the kvm compute
nodes you can run nova-api-metadata locally. We do this by binding
169.254.169.254 to loopback (technically any non-ARPing interface would work)
on each hypervisor. If I recall correctly, setting the metadata_server
We use R10k as well.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: Matt Fischer
Date: Wednesday, November 25, 2015 at 12:16 PM
To: Saverio Proto
Upstart is the startup system used by Ubuntu. It's been phased out "in favor"
of systemd.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: Adam Lawson
Date: Friday,
Cern is running ceilometer at scale with many thousands of compute nodes. I
think their blog goes into some detail about it [1], but I don’t have a direct
link to it.
[1] - http://openstack-in-production.blogspot.com/
___
Kris
You are most likely running db pools with a number of worker processes. If you
look at the MySQL connections, most of them will be idle. If that's the case,
set the db pool timeout lower and bring the pool size down. Each worker thread
opens a connection pool to the database. If you are running
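The relevant oslo.db knobs look like this (values here are purely illustrative, not recommendations):

```ini
[database]
# Connections each worker keeps open in its pool
max_pool_size = 10
# Extra connections allowed beyond the pool under load
max_overflow = 10
# Seconds to wait for a free pool connection before erroring
pool_timeout = 30
```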
erry <openst...@medberry.net>
Date: Monday, June 20, 2016 at 1:19 PM
To: "Kris G. Lindgren" <klindg...@godaddy.com>
Cc: "openstack-oper." <openstack-operators@lists.openstack.org>
Hello all,
Wondering how you guys are handling the dns search domains for your instances in
your internal cloud. Currently we are updating the network metadata template,
on each compute node, to include the dns-search-domains options. We (Josh
Harlow) are working on implementing the new network
We did this within CentOS 6 with the python 2.7 software collection. When
nova called into nova-rootwrap, rootwrap was invoked without any of the software
collection or venv stuff activated. So we had to move rootwrap to rootwrap-real
and create a shell script that did the needful (activate
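A sketch of the wrapper idea (the paths, the venv location, and the -real suffix are assumptions for illustration; /opt/rh/python27/enable is the usual SCL activation script):

```shell
#!/bin/bash
# /usr/bin/nova-rootwrap -- wrapper that activates the environment, then
# execs the real rootwrap (previously moved aside to nova-rootwrap-real)
source /opt/rh/python27/enable        # activate the python27 software collection
source /opt/nova/venv/bin/activate    # activate the venv, if one is used
exec /usr/bin/nova-rootwrap-real "$@"
```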
Seconding what Matt said. You are also going to need to spend some time at the
kilo code level to do the flavor migrations, as that was a requirement from
kilo -> liberty. I also know that you needed to be on kilo.1 (or .2) to go to
liberty, to fix a bug in NUMA node pinning (iirc).
I would
Not related to your issue, but something to keep an eye out for: you
need to keep the uid for glance synced across your glance servers when using
an nfsv3 store, since nfsv3 stores the uid & gid for the file perms. You can
run into weird issues if glance is uid/gid 501 on one glance
We noticed the same thing. It's a simple patch in nova/virt/netutils.py (we
have been running this since icehouse).
Below is our current patch for kilo.
--- a/nova/virt/netutils.py
+++ b/nova/virt/netutils.py
@@ -104,8 +104,9 @@ def get_injected_network_template(network_info,
use_ipv6=None,
GoDaddy
From: "Kris G. Lindgren" <klindg...@godaddy.com<mailto:klindg...@godaddy.com>>
Date: Tuesday, February 2, 2016 at 9:50 PM
To: TAO ZHOU <angelo...@gmail.com<mailto:angelo...@gmail.com>>, OpenStack
Operations Mailing List
<openstack-operators@lists.ope
To follow up on the relay idea. In our implementation we have looked at trying
to enable ip_helper on the switches to forward dhcp to a set of defined neutron
dhcp servers. The issue is that this turns the dhcp requests from a broadcast
packet to a unicast packet. With the default way
This doesn't answer your specific question. However there are two projects out
there that are specifically for cleaning up projects and everything associated
with them for removal. They are:
The coda project: https://github.com/openstack/osops-coda
Which given a tenant ID will cleanup all
In the past we have had issues with glance terminating SSL and downloads
either not completing or being corrupted. If you are having glance terminate
SSL, moving SSL termination to haproxy and running glance as non-SSL
fixed that issue for us.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
On 2/19/16, 10:07 AM, "Matt Riedemann" wrote:
>There is a long contentious dev thread going on here [1] about how Nova
>should handle the
We run cells, but when we reached about 250 HVs in a cell we needed to add
another cell API node (went from 2 to 3) to help with the CPU load caused by
nova-conductor. Nova-conductor was/is constantly crushing the CPU on those
servers.
as well.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: Gustavo Randich <gustavo.rand...@gmail.com>
Date: Tuesday, March 15, 2016 at 9:38 AM
To: "Kris G. Lindgren"
To be fair, the missing update that he needed was from almost 60 days ago
(tagged on Jan 23rd).
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
On 3/29/16, 6:14 AM, "Ihar Hrachyshka" wrote:
You mean outside of the LDT filing an RFE bug with neutron to get
segmented/routed network support added to neutron complete with an etherpad of
all the ways we are using that at our companies and our use cases [1] . Or
where we (GoDaddy) came to the neutron Mid-cycle in Fort Collins to
that has come from LDT, alone, in neutron.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
On 4/11/16, 9:58 AM, "Sean M. Collins" <s...@coreitpro.com> wrote:
>Kris G. Lindgren wrote:
>> You mean
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: Christopher Hull <chrishul...@gmail.com>
Date: Saturday, March 26, 2016 at 11:06 AM
To: "Kris G. Lindgren" <klindg...@godaddy.com>
I believe some Redhat people hang out in #openstack-rpm-packaging. But
per https://www.rdoproject.org/community/ their main points of contact are:
#rdo: Discussion around RDO in general
#rdo-puppet: Discussion around deploying RDO with Packstack and its puppet
modules
#openstack:
Cern actually did a pretty good write-up of this:
http://openstack-in-production.blogspot.com/2014/07/openstack-plays-tetris-stacking-and.html
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: Adam Lawson
We would love to have something like that as well.
However, to do it in openstack would mean that something would have to
gather/monitor the health of the HVs and not only disable new provisions but
kick off/monitor the migrations off the host and onto the newly chosen
destinations. Also, due
I would be curious if specifying the cpu type would actually restrict
performance. As far as I know, this only restricts the cpu features presented
to a vm. You can present a vm that has the cpu instruction sets of a Pentium 3
– but it runs and is as performant as a single core on a 2.8ghz
This spec/feature has already been worked on and is committed:
https://review.openstack.org/#/q/topic:bp/os-instance-actions-read-deleted-instances
It landed in mitaka.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From:
Do you have rabbitmq/oslo.messaging heartbeats enabled?
If you aren't using heartbeats, it will take a long time for the nova-compute
agent to figure out that it's actually no longer attached to anything.
Heartbeat does periodic checks against rabbitmq and will catch this state and
reconnect.
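The oslo.messaging settings in question (a sketch; defaults vary by release):

```ini
[oslo_messaging_rabbit]
# Seconds after which an unanswered heartbeat marks the connection dead
heartbeat_timeout_threshold = 60
# Heartbeats are checked heartbeat_rate times per timeout interval
heartbeat_rate = 2
```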
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: "Ajay Kalambur (akalambu)" <akala...@cisco.com<mailto:akala...@cisco.com>>
Date: Thursday, April 21, 2016 at 12:51 PM
To: "Kris G. Lindgren" <
We have been using this since juno for glance to do healthchecks against glance
from haproxy. It's worked pretty well for the most part.
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
From: Andy Botting
Make sure that the bridges are being created (one bridge per vm); they should be
named close to the vm tap device name. Then make sure that you have the bridge
nf-call-* sysctls enabled:
http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
Under hybrid mode, what happens is a linux
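The sysctls referenced by that page, which make bridged traffic visible to iptables (a sketch of a drop-in sysctl file):

```ini
# /etc/sysctl.d/bridge.conf -- let netfilter see traffic crossing the bridges
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
```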
Nova has a config setting for the maximum number of results to be returned by a
single call. You can bump that up so that you can do a nova list --all-tenants
and still see everything. However, if I am reading the below correctly,
I didn't realize that --limit -1 apparently bypasses
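The nova.conf option being described (the value is illustrative):

```ini
[DEFAULT]
# Maximum number of items a single API request will return
osapi_max_limit = 1000
```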