Hi Edgar,
that's the crazy thing: all the GRE tunnels are up, I can see them in
Open vSwitch, and I can also see that some OpenFlow rules are applied.
I've created VMs on every hypervisor (including the controller, as it's a
test install) on network1 (192.168.1.0); every VM (and that is
On 01/15/2015 06:22 AM, Geo Varghese wrote:
Hi Jay/Abel,
Thanks for your help.
Just fixed the issue by changing the following line in nova.conf
cinder_catalog_info=volumev2:cinderv2:publicURL
to
cinder_catalog_info=volume:cinder:publicURL
Now the attachment completes successfully.
Do you guys know how this
Alex,
Did you follow the networking recommendations:
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html
It will help you map out your own topology and walk through a packet trace to
find the issue.
Make sure all tunnels are established between your three nodes.
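If you want to script that tunnel check, here is a minimal sketch. The sample `ovs-vsctl show` output and peer addresses below are invented, but the `type: gre` and `remote_ip` fields match what OVS prints for GRE ports:

```python
import re

# Hypothetical excerpt of `ovs-vsctl show` output for the tunnel bridge.
SAMPLE = """
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.1", out_key=flow, remote_ip="10.0.0.2"}
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.1", out_key=flow, remote_ip="10.0.0.3"}
"""

def gre_remote_ips(ovs_show_output):
    """Return the set of remote_ip values for GRE ports in `ovs-vsctl show` output."""
    remotes = set()
    for block in re.split(r"\n\s+Port ", ovs_show_output):
        if "type: gre" in block:
            m = re.search(r'remote_ip="([^"]+)"', block)
            if m:
                remotes.add(m.group(1))
    return remotes

# Assumed addresses of the other two nodes in a three-node setup.
expected_peers = {"10.0.0.2", "10.0.0.3"}
missing = expected_peers - gre_remote_ips(SAMPLE)
print("missing tunnels:", missing or "none")
```

In real use you would feed it `subprocess.check_output(["ovs-vsctl", "show"])` from each node and compare against the peer list you expect.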
I know we've been working on that on our commercial product side at Big
Switch with an analyzer... the issue I think you are going to run into is
getting insight into upstream network info from your top-of-rack and spine
switches.
Setting up uniform access to OVS stats in the API or in an
We did have an issue using celery on an internal application that we wrote -
but I believe it was fixed after much failover testing and code changes. We
also use logstash via rabbitmq and haven't noticed any issues there either.
So this seems to be just openstack/oslo related.
We have tried
False alarm, after more tests the issue persisted, so I switched to backup
mode in the other haproxy nodes and now everything works as expected.
Thanks
On 15/01/2015 12:13, Pedro Sousa pgso...@gmail.com wrote:
Hi all,
the culprit was haproxy; I had "option httpchk" enabled, and when I disabled this
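For reference, a sketch of the relevant haproxy.cfg pieces (the listen block, server names, and addresses are made up): the health-check line that was disabled, and the `backup` keyword, which makes a server receive traffic only when the non-backup servers are down:

```
# /etc/haproxy/haproxy.cfg (sketch)
listen keystone_api
    bind 192.168.0.10:5000
    # option httpchk          <- the check that was causing the flapping
    server ctl1 192.168.0.11:5000 check
    server ctl2 192.168.0.12:5000 check backup
    server ctl3 192.168.0.13:5000 check backup
```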
On Thu, Jan 15, 2015 at 2:19 PM, Kris G. Lindgren klindg...@godaddy.com
wrote:
Is the fact that neutron security groups don't provide the same level
of isolation as nova security groups on your radar?
Specifically talking about:
https://bugs.launchpad.net/neutron/+bug/1274034
That
Matt, can you please send the link for the wiki page?
On Thu, Jan 15, 2015 at 7:17 AM, Matt Griffin matt.grif...@percona.com
wrote:
Just a reminder that we're going to meet today (and every Thursday) from
3:00-3:30pm US Central.
Like last time, let's chat in #openstack-haguide on freenode.
A
During the Atlanta ops meeting this topic came up, and I specifically mentioned
adding a no-op or healthcheck ping to the RabbitMQ stuff in both nova and
neutron. The devs in the room looked at me like I was crazy, but it was so
that we could catch exactly the issues you described. I am
On 01/16/2015 09:19 AM, Kris G. Lindgren wrote:
Is the fact that neutron security groups don't provide the same level of
isolation as nova security groups on your radar?
Specifically talking about: https://bugs.launchpad.net/neutron/+bug/1274034
I am sure there are a few other
Here is the bug I’ve been tracking related to this for a while. I haven’t
really kept up to speed with it, so I don’t know the current status.
https://bugs.launchpad.net/nova/+bug/856764
From: Kris Lindgren klindg...@godaddy.com
Date: Thursday, January 15, 2015 at
Hello everyone.
One more thing, in the light of small OpenStack installs.
I really dislike the triple network load caused by the current Glance snapshot
operations. When the compute node takes a snapshot, it works with the files
locally, then it sends them to glance-api, and (if the Glance API is backed by
Swift), Glance sends
That specific bottleneck can be solved by running Glance on Ceph and
running ephemeral instances on Ceph as well; snapshots become a quick backend
operation then. But then you've built your installation on a house of cards.
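For what it's worth, the Ceph-backed setup described here usually boils down to config along these lines. This is a sketch: the pool and user names are site-specific assumptions, and the option names are from the Juno-era glance_store and nova libvirt driver, so check your release's docs:

```
# /etc/glance/glance-api.conf -- store images in Ceph RBD
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# /etc/nova/nova.conf -- put ephemeral disks on the same cluster
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
```

With both images and ephemeral disks in RBD, a snapshot can be a backend clone instead of a download/re-upload through the compute node.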
On Thursday, January 15, 2015, George Shuklin george.shuk...@gmail.com
wrote:
I've found histograms to be pretty useful in figuring out patterns over
sizable time deltas... and anomaly detection there can highlight stuff you
might want to check out (i.e., raise the alert condition on that device).
Here's an example of a histogram I did many many moons ago to track disk sizes from
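Along the same lines, here is a toy version of that disk-size histogram (the sample sizes and the bucket width are invented):

```python
from collections import Counter

def size_histogram(sizes_gb, bucket_gb=100):
    """Bucket disk sizes into fixed-width bins for a quick text histogram."""
    return Counter((size // bucket_gb) * bucket_gb for size in sizes_gb)

# Made-up sample of per-instance disk sizes in GB.
sizes = [20, 40, 80, 120, 160, 900, 950, 80, 40]

hist = size_histogram(sizes)
for bucket in sorted(hist):
    print(f"{bucket:>4}-{bucket + 99} GB | {'#' * hist[bucket]}")
```

An outlier bucket (here, the 900 GB bin) standing apart from the main cluster is the kind of thing you would raise an alert on.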
If you are using HA queues, use RabbitMQ version 3.3.0 or later. There was a
change in that version where consumption on queues was automatically
re-enabled when a master election for a queue happened. Previous versions only
informed clients that they had to re-consume on a queue. It was the clients
One good topic to try to pin down at the Ops meetup would be how we could do
the flavour/aggregate/project/hypervisor mappings. We've got a local patch for
some of this functionality, but it has not been possible to get the right way to do it agreed
On 01/14/2015 01:06 PM, matt wrote:
Hey Mike!
Thanks for this info. Super helpful to me at least. I am very interested
in hearing more about nova-network to neutron migrations.
-Matt
Hello Matt:
Please start attending the weekly meetings:
We've had a lot of issues with Icehouse related to RabbitMQ. Basically, the
change from openstack.rpc to oslo.messaging broke things. These are now
fixed in oslo.messaging version 1.5.1; there is still an issue with heartbeats,
and that patch is making its way through the review process now.
Hello everyone,
I want to set up Havana. Does anyone have an installation guide for it?
Thanks
--
Thanks &amp; regards,
Anwar M. Durrani
+91-8605010721
http://in.linkedin.com/pub/anwar-durrani/20/b55/60b
___
OpenStack-operators mailing list
Thanks Edgar for the help. I have a question about the following section:
-
Edit /etc/keystone/keystone.conf:
vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token=ADMIN
log_dir=/var/log/keystone
[database]
connection = mysql://keystone:password@controller/keystone
Is the fact that neutron security groups don't provide the same level of
isolation as nova security groups on your radar?
Specifically talking about: https://bugs.launchpad.net/neutron/+bug/1274034
I am sure there are a few other things that nova is doing that neutron
currently is not.