We just re-created all our machines with the new flavours and the issues went
away.
I'm sorry to say that I don't have any suggestions if you are unable to do
this, but at least you have some pointers to look at.
Cheers,
Robert van Leeuwen
Hi there,
I need to delete subnet and net and then create new ones.
I type command:
quantum net-delete xxx
quantum subnet-delete xxx
I've seen that some IPs are never released after the virtual machine is destroyed.
If you are absolutely sure they are not in use, I usually just remove them
manually.
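The delete-and-recreate steps asked about above can be sketched as follows. This is only an illustrative fragment: the names, IDs, and CIDR are placeholders, and both delete commands will fail while ports on the network are still in use.

```shell
# Delete the subnet before the network; both fail if ports still exist.
quantum subnet-delete <subnet-id>
quantum net-delete <net-id>

# Recreate them (name and CIDR are placeholders).
quantum net-create demo-net
quantum subnet-create demo-net 10.0.0.0/24
```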
On 11 jul. 2013, at 20:43, comiqadze co...@yandex.ru wrote:
Hi, thank you Robert,
Sorry, but there is another issue now:
When I'm trying to associate a floating IP according to the following link:
http://docs.openstack.org/folsom/basic-install/content/basic-install_operate.html
it
of small files and really big disks.
The issue is not related to the network but the local filesystem/disk.
When the inode cache becomes insufficient you can see terrible slowdowns.
There have been a few threads about that in this list, having a lot of memory
usually helps a bit.
Cheers,
Robert van
are only needed when you scale out one application; for regular
communication between machines you won't need it.
Cheers,
Robert van Leeuwen
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
done.
Hi Chuck,
Thanks for the heads up.
Do you, or any of the Redhat people, know if the Red Hat 6 kernel is also
recent enough (are those improvements back-ported to RHEL 6)?
Thx,
Robert van Leeuwen
-auditor and replicator running?
Cheers,
Robert van Leeuwen
,
Robert van Leeuwen
use_namespaces = False in l3-agent.ini )
you must set the router_id of the router you want to run in l3_agent.ini.
If you set the quantum server to debug you should see something referring to
this when you start the l3-agent.
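A minimal sketch of the relevant l3_agent.ini section (the UUID is a placeholder for your router's ID):

```ini
[DEFAULT]
use_namespaces = False
# Without namespaces the agent can serve only one router; point it at that router:
router_id = <uuid-of-the-router>
```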
Cheers,
Robert van Leeuwen
Oops, replied off-list.
For future reference :)
From: Shyam Goud [shyam.tod...@oneconvergence.com]
Sent: Wednesday, May 15, 2013 2:14 PM
To: Robert van Leeuwen
Subject: Re: [Openstack-operators] Setting default dns server entry for VM
Bingoo.. that did
I would like to move all instances into /home/storage/nova/instances.
The following value in nova.conf specifies where the images are located:
state_path
Of course, I would shut down all instances and copy everything to the new
location. But how do I convince Nova that things have changed?
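A sketch of the nova.conf change, using the target path from the question (instances live under <state_path>/instances by default; the instances_path option is an assumption that depends on your release):

```ini
# nova.conf: move Nova's state (including the instances directory)
state_path = /home/storage/nova
# Or, on releases that support it, point only the instances directory elsewhere:
# instances_path = /home/storage/nova/instances
```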
for
openvswitch.
Cheers,
Robert van Leeuwen
from the virtual machines perspective?
Maybe I am missing something, or is this not even possible?
I'd rather not use Pacemaker for HA because it adds complexity and is another
thing that can fail.
Thanks,
Robert van Leeuwen
All, I am trying to setup Folsom Quantum on CentOS and ran into issue
building the openvswitch rpm.
Does anyone have a built rpm or steps to build it?
Hi,
I will mail you our srpm / rpm for openvswitch.
Cheers,
Robert van Leeuwen
do not want to build it yourself let me know.
I'll send the RPM to you. (If there is wide interest in this RPM I'll see if
I can make some time to set up a public repo here.)
Cheers,
Robert van Leeuwen
https://bugs.launchpad.net/quantum/+bug/1091605
Cheers,
Robert van Leeuwen
localhost you will also need to connect
to localhost in the keystone.conf.
Cheers,
Robert van Leeuwen
GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
what it does?
'%' is a wildcard just like *
For more info:
http://dev.mysql.com/doc/refman/5.1/en/grant.html
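To illustrate the wildcard, a hedged sketch of a more restrictive grant (the hostname is a placeholder; restricting the host is usually preferable to '%' in production):

```sql
-- Same grant, but limited to connections from one host instead of the '%' wildcard:
GRANT ALL ON keystone.* TO 'keystone'@'controller.example.com' IDENTIFIED BY 'keystone';
FLUSH PRIVILEGES;
```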
Cheers,
Robert van Leeuwen
To prevent the libvirt directory from filling up with suspended machines
I'd like to disable the suspend functionality (or change it to shutdown)
Is there any way to do this?
Found the solution: you can restrict who can do this in policy.json
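As a sketch of the policy.json approach: the fragment below restricts suspend/pause to admins, but the exact rule keys and the admin rule name vary between releases, so treat both as assumptions to check against your own policy.json.

```json
{
    "compute:suspend": "rule:admin_api",
    "compute:pause": "rule:admin_api"
}
```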
Cheers,
Robert van Leeuwen
to the dmz router
first)
Cheers,
Robert van Leeuwen
through our firewall.
Maybe not the most elegant solution but it works.
Cheers,
Robert van Leeuwen
issue I can think of is that you might get asymmetrical routing
(traffic returning from the DHCP IP instead of the L3 IP).
Not sure if you can fix that with Policy Based Routing; never tried it.
Cheers,
Robert van Leeuwen
Thanks for the reply. I have one more question.
How can we check whether the tunnel is established or not?
tcpdump can show you the GRE traffic:
tcpdump -i ethX proto gre
Cheers,
Robert
Only changing the VM MTU to 1454 does the trick ('ifconfig eth0 mtu 1454').
I think this is the same issue:
https://bugs.launchpad.net/quantum/+bug/1075336
So instead of decreasing the MTU on the physical interface you could also
increase it on the openvswitch port.
Cheers,
Robert
I thought about it, but yet not tried. Which OVS port would you
recommend to increase MTU ?
On the network node (br-ex or qg-) , or on the compute node (br-int) ?
You need to set it on the compute nodes ( int-br-ethX ) and possibly an
extra port on the routing node.
(we use a
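A sketch of the MTU change on a compute node; the interface name and value are illustrative, and the MTU you pick must be large enough to absorb the GRE encapsulation overhead:

```shell
# Raise the MTU of the OVS port carrying the GRE-encapsulated traffic:
ip link set dev int-br-eth1 mtu 1546
# Verify the new value:
ip link show int-br-eth1
```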
was messing with that setting due to a buggy custom fact.
Cheers,
Robert van Leeuwen
=488.861s, table=0, n_packets=45942, n_bytes=6350881,
idle_age=0, priority=1 actions=NORMAL
cookie=0x0, duration=486.919s, table=0, n_packets=6, n_bytes=440,
idle_age=440, priority=2,in_port=16 actions=drop
Thx,
Robert van Leeuwen
in the config file to get more info.
Cheers,
Robert van Leeuwen
can load balance it like any webpage.
The only gotcha is the vncproxy: it uses websockets, which might be
problematic for layer 7 load balancers.
If you use TCP load balancing you should be fine though.
Cheers,
Robert van Leeuwen
Hi,
I'm trying to get the access log for the swift-proxy-server working.
I'm logging to UDP and the swauth logs are getting there.
However the access logs are not created.
We are running swift 1.7.5 on Scientific Linux
proxy-server.conf
[default]
log_facility = LOG_LOCAL6
log_udp_host =
You need to add the proxy-logging filtering to the main pipeline
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken swiftauth proxy-logging
proxy-server
Thx,
That was it.
Pipeline was missing and I totally overlooked it.
Seems that putting this in the pipeline was added
to have network namespace support to be able to allow this.
If you have this you can enable it in quantum.conf: allow_overlapping_ips = True
For the specifics:
http://docs.openstack.org/trunk/openstack-network/admin/content/ch_limitations.html
Cheers,
Robert van Leeuwen
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Hello all,
I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients (each
hosted on a different machine) with 10 threads each uploading files using the
official
By stopping, do you mean halt the service (kill the process) or is it a
change in the configuration file?
Just halt the service.
According to the info below, i think the current size is 256 right?
If I format the storage partition, will that automatically clear all the
contents from the storage or do I need to clean something else as well?
Output from xfs_info:
meta-data=/dev/sda3 isize=256 agcount=4,
I see. With replication switched off during upload, does inserting into
various containers speed up the process
or is it irrelevant?
I'm not sure what your question is, but maybe this helps:
In short:
The replication daemon is walking across your files to check if any files
need to be
Allow me to rephrase.
I've read somewhere (can't remember where) that it would be faster to upload
files if they were uploaded to separate containers.
This was suggested for a standard swift installation with a certain
replication factor.
Since I'll be uploading the files with the
Hi,
I'm trying to get all logging into syslog.
I have modified the nova.conf:
use_syslog = True
syslog_log_facility = LOG_LOCAL0
However it appears for most components logging is still going to
/var/log/nova/service
So the api.log and compute.log are still going there.
There are some messages
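Assuming rsyslog on the receiving side, a sketch of routing the LOCAL0 facility configured above to its own file (the drop-in path and log file are illustrative):

```
# /etc/rsyslog.d/10-nova.conf: send everything nova logs to LOCAL0 into one file
local0.*    /var/log/nova/nova-all.log
```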
[han.sebast...@gmail.com]
Sent: Monday, January 07, 2013 10:08 AM
To: Robert van Leeuwen
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Nova (compute) and syslog
Hi,
Stupid question, did you restart compute and api service?
I don't have any problems with those flags
Hi,
I just noticed that when using glance with a swift backend the checksum is not
populated when the size is below the swift_store_large_object_size when adding
an image.
This results in an error message when downloading the image (and breaking nova
instance creation).
Looking at the
Ok thanks for confirming it,
https://bugs.launchpad.net/glance/+bug/1095356
Cheers,
Robert
From: openstack-bounces+robert.vanleeuwen=spilgames@lists.launchpad.net
[openstack-bounces+robert.vanleeuwen=spilgames@lists.launchpad.net] on
behalf of
into
Swift storage? Or will our developers have to change every single
application's code and tell them to save and retrieve BLOBs from remote Swift
servers?
Yes, the applications need to be modified.
Cheers,
Robert van Leeuwen
* dhcp_agent.ini:
[DEFAULT]
debug = False
state_path = /var/lib/quantum
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
use_namespaces = False
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
Thanks,
Robert van Leeuwen
Yes the tag setting should be in the opts file.
Ok, thanks, at least I do not have to look for a wrong configuration then.
What version of dnsmasq are you running?
dnsmasq-2.48-6.el6.x86_64
Also can you get a tcpdump of the DHCP traffic?
FYI: this is a different subnet example
This is the
Mark,
Thanks for pointing me in the right direction.
It had to do with the older dnsmasq version.
Just installed dnsmasq 2.63 and it works like a charm :)
Cheers,
Robert van Leeuwen
was not in a config file yet because now it is working
through the dashboard)
Because of this missing info the flows were not created.
Everything seems to be working now.
So finally got GRE / Openvswitch (kmod) / Quantum on Scientific Linux 6.3 up
and running in the testlab :)
Cheers,
Robert van
according to
ovs-dpctl show -s
Is there something else I am missing?
My ovs-agent config:
[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
local_ip = 10.10.10.10
integration_bridge = br-int
tunnel_bridge = br-tun
Thanks,
Robert van Leeuwen
, volume)
For debugging you can take a look at the keystone log so you can see the
contents of the created tokens.
Regards,
Robert van Leeuwen
,
Robert van Leeuwen
at what rate the kernel tries to reclaim entries.
Default is 100 (described as a fair rate); setting it to 0 will never reclaim
(and the OOM killer will probably have some fun).
See: http://www.kernel.org/doc/Documentation/sysctl/vm.txt
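The reclaim rate described above is the vm.vfs_cache_pressure sysctl; a sketch of tuning it, where the value 50 is only an example:

```shell
# Make the kernel hold on to dentry/inode caches longer (default is 100).
sysctl -w vm.vfs_cache_pressure=50
# Persist the setting across reboots:
echo "vm.vfs_cache_pressure = 50" >> /etc/sysctl.conf
```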
Cheers,
Robert van Leeuwen