Hi All,
I was able to resolve this issue by changing the MTU size within the
instances from 1500 to 1454. I got these pointers from this mail thread:
https://lists.launchpad.net/openstack/msg24050.html
Thanks and Regards
Rahul Sharma
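For others hitting the same GRE/MTU problem: rather than editing every guest by hand, the lower MTU can also be pushed to instances over DHCP. This is only a sketch; the config file path and service name below are assumptions from a typical Ubuntu/Grizzly setup, not taken from this thread.

```shell
# one-off fix inside a running instance (lost on reboot)
ip link set dev eth0 mtu 1454

# persistent fix (assumed paths): have dnsmasq advertise the MTU to every
# instance via DHCP option 26 (interface-mtu)
echo "dhcp-option-force=26,1454" >> /etc/quantum/dnsmasq-quantum.conf
# reference that file from dhcp_agent.ini:
#   dnsmasq_config_file = /etc/quantum/dnsmasq-quantum.conf
service quantum-dhcp-agent restart
```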
On Tue, May 28, 2013 at 5:00 PM, Rahul Sharma
Hi,
Is there any way to access a VM from an external network without using the
Quantum L3 agent?
--
Regards,
VeeraReddy.B
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe :
No, but the provider network extension does provide a way to do this that
might work for your use case:
http://docs.openstack.org/trunk/openstack-network/admin/content/provider_networks.html
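As a rough sketch of the provider-network approach with the OVS plugin (the physical network name physnet1, bridge br-eth1, network names, and addresses are illustrative assumptions, not taken from this thread):

```shell
# plugin config (ovs_quantum_plugin.ini), mapping the name to a bridge:
#   network_vlan_ranges = physnet1
#   bridge_mappings = physnet1:br-eth1

# create a shared flat provider network on that physical network, so
# instances get addresses reachable from the external L2 directly
quantum net-create ext-net --shared \
    --provider:network_type flat \
    --provider:physical_network physnet1
quantum subnet-create ext-net 203.0.113.0/24 --name ext-subnet
```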
On Wed, May 29, 2013 at 11:02 PM, Veera Reddy veerare...@gmail.com wrote:
Hi,
Is there any way to
Hi,
I still don't see why you want to have two NICs on the same L2 segment. We don't
allow this because we don't want to allow tenants to bridge them,
creating a loop in the network.
Aaron
On Thu, May 23, 2013 at 8:18 PM, Liu Wenmao marvel...@gmail.com wrote:
Hello:
I have a network with a
Yes, the provider network extension provides a way for that use case.
And I proposed a blueprint [1] to be able to isolate ports on the same
network/subnet.
So in your case, if you set a provider network as a public network and if
you would like to share this network between tenants, you will be able to isolate
l2
I am testing OpenStack on my college LAN. I am able to run
multiple nova-compute nodes, but there is no networking in the VMs. What is
the best possible solution to test this out? Any suggestions?
I tried creating pools with the command
nova-manage floating create --pool=nova
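For completeness, the nova-network flow usually pairs the pool creation with an IP range, then allocates and associates an address; the range and instance name here are illustrative only, not taken from this thread:

```shell
# create the pool from an unused range on the public interface
nova-manage floating create --pool=nova --ip_range=192.168.100.0/27

# allocate an address from that pool and attach it to an instance
nova floating-ip-create nova
nova add-floating-ip my-instance 192.168.100.2
```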
Rosen:
I want to implement a virtual IPS (intrusion protection system) at layer 2,
so the input interface and the output interface should be on the same
network.
For now I manually modify the packet's VLAN using the OpenFlow protocol at the two
NICs, so that the loop won't happen.
On Thu, May 30, 2013
Hi Guys,
Nova is actually developed on a shared-nothing architecture. My question
is: I have five compute nodes, each with 8 cores.
Is it possible to start an instance as a 32-core machine?
Please guide me.
-Dhanasekaran
Did I learn something today? If not, I wasted it.
Hi,
It is quite unclear to me whether it is possible *in Folsom* to have
two distinct Cinder hosts, each with an LVM backend called
cinder-volumes.
As per the doc [1], I would say the answer is no, but could you please
confirm?
If so, do you have any idea on how to trick a nearly
Hi Dhanasekaran,
It seems to me that the 'shared nothing' architecture [1] probably has
little to do with your need, as it refers to the various nova nodes, rather
than instances.
It looks like you want to start an instance which is distributed across
several nodes. Is this your goal?
Salvatore
Forwarding again with some hope for response :)
-- Forwarded message --
From: Anil Vishnoi vishnoia...@gmail.com
Date: Thu, May 30, 2013 at 3:14 AM
Subject: [Grizzly][Quantum] Floating IP is not reachable
To: openstack@lists.launchpad.net openstack@lists.launchpad.net
Hi All,
We will need to look at iptables on your network node.
If you run iptables -n -t nat --list you should see a couple of DNAT/SNAT
rules forwarding traffic for 9.126.108.127.
In any case, bear in mind that the default security group does not allow
ICMP. If you have not enabled it, it might
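Both checks can be scripted; the floating IP below is the one from this thread, and the security-group rules are the usual ones people forget to open:

```shell
# on the network node: confirm DNAT/SNAT rules exist for the floating IP
iptables -n -t nat --list | grep 9.126.108.127

# open ICMP (ping) and ssh in the default security group;
# both are blocked by default
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```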
Hi Salvatore,
Yes, starting an instance which is distributed across several nodes is
my goal.
Please guide me.
-Dhanasekaran.
Did I learn something today? If not, I wasted it.
On Thu, May 30, 2013 at 9:33 AM, Salvatore Orlando sorla...@nicira.com wrote:
Hi Dhanasekaran,
It seems
Hi all,
I think it is a bug of qpid as the RPC backend.
Other services (nova-compute, cinder-scheduler, etc.) use an eventlet thread to
run the service. They stop the service using the thread's kill() method. The last step,
rpc.cleanup(), just did nothing, because the corresponding consumer connection ran
in that thread and was killed.
Hi Josh,
I am trying to use Ceph with OpenStack (Grizzly); I have a multi-host setup.
I followed the instructions at http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.
But I cannot boot from volumes.
I
On 30/05/2013 15:25, Sylvain Bauza wrote:
Hi,
It is quite unclear to me whether it is possible *in Folsom* to
have two distinct Cinder hosts, each with an LVM backend called
cinder-volumes.
As per the doc [1], I would say the answer is no, but could you please
confirm?
If so,
Hi all,
I followed the Grizzly Multinode howto
(https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst),
but could not make Quantum operational. I apologize in advance for the
long email, but I was hoping that someone can help me
I am not familiar with impl_qpid.py, but am familiar with amqp.py and have
had problems around rpc_amqp.cleanup() and the Pool.empty() method it calls.
It was a totally different problem, but I decided to take a look at your
problem. I noticed that in impl_qpid.py the only other place a
Hi everyone,
The first milestone of the Havana development cycle, havana-1, is now
available for Keystone, Glance, Nova, Horizon, Networking, Cinder,
Ceilometer, and Heat. It contains all the new features that have been
added since the Grizzly pre-release Feature Freeze in March.
You can see the
The ceilometer team has had a few requests for help with older versions of
ceilometer or running the grizzly version of ceilometer with older versions
of other OpenStack components lately. We appreciate the level of interest
in the project but, as much as we would like to, unfortunately we are not
No, you would have to start several instances, one on each compute node, and
implement that distribution in your application.
On Thu, May 30, 2013 at 6:46 AM, Dhanasekaran Anbalagan
bugcy...@gmail.com wrote:
Hi Salvatore,
It's possible start an instance which is distributed across several
Hi Aaron,
I would like to know how the sharing of resources happens in OpenStack.
Assume there are two compute nodes, each with 4 physical cores and 16 GB
of physical RAM; would I be able to start an instance with 8 cores and
32 GB of RAM? How is this handled in OpenStack?
Hi again,
You would need a compute driver for a hypervisor supporting a distributed
virtual machine.
I have very limited knowledge of server virtualization and nova drivers,
but I don't think such a driver exists for nova.
In fact, I don't think a hypervisor which does that is generally
Hello Chris,
This may help (even though it's too late ^^;):

nova/virt/baremetal/db/sqlalchemy/migration.py

56 - from migrate import exceptions as versioning_exceptions
56 + try:
57 +     # Try the more specific path first (migrate < 0.6)
58 +     from migrate.versioning import exceptions as versioning_exceptions
This is how my iptables looks:

# iptables -n -t nat --list
Chain PREROUTING (policy ACCEPT)
target               prot opt source     destination
nova-api-PREROUTING  all  --  0.0.0.0/0  0.0.0.0/0

Chain INPUT (policy ACCEPT)
target               prot opt source     destination

Chain
On 05/30/2013 07:37 AM, Martin Mailand wrote:
Hi Josh,
I am trying to use ceph with openstack (grizzly), I have a multi host setup.
I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without
Hi,
I have plans to rent out dedicated servers as full-server-sized VMs
running on OpenStack. The servers will all feature 2-tier storage
schemes, so that customers can have both a 10k rpm SAS-based disk
and an SSD-based disk on their dedicated server.
Each server will
Doug,
Can you advise on what the plan/policy will be for Havana ?
- Will I be able to run Havana core components such as Nova with
Grizzly ceilometer ?
- Will I be able to run Havana ceilometer with Grizzly core components
(with reduced functionality compared to the
Hi Farhan and Rahul,
I think this issue would only be seen by people using the OVS plugin in a
multinode setup with GRE tunnels and doing more than simple ping and ssh
access. It seems some sites like github.com are either ignoring or not
receiving the destination unreachable - need
Hi Weiguo,
my answers are inline.
-martin
On 30.05.2013 21:20, w sun wrote:
I would suggest on nova compute host (particularly if you have
separate compute nodes),
(1) make sure rbd ls -l -p works and /etc/ceph/ceph.conf is
readable by user nova!!
yes to both
(2) make sure you can start
Hello All,
Currently, I use VNC to access the windows virtual machine deployed in
OpenStack. But this gives me a smaller view of the Windows GUI or Desktop. Is
there any way from the Horizon GUI to have an enlarged view of the Windows
Desktop?
Can I get any suggestions to connect to the
Hi Josh,
On 30.05.2013 21:17, Josh Durgin wrote:
It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the
From what I understand, it is unusual to support mixing components from
different releases like that. Am I wrong?
Doug
On Thu, May 30, 2013 at 3:29 PM, Tim Bell tim.b...@cern.ch wrote:
Doug,
Can you advise on what the plan/policy will be for Havana ?
-
Hi Josh,
I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalurl is the one used for the internal
communication of the OpenStack components and the
On 05/30/2013 01:50 PM, Martin Mailand wrote:
Hi Josh,
I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, and this IP is not reachable from
the management network.
I thought the internalurl is the one used for the internal
For Windows, you could add TCP port 3389 to your security group and enable
remote desktop access in Windows. The VNC console access in Horizon is really
intended for administrative/management access rather than production usage.
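A minimal sketch of this suggestion with the nova CLI (the group name "default" is an assumption; use whichever security group the instance runs in):

```shell
# allow inbound RDP (TCP 3389) from anywhere in the default security group
nova secgroup-add-rule default tcp 3389 3389 0.0.0.0/0
# confirm the rule is present
nova secgroup-list-rules default
```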
Brian
The difficulties of upgrading OpenStack while running production services were
one of the major pieces of feedback from the user survey at
Portland. Core components are including N/N+1 upgrades in their plans for
Havana, so big bang (and high risk of extended downtime)
upgrades are no longer the
Hi Josh,
that's working.
I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?
2. I have show_image_direct_url=True in glance-api.conf, but the
volumes are not clones of the original, which
On 05/30/2013 02:18 PM, Martin Mailand wrote:
Hi Josh,
that's working.
I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?
cinder.volume.drivers.rbd.RBDDriver
2. I have in the
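Putting the two answers from this exchange together, the resulting configuration fragments would look roughly like this (file layout assumed from a standard Grizzly install):

```ini
# cinder.conf -- non-deprecated RBD driver path
volume_driver = cinder.volume.drivers.rbd.RBDDriver

# glance-api.conf -- expose direct image URLs so volumes can be
# created as RBD clones of Glance images instead of full copies
show_image_direct_url = True
```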
Hi,
From the openstack documentation:
(http://docs.openstack.org/folsom/openstack-compute/install/apt/content/faq-about-vnc.html)
A: These values are hard-coded in a Django HTML template. To alter them, you
must edit the template file _detail_vnc.html. The location of this file will
vary
Hi Josh,
now everything is working, many thanks for your help, great work.
-martin
On 30.05.2013 23:24, Josh Durgin wrote:
I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?
On 05/30/2013 02:50 PM, Martin Mailand wrote:
Hi Josh,
now everything is working, many thanks for your help, great work.
Great! I added those settings to
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure
out in the future.
-martin
On 30.05.2013 23:24, Josh Durgin
Hi Gabriel,
The path to the file was correct, and after I changed it to 1024 * 768 I could
see the change in the VNC window size. Thanks for your suggestion; it is
working.
Regards,
Krishnaprasad
From: Staicu Gabriel [mailto:gabriel_sta...@yahoo.com]
Sent: Thursday, 30 May 2013 23:28
Hi Brian,
The Windows Server 2012 OpenStack edition comes with TCP port 3389 enabled. I
don't think we should specify it in the security groups, as this is already
taken care of by the Windows firewall.
Thanks
Krishnaprasad
From: Brian Schott [mailto:brian.sch...@nimbisservices.com]
Sent:
Hi Ray,
Thanks for your reply.
The try/except change at line 386 only solves the problem for the
cinder-scheduler and nova-compute services, which have a similar
implementation and raised an exception on stop.
However, all cinder-volume queues are removed when one of
multiple cinder-volume services stops. That is another problem.
I
I don't think the default security group (visible in Horizon or nova
secgroup-list-rules) has 3389 open, unless maybe you are running Windows Server
2012 and Hyper-V as the hypervisor. There are two layers of firewalls: the
OpenStack security groups outside the Windows guest, and the Windows
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help :
Health Report: build stability 3 out of the last 5 builds failed (score 40). Changes: "Call os.kill for each child instead of the process group" by flaper87 (edit glance/common/wsgi.py). Console output truncated at 6322 lines; finished at 20130530-0740; build needed 00:08:26, 28804k disc.
Health Report: build stability 1 out of the last 5 builds failed (score 80). Changes: "Call os.kill for each child instead of the process group" by flaper87 (edit glance/common/wsgi.py). Console output truncated at 6993 lines; finished at 20130530-0741; build needed 00:08:18, 28832k disc.
Title: precise_havana_nova_trunk
General Information: BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/252/ | Project: precise_havana_nova_trunk | Date of build: Thu, 30 May 2013 08:44:26 -0400 | Build duration: 5 min 16 sec | Build cause: started by user Chuck Short | Built
Title: precise_havana_python-swiftclient_trunk
General Information: BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_python-swiftclient_trunk/16/ | Project: precise_havana_python-swiftclient_trunk | Date of build: Thu, 30 May 2013 10:46:49 -0400 | Build duration: 3 min 35 sec | Build
Title: saucy_havana_python-novaclient_trunk
General Information: BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_python-novaclient_trunk/23/ | Project: saucy_havana_python-novaclient_trunk | Date of build: Thu, 30 May 2013 11:33:40 -0400 | Build duration: 4 min 49 sec | Build cause: Started by