Thanks, will try that ASAP.
Regards,
Pranav
On Tue, Mar 19, 2013 at 8:53 PM, bruno sendas bsen...@gmail.com wrote:
Hi,
I had tried every solution posted in the OpenStack archives and it simply
wouldn't work.
Until I had the "wrong" idea and installed nova-api-metadata on the
controller
Connection to 10.3.1.25 failed.
Do you know what this IP is?
On Wed, Mar 20, 2013 at 1:06 PM, Arun Fera fer...@gmail.com wrote:
The stack.sh file runs without errors and only when it reaches the lines
create_keystone_accounts
++ keystone tenant-create --name admin
the error starts like:
What are the api/registry log output when you get this error? Enable
verbose and debug modes in both services.
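For reference, verbose and debug are [DEFAULT]-section flags in both Glance services; a sketch (the /etc/glance paths are the usual packaged locations, verify on your install):

```ini
# /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf
[DEFAULT]
verbose = True
debug = True
```

Restart glance-api and glance-registry after the change so the new log level takes effect.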
---
JuanFra
2013/3/16 杨峰 hoking.y...@gmail.com
No, I did not follow devstack install, I am installing folsom from three
new virtual machines.
I had checked those conf/ini files,
Hello everyone,
Next projects to publish a release candidate in preparation for the
Grizzly release are OpenStack Image Service (Glance) and OpenStack Object
Storage (Swift). The RC1s are available for download at:
https://launchpad.net/glance/grizzly/grizzly-rc1
It is the IP of my system.
On Wed, Mar 20, 2013 at 1:41 PM, Gareth academicgar...@gmail.com wrote:
Connection to 10.3.1.25 failed.
Do you know what this IP is?
On Wed, Mar 20, 2013 at 1:06 PM, Arun Fera fer...@gmail.com wrote:
The stack.sh file runs without errors and only when it reaches the lines
Hello everyone,
Hot on the heels of Glance and Swift, OpenStack Dashboard (codenamed
Horizon) also just published its first Grizzly release candidate. It is
available for download at:
https://launchpad.net/horizon/grizzly/grizzly-rc1
Congrats to Gabriel and all the Horizon team!
Unless
Hi devs,
we are using a backend iSCSI provider (NetApp) which maps
OpenStack volumes to iSCSI LUNs. This mapping is not static and
changes over time. For example, when a volume is detached, its
LUN id becomes unused. After a while a _different_ volume may get the
same LUN id, as NetApp
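A toy sketch (not NetApp or Cinder code; all names here are hypothetical) of why recycled LUN ids make a cached, never-invalidated connection_info dangerous:

```python
# Hypothetical illustration: if LUN ids are recycled after detach, a stale
# cached mapping can silently resolve to a *different* volume.

class ToyIscsiBackend:
    """Hands out the lowest free LUN id, like the recycling behaviour described."""
    def __init__(self):
        self.lun_to_volume = {}

    def attach(self, volume_id):
        lun = 0
        while lun in self.lun_to_volume:   # pick the lowest unused LUN id
            lun += 1
        self.lun_to_volume[lun] = volume_id
        return lun

    def detach(self, lun):
        del self.lun_to_volume[lun]        # LUN id becomes reusable

backend = ToyIscsiBackend()
lun_a = backend.attach("volume-A")         # volume-A gets LUN 0
stale_connection_info = {"lun": lun_a}     # cached and never invalidated

backend.detach(lun_a)                      # volume-A detached, LUN 0 freed
backend.attach("volume-B")                 # volume-B now reuses LUN 0

# The stale cache now resolves to the wrong volume:
print(backend.lun_to_volume[stale_connection_info["lun"]])  # volume-B, not volume-A
```

This is exactly why wiping connection_info on disconnect, as proposed later in the thread, closes the window.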
Hi,
This guide is tested and good for similar setups in Virtual Box
https://github.com/dguitarbite/OpenStack-Folsom-VM-SandBox-Guide
Regards,
Pranav
On Wed, Mar 20, 2013 at 8:29 AM, Dolph Mathews dolph.math...@gmail.comwrote:
Make sure that the certs created by pki_setup are readable by the
On 20/03/2013 13:24, Mohammed Amine SAYA wrote:
RuntimeError:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf',
'ovs-vsctl', '--', '--may-exist', 'add-port', 'br-ex', 'qg-32dd6c6b-b6', '--',
'set', 'Interface', 'qg-32dd6c6b-b6', 'type=internal', '--', 'set',
'Interface',
Hi Sylvain,
Thanks for your answer.
No, I am not. I am actually logged in on the controller machine. I start the
l3-agent with sudo /etc/init.d/quantum-l3-agent start.
But I took a look at /var/log/quantum/dhcp-agent.log and I got this : (see
last log)
Like l3-agent, it complains about some
You are having the exact same issue: sudo is complaining about being
executed without a tty.
Could you please take a look at the various links I provided to you and
check ?
Basically, you can quickly disable requiretty on sudoers and retry, to
check if that fixes the problem.
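One quick way to do that check (the file name is just an example; keep the change in /etc/sudoers.d rather than editing /etc/sudoers directly, and always edit via visudo):

```
# visudo -f /etc/sudoers.d/99-no-requiretty
# Exempt the OpenStack service accounts from requiretty:
Defaults:quantum !requiretty
Defaults:cinder  !requiretty
Defaults:nova    !requiretty
```

This only disables the tty requirement for the listed service accounts instead of globally.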
Which is your
Hi Sylvain,
Yes, I managed to get l3-agent up and running. You were right, it was a
sudoers problem.
I didn't notice the files for cinder, quantum and nova in /etc/sudoers.d.
I still have an issue with dhcp-agent because a VM data IP address was
already taken. But
I think I can manage to solve
Hi all,
I am following the steps listed in
http://docs.openstack.org/folsom/openstack-network/admin/content/l3_workflow.html
The l3-agent.log fills with messages like this:
http://ix.io/4Oy
I am using a gre network type, and
nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
with
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
OpenStack Security Advisory: 2013-009
CVE: CVE-2013-1865
Date: March 20, 2013
Title: Keystone PKI tokens online validation bypasses revocation check
Reporter: Guang Yee (HP)
Products: Keystone
Affects: Folsom
Description:
Guang Yee from HP reported
Can someone give a pointer to how one goes about adding a new panel to an
existing dashboard using overrides.py?
I know my panel is working because if I hardcode it into an existing
dashboard.py file, it is found and displayed. I'd prefer to put it in
overrides.py instead and am wondering how
Hi,
As per https://bugs.launchpad.net/quantum/+bug/1155050 and also other
literature, I do see doc alerts saying that Quantum L3 and DHCP agents
must be on different hosts.
Let me be honest, I successfully installed and configured both on the
same physical machine, using GRE tunnels and
Hello Sylvain,
Same here, I have grizzly on a single node and it works fine. Using
linuxbridge plugin with vlan. So far, so good. If things break I'll let you
know.
I read you were able to have floating ip's too. May I ask if you could send
me the steps you followed to create and assign floating
On 03/20/2013 06:16 PM, Sylvain Bauza wrote:
Hi,
As per https://bugs.launchpad.net/quantum/+bug/1155050 and also other
literature, I do see doc alerts saying that Quantum L3 and DHCP
agents must be on different hosts.
Let me be honest, I successfully installed and configured both on the
There's a couple of changes that you need to make...
First, edit the overrides.py file: (e.g., if we wanted to add the panel to the
admin dashboard so this uses the admin dashboard slug: 'admin')
import horizon
from path_to_module.panel import YourNewPanelClass

admin_dashboard = horizon.get_dashboard('admin')
admin_dashboard.register(YourNewPanelClass)
Hello everyone,
Almost there... OpenStack Compute (codenamed Nova) just published its
first Grizzly release candidate. It is available for download at:
https://launchpad.net/nova/grizzly/grizzly-rc1
Congrats to all the Nova devs, who fixed 201 bugs (!) in 4 weeks.
Unless release-critical
On Mar 20, 2013, at 3:39 AM, Brano Zarnovican zarnovi...@gmail.com wrote:
Hi devs,
we are using backend iSCSI provider (Netapp) which is mapping
Openstack volumes to iSCSI LUNs. This mapping is not static and
changes over time. For example when the volume is detached then his
LUN id
That's not working for me.
My module is installed in
/usr/lib/python2.7/dist-packages/horizon/dashboards/settings as 'ec2list'; it
is on the Python path, so that's not the issue.
overrides.py looks like this:
import horizon
import logging
settings = horizon.get_dashboard('settings')
But you should be registering the Panel like
settings.register(EC2ListPanel)
or settings.register(ec2list.EC2ListPanel)
not ec2list
-Dave
-Original Message-
From: Wyllys Ingersoll [mailto:wyllys.ingers...@evault.com]
Sent: Wednesday, March 20, 2013 11:04 AM
To: Lyle, David (Cloud
Neither of those works.
When I use settings.register(ec2list.EC2ListPanel) I get this error:
Error registering panel: 'module' object has no attribute 'EC2ListPanel'
If I just use: settings.register(EC2ListPanel), I get the same sort of error:
Error registering panel: name 'EC2ListPanel' is
OK, I figured it out...
The import statement needs to look like this:
from ec2list.panel import EC2ListPanel
I was just using from ec2list import EC2ListPanel which was insufficient
since it has an empty __init__.py
adding the .panel (which you suggested in your original email, but I didn't
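The root cause is plain Python import semantics, not Horizon: with an empty __init__.py, names defined in a submodule are not visible at package level. A standalone reproduction (package name borrowed from the thread, built in a temp dir so nothing real is touched):

```python
import os
import sys
import tempfile

# Build a throwaway package that mirrors the thread's layout:
#   ec2list/__init__.py   (empty)
#   ec2list/panel.py      (defines EC2ListPanel)
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "ec2list")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "panel.py"), "w") as f:
    f.write("class EC2ListPanel(object):\n    pass\n")
sys.path.insert(0, tmp)   # assumes no other 'ec2list' package shadows this one

package_level_failed = False
try:
    from ec2list import EC2ListPanel    # fails: __init__.py is empty
except ImportError:
    package_level_failed = True

from ec2list.panel import EC2ListPanel  # works: import from the submodule
print(EC2ListPanel.__name__)
```

Re-exporting the class from __init__.py (`from ec2list.panel import EC2ListPanel` inside it) would also make the shorter form work.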
On Wed, Mar 20, 2013 at 5:06 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
2) Wipe out connection_info after disconnect. At least for the NetApp
provider it makes no sense to retain info which is no longer valid
anyway.
This seems reasonable. In fact, the whole block_device_mapping item
Guys,
This problem still persists... I tried everything...
Here is more error message from dhcp-agent.log:
http://paste.openstack.org/show/34167/
I really need help to fix this... It is a fresh installation of Grizzly
G3+RC1 from PPA, Ubuntu 12.04.2 64 bits...
Tks!
Thiago
On 19 March
On Mar 20, 2013, at 11:20 AM, Brano Zarnovican zarnovi...@gmail.com wrote:
On Wed, Mar 20, 2013 at 5:06 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
2) Wipeout connection_info after disconnect. At least for Netapp
provider it makes no sense to retain the info which is no longer valid
Leandro!
My /etc/sudoers:
---
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults
Guys,
The nova-novncproxy `1:2013.1+git201303201334~precise-0ubuntu1' still
conflicts with `novnc_2012.1~e3+dfsg+1-2_amd64.deb'.
I'm trying Grizzly from PPA on top of Ubuntu 12.04.2.
Look (trying to install novnc):
---
Unpacking novnc (from .../novnc_2012.1~e3+dfsg+1-2_amd64.deb) ...
dpkg:
Please open up a bug in launchpad.
Thanks
chuck
On 13-03-20 03:13 PM, Martinx - ジェームズ wrote:
Guys,
The nova-novncproxy `1:2013.1+git201303201334~precise-0ubuntu1' still
conflicts with `novnc_2012.1~e3+dfsg+1-2_amd64.deb'.
I'm trying Grizzly from PPA on top of Ubuntu 12.04.2.
Look (trying
What about adding:
Defaults:ALL !requiretty
in /etc/sudoers?
Just to check
Cheers
On Wed, Mar 20, 2013 at 3:54 PM, Martinx - ジェームズ
thiagocmarti...@gmail.comwrote:
Leandro!
My /etc/sudoers:
---
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider
I'm doing a Folsom deployment with FlatDHCP (not multihost).
When I try to boot a quantal image, the instance doesn't pick up the DHCP
lease. I've confirmed that dnsmasq is sending out the DHCPOFFER, and I can
see by tcpdump on the compute host that the DHCP packets are making it to
the vnet0
On Wed, Mar 20, 2013 at 3:51 PM, Lorin Hochstein
lo...@nimbisservices.comwrote:
I'm doing a Folsom deployment with FlatDHCP (not multihost).
When I try to boot a quantal image, the instance doesn't pick up the DHCP
lease. I've confirmed that dnsmasq is sending out the DHCPOFFER, and I can
On Wed, Mar 20, 2013 at 4:15 PM, Nathanael Burton
nathanael.i.bur...@gmail.com wrote:
On Wed, Mar 20, 2013 at 3:51 PM, Lorin Hochstein lo...@nimbisservices.com
wrote:
I'm doing a Folsom deployment with FlatDHCP (not multihost).
When I try to boot a quantal image, the instance doesn't pick
On 03/20/2013 04:43 PM, Lorin Hochstein wrote:
iptables -D POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM
--checksum-fill
Are you *sure* this rule was applied to the traffic in question? It
really sounds like you were hitting this issue ...
--
Russell Bryant
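For context, the rule quoted above (shown with -D, the delete form) is normally installed as follows, so outgoing DHCP replies get a valid UDP checksum despite virtio checksum offload. A sketch to try, not a confirmed fix for this report; run as root:

```
# Fill in UDP checksums on outgoing DHCP offers (client port 68)
iptables -t mangle -A POSTROUTING -p udp --dport 68 -j CHECKSUM --checksum-fill

# Verify the rule is present and its packet counters are increasing
iptables -t mangle -L POSTROUTING -v -n
```

If the counters stay at zero while the guest requests a lease, the rule is not matching the traffic in question, which is what the reply above is probing at.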
Hi!
I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
guide I wrote:
Ultimate OpenStack Grizzly Guide:
https://gist.github.com/tmartinx/d36536b7b62a48f859c2
It covers:
* Ubuntu 12.04.2
* Basic Ubuntu setup
* KVM
* OpenvSwitch
* Name Resolution for OpenStack
1 problem fixed with:
visudo
---
quantum ALL=NOPASSWD: ALL
cinder ALL=NOPASSWD: ALL
nova ALL=NOPASSWD: ALL
---
Guide updated...
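Note that NOPASSWD: ALL for these three accounts is much broader than the packaged defaults: the distribution normally ships per-service files in /etc/sudoers.d that only allow the rootwrap command, along these lines (file name and exact paths assumed; check your package's copy):

```
# /etc/sudoers.d/quantum_sudoers -- shape of the packaged rule, verify locally
quantum ALL = (root) NOPASSWD: /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf *
```

If those per-service files are present and readable, the blanket NOPASSWD: ALL entries above should not be necessary.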
On 20 March 2013 19:51, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
Hi!
I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
guide I
Guys,
Out of nowhere, my Dashboard (1:2013.1+git201303201651~precise-0ubuntu1)
gives an Internal Server Error.
Apache error.log contains:
http://paste.openstack.org/show/34196/
I'm running:
cd /usr/share/openstack-dashboard
python manage.py compress
service apache2 restart
...but the
On Wed, Mar 20, 2013 at 3:51 PM, Martinx - ジェームズ
thiagocmarti...@gmail.comwrote:
Hi!
I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
guide I wrote:
Ultimate OpenStack Grizzly Guide:
https://gist.github.com/tmartinx/d36536b7b62a48f859c2
It covers:
* Ubuntu
On Tue, Mar 19, 2013 at 9:02 AM, Markku Tavasti markku.tava...@cybercom.com
wrote:
Hi!
We are trying to create a setup where one OpenStack cluster is connected to
many existing networks. Networks are each assigned to some specific
customer, and every network can have a different IP range
if you want private networks, but also to give VMs public IPs, you will
want to create one or more private networks + subnets, create a router,
uplink each subnet to the router, create an external network + subnet using
your public IPs, and then allocate a floating ip for each VM that needs a
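The steps above, sketched with the Folsom/Grizzly quantum CLI (all names and CIDRs are placeholders; flags can vary slightly between client versions):

```
# private tenant network + subnet
quantum net-create private-net
quantum subnet-create --name private-subnet private-net 10.0.0.0/24

# router, uplinked to the private subnet
quantum router-create router1
quantum router-interface-add router1 private-subnet

# external network + subnet carved from your public range
quantum net-create ext-net --router:external=True
quantum subnet-create --name ext-subnet --disable-dhcp ext-net 203.0.113.0/24
quantum router-gateway-set router1 ext-net

# floating IP for a VM (associate via the VM port's id)
quantum floatingip-create ext-net
quantum floatingip-associate FLOATINGIP_ID VM_PORT_ID
```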
Hi,
I have a port that has two ip addresses on different subnets for the same
network. The network is a flat provider network and if I try to boot an
instance using this port I get the following error in my nova-compute
logs:
2013-03-20 22:03:15 45647 TRACE nova.openstack.common.rpc.amqp
Oops!
I just did:
aptitude purge openstack-dashboard-ubuntu-theme
#2013.1+git201303201651~precise-0ubuntu1
...and the Internal Server Error message disappeared... But the Dashboard
still doesn't work...
A new error page appears now:
---
Something went wrong!
An unexpected error has occurred. Try
Please remember the bug link I provided in another email. It seems to be a
packaging bug. The Ubuntu theme with the COMPRESS section disabled [per the
bug response] works.
I'm currently installing RC1 bits on Raring. I'll post back the experience
with dashboard.
On Thu, Mar 21, 2013 at 10:51 AM,
Title: precise_grizzly_glance_trunk
General Information: BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_glance_trunk/183/
Project: precise_grizzly_glance_trunk
Date of build: Wed, 20 Mar 2013 03:35:01 -0400
Build duration: 2 hr 39 min
Build cause: Started by user James Page
Built
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help :
Title: quantal_folsom_keystone_stable
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/quantal_folsom_keystone_stable/105/
Project: quantal_folsom_keystone_stable
Date of build: Wed, 20 Mar 2013 12:01:35 -0400
Build duration: 2 min 18 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80
Changes: validate from backend (bug 1129713) by dolph.mathews (edits tests/test_service.py, keystone/service.py)
Console Output: [...truncated 5877 lines...] Finished at 20130320-1236. Build needed 00:02:35
Title: precise_folsom_keystone_stable
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_keystone_stable/93/
Project: precise_folsom_keystone_stable
Date of build: Wed, 20 Mar 2013 12:01:33 -0400
Build duration: 1 hr 26 min
Build cause: Started by an SCM change
Built at 20130320
Title: precise_grizzly_nova_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/852/
Project: precise_grizzly_nova_trunk
Date of build: Wed, 20 Mar 2013 13:28:36 -0400
Build duration: 3 min 52 sec
Build cause: Started by an SCM change
Built at 20130320-1333. Build needed 00:00:42, 2492k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d
at 20130320-1334. Build needed 00:00:36, 2364k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d
Title: precise_grizzly_nova_trunk
General Information: BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/853/
Project: precise_grizzly_nova_trunk
Date of build: Wed, 20 Mar 2013 13:32:34 -0400
Build duration: 12 min
Build cause: Started by an SCM change
Built
Title: precise_grizzly_quantum_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/480/
Project: precise_grizzly_quantum_trunk
Date of build: Wed, 20 Mar 2013 14:31:30 -0400
Build duration: 5 min 16 sec
Build cause: Started by an SCM change
Built
Title: precise_grizzly_quantum_trunk
General Information: BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/481/
Project: precise_grizzly_quantum_trunk
Date of build: Wed, 20 Mar 2013 15:01:30 -0400
Build duration: 16 min
Build cause: Started by an SCM change
Built
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11281/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11282/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11283/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
Title: raring_grizzly_nova_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/968/
Project: raring_grizzly_nova_trunk
Date of build: Wed, 20 Mar 2013 16:56:30 -0400
Build duration: 3 min 42 sec
Build cause: Started by user James Page
Built
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11284/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
Title: test_juju
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/test_juju/44/
Project: test_juju
Date of build: Wed, 20 Mar 2013 17:26:48 -0400
Build duration: 12 sec
Build cause: Started by command line by jenkins
Built on: master
Health Report: Build stability: 1
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11286/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
Title: raring_grizzly_nova_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/969/
Project: raring_grizzly_nova_trunk
Date of build: Wed, 20 Mar 2013 17:18:28 -0400
Build duration: 4 min 22 sec
Build cause: Started by user James Page
Built
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11287/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11288/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11289/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/11290/
--
Title: test_juju
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/test_juju/45/
Project: test_juju
Date of build: Wed, 20 Mar 2013 22:02:48 -0400
Build duration: 11 sec
Build cause: Started by command line by jenkins
Built on: master
Health Report: Build stability: 2