Re: [Openstack] [DevStack] Does DevStack support Grizzly already?

2013-04-17 Thread Aaron Rosen
See: https://wiki.openstack.org/wiki/Quantum/LBaaS/HowToRun
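For reference, on a Grizzly-era DevStack that page boils down to enabling the Quantum LBaaS service before running stack.sh. A minimal localrc sketch — the exact service names and the Horizon flag are assumptions from my reading of that wiki page, so double-check them there:

# localrc: enable Quantum plus the LBaaS agent (service names assumed)
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-lbaas

The Load Balancer panel in the dashboard may additionally need the LBaaS flag in Horizon's local_settings.py, e.g. something like OPENSTACK_QUANTUM_NETWORK = {'enable_lb': True} (again an assumption, see the wiki page).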


On Tue, Apr 16, 2013 at 8:38 PM, HuYanrui h...@arraynetworks.com.cn wrote:

 I just installed a new devstack with git clone git://github.com/openstack-dev/devstack.git.
 But did not see anything related to load balancing in the dashboard.
 It should contain LBaaS in the Grizzly dashboard, right?

 - Original Message -
 From: Jeremy Stanley fu...@yuggoth.org
 To: OpenStack Users List openstack@lists.launchpad.net
 Sent: Tuesday, April 16, 2013 1:41 AM
 Subject: Re: [Openstack] [DevStack] Does DevStack support Grizzly already?


  On 2013-04-15 10:19:26 +0100 (+0100), Filipe Manco wrote:
  I've been testing grizzly with devstack and it works just fine.
 
  And in fact, every commit which went into Grizzly was independently
  deployed (multiple times) on DevStack via devstack-gate and
  exercised with Tempest. It's essentially what we use to perform
  integration testing, to make sure patches to one component of
  OpenStack don't result in adverse interactions with another
  component.
  --
  Jeremy Stanley
 


Re: [Openstack] quantum and centos

2013-04-17 Thread Robert van Leeuwen
 Thanks for the information. This link seems to talk about installing
 OpenStack Grizzly on Red Hat-related Linux platforms.
 What I was looking for was a Folsom Quantum server install on Red Hat/CentOS… any idea?

Yes it is possible to run Quantum on CentOS.
I've written down some tips here:
http://engineering.spilgames.com/openstack-with-open-vswitch-on-scientific-linux/

You can get the Folsom packages from the EPEL repository.
Depending on the functionality you want from Quantum (e.g. GRE tunnels) you
need to switch to the upstream openvswitch module instead of the one shipped with CentOS 6.4.
If you do not want to build it yourself, let me know and I'll send the RPM to you.
(If there is wide interest in this RPM I'll see if I can make some time to set up a public repo here.)
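For completeness, the package route is roughly the following. The package names are an assumption based on the Folsom-era EPEL packaging, so verify them against the repo before relying on this:

# CentOS 6.x: Folsom Quantum server + OVS plugin from EPEL (package names assumed)
yum install -y openstack-quantum openstack-quantum-openvswitch openvswitch
service openvswitch start
chkconfig openvswitch on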

Cheers,
Robert van Leeuwen





Re: [Openstack] OpenStack Multi Node Problems with OVS

2013-04-17 Thread Robert van Leeuwen
 Hi Folks,

 I am working on bringing up a multi node OpenStack environment with OVS.
 I have a Controller, Compute and a Gateway/Network node.
 This is running Folsom.  Most of the services are up, except that I cannot 
 ping the floating ip of the VM.

What kind of setup are you creating? Bridge_mapped networks? Private networks 
(with GRE tunnels)?

If you are setting this up with GRE, I'm missing the GRE tunnels in the
ovs-vsctl show overview.
You should have GRE tunnels to all compute nodes in the network.
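For illustration, with the OVS plugin in GRE mode the tunnel bridge normally lists one GRE port per remote compute node, roughly like this (port names and IPs below are made up, yours will differ):

    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="30.0.0.11"}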

To start troubleshooting I would first check if networking works between 
compute nodes:
Start two VMs, on different compute nodes, in the same subnet and see if they
can reach each other.
(If they do not get a DHCP IP, just set a static IP for now.)

There is a bug with rebooting the machines running the dhcp/l3-agent which might be
good to be aware of:
https://bugs.launchpad.net/quantum/+bug/1091605

Cheers,
Robert van Leeuwen


Re: [Openstack] [Glance] 404 after upgrading to grizzly

2013-04-17 Thread Razique Mahroua
Good to know man! Weird indeed though...
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 16 Apr 2013, at 11:21, Heiko Krämer i...@honeybutcher.de wrote:

All right Guys, command back. Glance wasn't apparently the "bad guy" in this case. I've
checked swift and all stored files and I found that the image files are not available.
I think something went wrong with the upgrade of swift, but all other stored files of
customers are present. It's totally crazy :)
I need only upload the images again and take snapshots. But it's very strange that I've
lost these images and glance says that all are present.
However, greetings
Heiko

On 16.04.2013 10:39, Heiko Krämer wrote:
Heyho Guys,
I've a strange issue with glance after I've upgraded from Folsom to Grizzly. All images
are stored in swift!
I see all images and the image details too, but I can't download or modify these images.
Nova-compute can't download them either.

root@api2:~# glance image-list
| ID                                   | Name           | Disk Format | Container Format | Size      | Status |
| a9d4488d-305d-44ee-aded-923a9f3e7aa2 | Cirros-Test    | qcow2       | bare             | 9761280   | active |
| b7dcf14e-4a1d-4370-86d8-7e4d2f5792f8 | default(12.04) | qcow2       | bare             | 251527168 | active |

root@api2:~# glance image-download a9d4488d-305d-44ee-aded-923a9f3e7aa2 > test.img
Request returned failure status.
404 Not Found
Swift could not find image at URI. (HTTP 404)

So I've checked, the db migrations have worked I think (example):
| id | image_id                             | value | created_at | updated_at | deleted_at | deleted |
| 25 | a9d4488d-305d-44ee-aded-923a9f3e7aa2 | swift+https://service%3Aglance:@xx:35357/v2.0/glance/a9d4488d-305d-44ee-aded-923a9f3e7aa2 | 2013-03-11 16:30:08 | 2013-03-11 16:30:09 | NULL | 0 |

I can't see any errors in the logs of the glance services (debug mode on) or Keystone logs.
In addition I don't see a request in my swift log.
I've been running all services on Folsom without problems, so my Keystone endpoints should be ok:
| de64976ee0974ddca7f2c6cfb3fe0fae | nova | https://swift.xxx.de/v1/AUTH_%(tenant_id)s | https://10.0.0.103/v1/AUTH_%(tenant_id)s | https://10.0.0.103/v1 | a7a2021c32354e6caff8bef14e1c5eb3 |

I upgraded my whole stack to grizzly last week and everything worked; yesterday I upgraded
glance and swift and now I can't start any instance :) because no images were found.
I tried to upload a new image and download it after the process finished and it works normally.
Does anyone have the same trouble? If you need more information please ask :)
Greetings and thanks
Heiko
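For anyone hitting the same thing, a rough way to compare what Glance has recorded against what Swift actually stores — assuming the default single-container glance store and that Swift credentials are exported in the environment (container name taken from the URI above):

# what Glance has recorded for the image
glance image-show a9d4488d-305d-44ee-aded-923a9f3e7aa2

# what Swift actually holds in the glance container
swift list glance | grep a9d4488d
swift stat glance a9d4488d-305d-44ee-aded-923a9f3e7aa2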


[Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Heiko Krämer
Hi Guys,

I'm running into a strange config issue with the cinder-volume service.
I try to use the multi-backend feature in grizzly and the scheduler
works fine, but the volume service is not running correctly.
I can create/delete volumes but not attach them.

My cinder.conf (abstract):

# Backend Configuration
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_host_manager=cinder.scheduler.host_manager.HostManager

enabled_backends=storage1,storage2

[storage1]
volume_group=nova-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_ISCSI
iscsi_helper=tgtadm

[storage2]
volume_group=nova-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_ISCSI
iscsi_helper=tgtadm
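(Not related to the error below, but for anyone copying this multi-backend layout: the filter scheduler normally also needs volume types mapped to the backend names. A sketch with the standard grizzly cinderclient, type name is arbitrary:)

cinder type-create lvm_iscsi
cinder type-key lvm_iscsi set volume_backend_name=LVM_ISCSI
cinder create --volume-type lvm_iscsi --display-name testvol 1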



This section is the same on each host. If I try to attach an existing
volume to an instance I get the following error on cinder-volume:

2013-04-16 17:18:13    AUDIT [cinder.service] Starting cinder-volume node (version 2013.1)
2013-04-16 17:18:13     INFO [cinder.volume.manager] Updating volume status
2013-04-16 17:18:13     INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d
2013-04-16 17:18:13     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:13     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:14     INFO [cinder.volume.manager] Updating volume status
2013-04-16 17:18:14     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:14     INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
2013-04-16 17:18:26    ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
    rval = self.proxy.dispatch(ctxt, version, method, **args)
  File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 665, in initialize_connection
    return self.driver.initialize_connection(volume_ref, connector)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 336, in initialize_connection
    if self.configuration.iscsi_helper == 'lioadm':
  File "/usr/lib/python2.7/dist-packages/cinder/volume/configuration.py", line 83, in __getattr__
    return getattr(self.local_conf, value)
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1708, in __getattr__
    return self._conf._get(name, self._group)
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1513, in _get
    value = self._substitute(self._do_get(name, group))
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1529, in _do_get
    info = self._get_opt_info(name, group)
  File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1629, in _get_opt_info
    raise NoSuchOptError(opt_name, group)
NoSuchOptError: no such option in group storage1: iscsi_helper


It's very strange; the
'volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver' option
should set iscsi_helper=tgtadm by default.


Does anyone have an idea or the same issue? Otherwise I'll create a bug report.

Greetings from Berlin
Heiko


[Openstack] Cloud Foundry experience, please

2013-04-17 Thread Ilkka Tengvall

Hi fellow stackers,

I would like to hear your opinion and experience of using Cloud Foundry 
in front of OpenStack. I'm evaluating open source hybrid cloud 
management options, and CF raises mixed feelings while I read about it. 
Any opinion about the following subjects?


1. The CF web page says to use Essex. Is it Grizzly compatible, or is the 
info accurate?


2. Does the open development model work in the project? Is VMware open 
enough to take in patches?


3. Is there an active community of OpenStack developers/users taking part 
in CF to keep up with OpenStack development?


4. Any other opinions and experience you think I (or anyone) should be 
aware of?


Thanks for any hands on experience,

Ilkka Tengvall // Cybercom



Re: [Openstack] Cloud Foundry experience, please

2013-04-17 Thread Ray Sun
Tengvall,
I have some experience using CF on Folsom OpenStack.
1. Unfortunately, it's not too easy to do. The documentation is not
accurate enough.
2. I didn't see any workflow documentation about this.
3. No.
4. Service HA is an important part that Cloud Foundry doesn't supply yet.


- Ray
Best Regards

CIeNET Technologies (Beijing) Co., Ltd
Technical Manager
Email: qsun01...@cienet.com.cn
Office Phone: +86-01081470088-7079
Mobile Phone: +86-13581988291


On Wed, Apr 17, 2013 at 4:42 PM, Ilkka Tengvall ilkka.tengv...@cybercom.com
 wrote:

 Hi fellow stackers,

 I would like to hear your opinion and experience of using Cloud Foundry in
 front of OpenStack. I'm evaluating open source hybrid cloud management
 options, an CF raises mixed feelings while I read about it. Any opinion
 about the following subjects?

 1. CF web page tells to use Essex. Is it Grizzly compatible, or is the
 info accurate?

 2. Does the open development model work in the project, is vmware open
 enough to take in patches?

 3. Is there active community of openstack developers/users taking part to
 CF to keep up with OS development?

 4. Any other opinions and experience you think I (anyone) should be aware
 of it?

 Thanks for any hands on experience,

 Ilkka Tengvall // Cybercom




Re: [Openstack] Cloud Foundry experience, please

2013-04-17 Thread Razique Mahroua
Interesting to have feedback on that as well. Ilkka, are you interested in CF per se, or in what it proposes?
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 17 Apr 2013, at 10:42, Ilkka Tengvall ilkka.tengv...@cybercom.com wrote:

Hi fellow stackers,
I would like to hear your opinion and experience of using Cloud Foundry in front of OpenStack. I'm evaluating open source hybrid cloud management options, and CF raises mixed feelings while I read about it. Any opinion about the following subjects?
1. CF web page tells to use Essex. Is it Grizzly compatible, or is the info accurate?
2. Does the open development model work in the project, is vmware open enough to take in patches?
3. Is there active community of openstack developers/users taking part to CF to keep up with OS development?
4. Any other opinions and experience you think I (anyone) should be aware of it?
Thanks for any hands on experience,
Ilkka Tengvall // Cybercom


[Openstack] Keystone Identity based notifications

2013-04-17 Thread boden

All,
From an upper level management stack perspective, has anyone else seen 
the need for AMQP based notifications from Keystone identity and/or 
heard of any activity in this space?


For example, similar to nova's notification system 
(https://wiki.openstack.org/wiki/SystemUsageData based on 
https://wiki.openstack.org/wiki/NotificationSystem), but for events such 
as user/role/project/domain CRUD?


I've come across this requirement a few times now, but have not turned 
up any hits via google-foo.


Thanks




Re: [Openstack] Cloud Foundry experience, please

2013-04-17 Thread Razique Mahroua
Haha, that is what I had in mind actually... being myself more and more interested in it. OpenShift Origin seems to lack any implementation model / documentation around it, unfortunately.
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 17 Apr 2013, at 11:10, Heling Yao yaohel...@gmail.com wrote:
Maybe OpenShift from Red Hat?

On 17 Apr 2013, at 17:06, "Razique Mahroua" razique.mahr...@gmail.com wrote:
Interesting to have feedback on that as well. Ilkka, are you interested by CF per se, or what it proposes?
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15


On 17 Apr 2013, at 10:42, Ilkka Tengvall ilkka.tengv...@cybercom.com wrote:

Hi fellow stackers,
I would like to hear your opinion and experience of using Cloud Foundry in front of OpenStack. I'm evaluating open source hybrid cloud management options, and CF raises mixed feelings while I read about it. Any opinion about the following subjects?
1. CF web page tells to use Essex. Is it Grizzly compatible, or is the info accurate?
2. Does the open development model work in the project, is vmware open enough to take in patches?
3. Is there active community of openstack developers/users taking part to CF to keep up with OS development?
4. Any other opinions and experience you think I (anyone) should be aware of it?
Thanks for any hands on experience,
Ilkka Tengvall // Cybercom


[Openstack] keystone-manage db_sync error

2013-04-17 Thread Arindam Choudhury
Hi,
any help will be highly appreciated.

I am following 
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/install-keystone.html.
 

mysql> select User,Host from mysql.user;
+--+---+
| User | Host  |
+--+---+
| keystone | % |
| root | 127.0.0.1 |
| root | ::1   |
|  | localhost |
| debian-sys-maint | localhost |
| keystone | localhost |
| root | localhost |
|  | ubu-a.arindam.com |
| root | ubu-a.arindam.com |
+--+---+
9 rows in set (0.00 sec)

root@ubu-a:~# cat /etc/keystone/keystone.conf | grep connection
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:keystone@192.168.122.10/keystone

root@ubu-a:~# cat /etc/mysql/my.cnf | grep 0.0.0.0
bind-address= 0.0.0.0

I am getting this error:

root@ubu-a:~# keystone-manage db_sync
Traceback (most recent call last):
  File "/usr/bin/keystone-manage", line 28, in <module>
    cli.main(argv=sys.argv, config_files=config_files)
  File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 164, in main
    return run(cmd, (args[:1] + args[2:]))
  File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 147, in run
    return CMDS[cmd](argv=args).run()
  File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 35, in run
    return self.main()
  File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 56, in main
    driver.db_sync()
  File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", line 136, in db_sync
    migration.db_sync()
  File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", line 49, in db_sync
    current_version = db_version()
  File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", line 63, in db_version
    return db_version_control(0)
  File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration.py", line 68, in db_version_control
    versioning_api.version_control(CONF.sql.connection, repo_path, version)
  File "<string>", line 2, in version_control
  File "/usr/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 159, in with_engine
    return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 250, in version_control
    ControlledSchema.create(engine, repository, version)
  File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 139, in create
    table = cls._create_table_version(engine, repository, version)
  File "/usr/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 180, in _create_table_version
    if not table.exists():
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 579, in exists
    self.name, schema=self.schema)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2424, in run_callable
    conn = self.contextual_connect()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2490, in contextual_connect
    self.pool.connect(),
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 224, in connect
    return _ConnectionFairy(self).checkout()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 387, in __init__
    rec = self._connection_record = pool._do_get()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 741, in _do_get
    con = self._create_connection()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 188, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 270, in __init__
    self.connection = self.__connect()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 330, in __connect
    connection = self.__pool._creator()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 281, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 187, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
sqlalchemy.exc.OperationalError: (OperationalError) (1045, "Access denied for
user 'keystone'@'ubu-a.arindam.com' (using password: YES)") None None





Re: [Openstack] keystone-manage db_sync error

2013-04-17 Thread Robert van Leeuwen
 connection = mysql://keystone:keystone@192.168.122.10/keystone

 mysql select User,Host from mysql.user;
 | keystone | localhost |

 sqlalchemy.exc.OperationalError: (OperationalError) 
 (1045, Access denied for user 'keystone'@'ubu-a.arindam.com' (using 
 password: YES)) None None

Looks like a pretty clear error message to me.

Note that if you specify 192.168.122.10 as the database host, you need to grant
access to that user from the IP it will connect from. If it is on the same
machine, it will probably connect from 192.168.122.10.
If you set the MySQL user permissions for localhost, you will also need to connect
to localhost in keystone.conf.
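In practice that comes down to something like the following, using the keystone/keystone credentials from your config:

mysql -u root -p
# then, at the mysql> prompt:
GRANT ALL ON keystone.* TO 'keystone'@'192.168.122.10' IDENTIFIED BY 'keystone';
FLUSH PRIVILEGES;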

Cheers,
Robert van Leeuwen














Re: [Openstack] keystone-manage db_sync error

2013-04-17 Thread Arindam Choudhury
Thanks,
GRANT ALL ON keystone.* TO 'keystone'@'192.168.122.10' IDENTIFIED BY 'keystone';
fixed the issue.
But then:
GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
What does that do?


 From: robert.vanleeu...@spilgames.com
 To: openstack@lists.launchpad.net
 Date: Wed, 17 Apr 2013 10:48:01 +
 Subject: Re: [Openstack] keystone-manage db_sync error
 
  connection = mysql://keystone:keystone@192.168.122.10/keystone
 
  mysql select User,Host from mysql.user;
  | keystone | localhost |
 
  sqlalchemy.exc.OperationalError: (OperationalError) 
  (1045, Access denied for user 'keystone'@'ubu-a.arindam.com' (using 
  password: YES)) None None
 
 Looks like a pretty clear error message to me.
 
 Note that if you specify 192.168.122.10 as a database host you need to give 
 grants to that user from the IP it will connect from. If it is on the same 
 machine it will probably connect from 192.168.122.10
 If you set mysql user permissions from localhost you will also need to 
 connect to localhost in the keystone.conf.
 
 Cheers,
 Robert van Leeuwen
 
 
 
 
 
 
 
 
 
 
 
 


[Openstack] subnet gateway's arp ack not sent back

2013-04-17 Thread Liu Wenmao
Hi all:

I set up OpenStack with Quantum successfully, but I use Floodlight as the
network controller, and VMs cannot ping their gateway.

I use one host as the compute/network controller (30.0.0.1), and another host as a
compute node (30.0.0.11). The VM X address is 100.0.0.7 and the subnet
gateway G is 100.0.0.1. I use namespaces to isolate networks (the Floodlight
restproxy seems not to support namespaces, but I use Floodlight standalone).

When X is pinging G, I can see the gateway respond with an ARP reply:

root@controller:/usr/src/floodlight# ip netns exec
qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c tcpdump -nn -i qr-8af2e01f-bb
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qr-8af2e01f-bb, link-type EN10MB (Ethernet), capture size
65535 bytes
18:52:32.769334 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:52:32.769371 ARP, Reply 100.0.0.1 is-at fa:16:3e:f7:3d:5e, length 28
18:52:33.769049 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:52:33.769082 ARP, Reply 100.0.0.1 is-at fa:16:3e:f7:3d:5e, length 28
18:52:34.769117 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:52:34.769149 ARP, Reply 100.0.0.1 is-at fa:16:3e:f7:3d:5e, length 28

But when I listen to the bridge br-int or physical interface eth2, no ARP
reply is heard:

root@controller:/usr/src/floodlight# tcpdump -i br-int -nn
tcpdump: WARNING: br-int: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes
18:50:31.405691 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request
from fa:16:3e:1c:65:d0, length 286
18:50:31.749137 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:50:32.749232 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:50:33.749575 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28



root@controller:/usr/src/floodlight# tcpdump -i eth2 proto gre -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
18:54:28.784500 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x0, length 50: ARP,
Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:54:29.784430 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x0, length 50: ARP,
Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:54:30.784317 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x0, length 50: ARP,
Request who-has 100.0.0.1 tell 100.0.0.7, length 28

After I delete the controller from Open vSwitch and restart the switches,
VMs can ping their gateway. I do not know what causes the problem.
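A hedged suggestion for anyone debugging the same thing: dumping the flow tables with and without the controller attached should show whether controller-installed flows are the ones swallowing the ARP replies, e.g.:

ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-tun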

Can anyone point me to some resources on how the namespaces and bridges work
together?


Re: [Openstack] keystone-manage db_sync error

2013-04-17 Thread Robert van Leeuwen
 GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
 what it does?

'%' is a wildcard just like *
For more info:
http://dev.mysql.com/doc/refman/5.1/en/grant.html


Cheers,
Robert van Leeuwen


[Openstack] Absolute limits is quotas?

2013-04-17 Thread Vasiliy Khomenko
Hi all.

The official documentation says: "The name of the absolute limit uniquely
identifies the limit within a deployment.", but my experiments show that
limits apply only within a tenant, as quotas do.

What I do:
I start an instance in the demo tenant and see:
$ nova absolute-limits
+-----------------+-------+
| Name            | Value |
+-----------------+-------+
...
| maxTotalCores   | 20    |
...
| totalCoresUsed  | 2     |

I supposed that in the alt_demo tenant I would see the value decreased by 2, but there is
no change.

Can anybody explain what absolute-limits is and how it differs from quotas?
Thank you.


Re: [Openstack] OpenStack Multi Node Problems with OVS

2013-04-17 Thread Atif Wasi
It is Private Networks (VLAN).  Networking works between all the devices.  I am 
pretty sure this is a bug.  I am also seeing the following message on my 
compute server (see below).
The error message indicates that it is not getting the proper values when 
running the command.  I confirmed as the root, quantum, and nova users that I can easily 
run the command and get
the output.

Atif...

==

 /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

[OVS]
tenant_network_type=vlan
network_vlan_ranges = physnet1:500:1000
bridge_mappings = physnet1:br-eth1
===
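With bridge mappings like this, a quick sanity check on each node is to confirm that the mapped bridge exists and actually carries the physical NIC (interface name below is an assumption, adjust to your host):

ovs-vsctl br-exists br-eth1 && echo br-eth1 present
ovs-vsctl list-ports br-eth1   # should include eth1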

openvswitch-agent.log


2013-04-17 07:29:37    DEBUG [quantum.agent.linux.utils] Running command: sudo 
/usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl --timeout=2 
list-ports br-int

2013-04-17 07:29:38    DEBUG [quantum.agent.linux.utils] 
Command: ['sudo', '/usr/bin/quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int']
Exit code: 0




 From: Robert van Leeuwen robert.vanleeu...@spilgames.com
To: openstack@lists.launchpad.net openstack@lists.launchpad.net 
Sent: Wednesday, April 17, 2013 3:45 AM
Subject: Re: [Openstack] OpenStack Multi Node Problems with OVS
 


 
 Hi Folks, 

 I am working on bringing up a multi node OpenStack environment with OVS.  
 I have a Controller, Compute and a Gateway/Network node.  
 This is running Folsom.  Most of the services are up, except that I cannot 
 ping the floating ip of the VM.  

What kind of setup are you creating? Bridge_mapped networks? Private networks 
(with GRE tunnels)?

If you are setting this up with GRE I'm  missing the GRE tunnels in the 
ovs-vsctl show overview.
You should have GRE tunnels to all Compute nodes in the network.

To start troubleshooting I would first check if networking works between 
compute nodes:
Start two VM's, on different Compute nodes, in the same subnet and see if they 
can reach each other.
(If they do not get a DHCP IP just put in a static IP for now)

There is bug with rebooting the machines running dhcp / l3-agent which might be 
good to be aware of:
https://bugs.launchpad.net/quantum/+bug/1091605

Cheers,
Robert van Leeuwen



[Openstack] which network controller is the best for quantum grizzly?

2013-04-17 Thread Liu Wenmao
I have tried Floodlight, but it does not support namespaces, so I wonder: is
there a better network controller to use with Quantum? (NOX, Ryu, ...)

Wenmao Liu


Re: [Openstack] which network controller is the best for quantum grizzly?

2013-04-17 Thread Heiko Krämer
Hi Wenmao,

I think you should plan your network topology first, and after that you
can decide which controller is the best choice for you.

Greetings
Heiko

On 17.04.2013 14:01, Liu Wenmao wrote:
 I have tried floodlight, but it does not support namespace, so I
 wonder is there a better network controller to support quantum?(nox,
 ryu ..)

 Wenmao Liu





[Openstack] grizzly swift-proxy, swift-recon -d: HTTP Error 400: Bad Request

2013-04-17 Thread Axel Christiansen
Dear List,


after upgrading to Grizzly, swift-recon returns this. I could not find out
much from the logs.
Can anyone give me a hint?

root@ns-proxy01:~# swift-recon -d -v
===
-- Starting reconnaissance on 6 hosts
===
[2013-04-17 13:07:51] Checking disk usage now
- http://10.42.45.13:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.12:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.15:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.11:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.14:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.16:6000/recon/diskusage: HTTP Error 400: Bad Request



On one of the storage-nodes this does show up in the log:

Apr 17 13:07:51 sn02 object-server 10.42.45.1 - - [17/Apr/2013:11:07:51
+] GET /recon/diskusage 400 30 - - Python-urllib/2.7 0.0001
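One thing worth double-checking after the upgrade: the /recon/ endpoints are served by the recon middleware in the object-server pipeline, and a 400 there can simply mean the filter isn't configured on the storage nodes. Roughly along these lines in object-server.conf (values assumed, check the Swift deployment guide):

[pipeline:main]
pipeline = recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift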



Cheers
Axel



Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Steve Heistand
What OS are you running in the VM? I had similar issues with Ubuntu 12.04
but things worked great with CentOS 6.4.


On 04/17/2013 01:15 AM, Heiko Krämer wrote:
 Hi Guys,
 
 I'm running into a strange config issue with the cinder-volume service.
 I try to use the multi-backend feature in grizzly and the scheduler works fine,
 but the volume service is not running correctly.
 I can create/delete volumes but not attach them.
 [snip: cinder.conf extract and traceback]
 NoSuchOptError: no such option in group storage1: iscsi_helper
 
 Anyone have an idea or the same issue, otherwise I'll create a bug report.
 
 Greetings from Berlin
 Heiko
 

-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 Any opinions expressed are those of our alien overlords, not my own.



Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Heiko Krämer
Hi Steve,

Yeah, it's running Ubuntu 12.04 on the nodes and in the VM.

But a configuration parsing error should normally have nothing to do with the
distribution?! Maybe the oslo version or something like that.

But thanks for your hint.

Greetings
Heiko

On 17.04.2013 14:36, Steve Heistand wrote:
 what OS Are you running in the VM? I had similar issues with ubuntu 12.04
 but things worked great with centos 6.4


 On 04/17/2013 01:15 AM, Heiko Krämer wrote:
 Hi Guys,
 
 I'm running into a strange config issue with the cinder-volume service.
 I try to use the multi-backend feature in grizzly and the scheduler works fine,
 but the volume service is not running correctly.
 I can create/delete volumes but not attach them.
 [snip: cinder.conf extract and traceback]
 NoSuchOptError: no such option in group storage1: iscsi_helper
 
 Anyone have an idea or the same issue, otherwise I'll create a bug report.
 
 Greetings from Berlin
 Heiko





Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Steve Heistand

In my case (as near as I can tell) it's something to do with the inability
of Ubuntu 12.04 (as a VM) to do hot-plug PCI stuff.
The node itself is on 12.04; it's just the VM part that doesn't work as Ubuntu.
I haven't tried 12.10 or Raring as a VM.

steve

On 04/17/2013 05:42 AM, Heiko Krämer wrote:
 Hi Steve,
 
 yeah it's running ubuntu 12.04 on the nodes and on the vm.
 
 But configuration parsing error should have normally nothing todo with a 
 distribution
 ?! Maybe the oslo version or something like that.
 
 But thanks for your hint.
 
 Greetings Heiko
 
 On 17.04.2013 14:36, Steve Heistand wrote:
 what OS Are you running in the VM? I had similar issues with ubuntu 12.04 but
 things worked great with centos 6.4
 
 
 On 04/17/2013 01:15 AM, Heiko Krämer wrote:
  Hi Guys,
 
  I'm running into a strange config issue with the cinder-volume service.
  [snip: cinder.conf extract and traceback]
  NoSuchOptError: no such option in group storage1: iscsi_helper
 
  Anyone have an idea or the same issue, otherwise I'll create a bug report.
 
  Greetings from Berlin Heiko
 
 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific  HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Jérôme Gallard
Hi,

Yes, it's very surprising. I managed to reproduce your error by doing the
operations manually (compute and guest are Ubuntu 12.04, devstack
deployment).

Another interesting thing is that, in my case, with multi-backend enabled,
tempest tells me everything is right:

/opt/stack/tempest# nosetests -sv
tempest.tests.volume.test_volumes_actions.py
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance[smoke]
... ok
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_get_volume_attachment
... ok

--
Ran 2 tests in 122.465s

OK


I don't think that error is linked to the distribution. With my
configuration, if I remove the multi-backend option, attachment is possible.

Regards,
Jérôme


On Wed, Apr 17, 2013 at 3:22 PM, Steve Heistand steve.heist...@nasa.gov wrote:


 in my case (as near as I can tell) its something to do with the inability
 for ubuntu 12.04 (as a vm) to do hot plug pci stuff.
 the node itself in as 12.04 just the vm part that doesnt work as ubuntu.
 havent tried 12.10 or rarring as a vm.

 steve

 On 04/17/2013 05:42 AM, Heiko Krämer wrote:
  Hi Steve,
 
  yeah it's running ubuntu 12.04 on the nodes and on the vm.
 
  But configuration parsing error should have normally nothing todo with a
 distribution
  ?! Maybe the oslo version or something like that.
 
  But thanks for your hint.
 
  Greetings Heiko
 
  On 17.04.2013 14:36, Steve Heistand wrote:
  what OS Are you running in the VM? I had similar issues with ubuntu
 12.04 but
  things worked great with centos 6.4
 
 
  On 04/17/2013 01:15 AM, Heiko Krämer wrote:
   Hi Guys,
 
   I'm running into a strange config issue with the cinder-volume service.
   [snip: cinder.conf extract and traceback]

Re: [Openstack] Keystone Identity based notifications

2013-04-17 Thread Dolph Mathews
Yes, we've had a few small conversations about it at the summit (don't have
an actual session scheduled on the issue, though, nor any registered
blueprints). It would be my preferred approach to resolve bugs like this
one, which is one of our longest standing and highest priority issues.

  https://bugs.launchpad.net/keystone/+bug/967832

I'd be eager to hear broader feedback on the issue.


-Dolph


On Wed, Apr 17, 2013 at 4:16 AM, boden bo...@linux.vnet.ibm.com wrote:

 All,
 From an upper level management stack perspective, has anyone else seen the
 need for AMQP based notifications from Keystone identity and/or heard of
 any activity in this space?

 For example, similar to nova's notification system (
 https://wiki.openstack.org/wiki/SystemUsageData based on
 https://wiki.openstack.org/wiki/NotificationSystem),
 but for events such as user/role/project/domain CRUD?

 I've come across this requirement a few times now, but have not turned up
 any hits via google-foo.

 Thanks




Re: [Openstack] Object Replication fails

2013-04-17 Thread Clay Gerrard
Did you get this worked out?

Starting the replicators on the new nodes should create the objects
directory.
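A minimal way to kick them off on the new nodes, assuming a packaged install managed by swift-init:

swift-init object-replicator start
swift-init object-auditor start
swift-init object-updater start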


On Sat, Apr 13, 2013 at 2:26 AM, Philip flip...@gmail.com wrote:

 Hi,

 I just tried to add two new servers into the ring. Only the containers
 were replicated to the new servers; there are no objects being
 replicated. The disks don't even have an objects folder yet. On the old
 servers there are plenty of log entries that indicate that something is
 going wrong:

 Apr 13 11:14:45 z1-n1 object-replicator Bad rsync return code: ['rsync',
 '--recursive', '--whole-file', '--human-readable', '--xattrs',
 '--itemize-changes', '--ignore-existing', '--timeout=30',
 '--contimeout=30', '/srv/node/sdq1/objects/80058/b70',
 '/srv/node/sdq1/objects/80058/ff9', '/srv/node/sdq1/objects/80058/5d3',
 '/srv/node/sdq1/objects/80058/389', '/srv/node/sdq1/objects/80058/473',
 '/srv/node/sdq1/objects/80058/81a', '/srv/node/sdq1/objects/80058/a67',
 '/srv/node/sdq1/objects/80058/b72', '/srv/node/sdq1/objects/80058/8f5',
 '/srv/node/sdq1/objects/80058/ed3', '/srv/node/sdq1/objects/80058/8db',
 '/srv/node/sdq1/objects/80058/4e5', '/srv/node/sdq1/objects/80058/fbf',
 '/srv/node/sdq1/objects/80058/5cc', '/srv/node/sdq1/objects/80058/318',
 '172.16.100.4::object/sdg1/objects/80058'] - 12

 Apr 13 11:14:46 z1-n1 object-replicator rsync: mkdir /sdl1/objects/75331
 (in object) failed: No such file or directory (2)

 Apr 13 11:14:46 z1-n1 object-replicator rsync error: error in file IO
 (code 11) at main.c(605) [Receiver=3.0.9]
 Apr 13 11:14:46 z1-n1 object-replicator rsync: read error: Connection
 reset by peer (104)

 What could be the reason for this?



Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Heiko Krämer
Thx for your replies!!

I've created a bug report: https://bugs.launchpad.net/cinder/+bug/1169928

I think something is wrong with the config parser.
If I find a quick fix I'll let you know.

Greetings
Heiko

On 17.04.2013 15:50, Jérôme Gallard wrote:
 Hi,

 Yes, it's very surprising. I managed to obtain your error by doing the
operations manually (compute and guest are ubuntu 12.04 and devstack
deployment).

 Another interesting thing is that, in my case, with multi-backend
enabled, tempest tells me everything is right:

 /opt/stack/tempest# nosetests -sv tempest.tests.volume.test_volumes_actions.py
 nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']

tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance[smoke]
... ok

tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_get_volume_attachment
... ok

 --
 Ran 2 tests in 122.465s

 OK


 I don't think that error is linked to the distribution. With my
configuration, if I remove the multi-backend option, attachment is possible.

 Regards,
 Jérôme


 On Wed, Apr 17, 2013 at 3:22 PM, Steve Heistand
steve.heist...@nasa.gov mailto:steve.heist...@nasa.gov wrote:

 in my case (as near as I can tell) it's something to do with the inability
 for ubuntu 12.04 (as a vm) to do hot plug pci stuff.
 the node itself is 12.04, it's just the vm part that doesn't work as ubuntu.
 haven't tried 12.10 or raring as a vm.

 steve

 On 04/17/2013 05:42 AM, Heiko Krämer wrote:
  Hi Steve,

  yeah it's running ubuntu 12.04 on the nodes and on the vm.

  But a configuration parsing error should normally have nothing to do
 with the distribution?!
  Maybe the oslo version or something like that.

  But thanks for your hint.

  Greetings Heiko

  On 17.04.2013 14:36, Steve Heistand wrote:
  What OS are you running in the VM? I had similar issues with ubuntu
 12.04 but
  things worked great with centos 6.4
 
 
  On 04/17/2013 01:15 AM, Heiko Krämer wrote:
  Hi Guys,
 
   I'm running into a strange config issue with the cinder-volume service.
   I'm trying to use the multi-backend feature in grizzly and the scheduler
   works fine, but the volume service is not running correctly. I can
   create/delete volumes but not attach them.
 
   My cinder.conf (abstract):

   # Backend Configuration
   scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
   scheduler_host_manager=cinder.scheduler.host_manager.HostManager
   enabled_backends=storage1,storage2

   [storage1]
   volume_group=nova-volumes
   volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   volume_backend_name=LVM_ISCSI
   iscsi_helper=tgtadm

   [storage2]
   volume_group=nova-volumes
   volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   volume_backend_name=LVM_ISCSI
   iscsi_helper=tgtadm
 
 
 
   This section is the same on each host. If I try to attach an existing
   volume to an instance I'll get the following error on cinder-volume:

   2013-04-16 17:18:13 AUDIT [cinder.service] Starting cinder-volume node (version 2013.1)
   2013-04-16 17:18:13 INFO [cinder.volume.manager] Updating volume status
   2013-04-16 17:18:13 INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d
   2013-04-16 17:18:13 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
   2013-04-16 17:18:13 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
   2013-04-16 17:18:14 INFO [cinder.volume.manager] Updating volume status
   2013-04-16 17:18:14 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
   2013-04-16 17:18:14 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
   2013-04-16 17:18:26 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
   Traceback (most recent call last):
     File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
       rval = self.proxy.dispatch(ctxt, version, method, **args)
     File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
       return getattr(proxyobj, method)(ctxt, **kwargs)
     File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 665, in initialize_connection
       return self.driver.initialize_connection(volume_ref, connector)
     File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 336, in initialize_connection
       if self.configuration.iscsi_helper == 'lioadm':
     File
 

Re: [Openstack] grizzly swift-proxy, swift-recon -d: HTTP Error 400: Bad Request

2013-04-17 Thread Corrigan, Coleman
Hello Christiansen, have you made sure to set up recon in the pipeline of your 
object node ?

e.g  on 10.42.45.13 does /etc/swift/object-server.conf  have

   pipeline = recon object-server 

and a [filter:recon] stanza ?
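
For reference, the relevant bits of object-server.conf would look roughly like
this (the cache path shown is the usual default, adjust to your layout):

  [pipeline:main]
  pipeline = recon object-server

  [filter:recon]
  use = egg:swift#recon
  recon_cache_path = /var/cache/swift

and then restart the object servers (e.g. swift-init object restart) so the
new pipeline is picked up.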

Regards,
Coleman

-Original Message-
From: Openstack 
[mailto:openstack-bounces+coleman.corrigan=hp@lists.launchpad.net] On 
Behalf Of Axel Christiansen
Sent: 17 April 2013 13:37
To: openstack@lists.launchpad.net
Subject: [Openstack] grizzly swift-proxy, swift-recon -d: HTTP Error 400: Bad 
Request

Dear List,


after upgrading to grizzly, swift-recon returns this. I could not find
much in the logs.
Can anyone give me a hint?

root@ns-proxy01:~# swift-recon -d -v
===
-- Starting reconnaissance on 6 hosts
===
[2013-04-17 13:07:51] Checking disk usage now
- http://10.42.45.13:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.12:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.15:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.11:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.14:6000/recon/diskusage: HTTP Error 400: Bad Request
- http://10.42.45.16:6000/recon/diskusage: HTTP Error 400: Bad Request



On one of the storage-nodes this does show up in the log:

Apr 17 13:07:51 sn02 object-server 10.42.45.1 - - [17/Apr/2013:11:07:51
+] GET /recon/diskusage 400 30 - - Python-urllib/2.7 0.0001



Cheers
Axel

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone Identity based notifications

2013-04-17 Thread heckj
There was a fellow, Panok I think, who asked me about it in the dev lounge; it 
sounded like he was perhaps interested in driving that forward. I also sat in 
on Mark McLoughlin's talk about normalizing the RPC API code that's used 
across several of the projects, and caught a bit from others that logging and 
notifications are looking to get a bit more unified in Nova, so I expect this 
next release will be a good one to tackle something like dropping in basic AMQP 
notification support from Keystone. Like Dolph said though, no blueprints up as 
yet…

-joe

On Apr 17, 2013, at 7:11 AM, Dolph Mathews dolph.math...@gmail.com wrote:
 Yes, we've had a few small conversations about it at the summit (don't have 
 an actual session scheduled on the issue, though, nor any registered 
 blueprints). It would be my preferred approach to resolve bugs like this one, 
 which is one of our longest standing and highest priority issues.
 
   https://bugs.launchpad.net/keystone/+bug/967832
 
 I'd be eager to hear broader feedback on the issue.
 
 
 -Dolph
 
 
 On Wed, Apr 17, 2013 at 4:16 AM, boden bo...@linux.vnet.ibm.com wrote:
 All,
 From an upper level management stack perspective, has anyone else seen the 
 need for AMQP based notifications from Keystone identity and/or heard of any 
 activity in this space?
 
 For example, similar to nova's notification system 
 (https://wiki.openstack.org/wiki/SystemUsageData based on 
 https://wiki.openstack.org/wiki/NotificationSystem), but for events such as 
 user/role/project/domain CRUD?
 
 I've come across this requirement a few times now, but have not turned up any 
 hits via google-foo.
 
 Thanks
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cloud Foundry experience, please

2013-04-17 Thread Ilkka Tengvall

On 17.04.2013 12:06, Razique Mahroua wrote:

Ilkka, are you interested by CF per se, or what it proposes?


It seems to do the things I want to do, so I'm trying to weigh whether it's 
the right tool for the job: managing the PaaS things over different clouds.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] error installing nova in ubuntu

2013-04-17 Thread Arindam Choudhury
Hi,
I am following this guide for installation: 
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/compute-create-network.html

I am stuck now. Any help will be highly appreciated.

[(keystone_admin)]# nova-manage db sync
2013-04-17 16:28:25 22384 DEBUG nova.utils [-] backend module 
'nova.db.sqlalchemy.migration' from 
'/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.pyc' 
__get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:506

[(keystone_admin)]# keystone service-list
+--+--+--+---+
|id|   name   |   type   |description   
 |
+--+--+--+---+
| 31102bd9753e43078e9383e16c21af6f |   nova   | compute  |  Compute Service 
 |
| 3e7237e98d72467d9e468883ad116add |  glance  |  image   |Glance Image 
Service   |
| 42560f9b834e41eb9b379ea030aa22f2 | keystone | identity | Keystone identity 
Service |
+--+--+--+---+

[(keystone_admin)]# keystone endpoint-list
+--+---+-+-+--+
|id|   region  |  publicurl 
 | internalurl |   
adminurl   |
+--+---+-+-+--+
| 0f5f30035f09434daf52a19d71b869f3 | RegionOne | 
http://192.168.122.10:8774/v2/%(tenant_id)s | 
http://192.168.122.10:8774/v2/%(tenant_id)s | 
http://192.168.206.130:8774/v2/%(tenant_id)s |
| 3fcbc339d07a41fe8d5c58d606c17e86 | RegionOne |   
http://192.168.122.10:5000/v2.0   |   http://192.168.122.10:5000/v2.0   
|   http://192.168.122.10:35357/v2.0   |
| d12f04ccadee47e080e20dfb1dc6c987 | RegionOne |  
http://192.168.122.10:9292 |  http://192.168.122.10:9292
 |  http://192.168.122.10:9292  |
+--+---+-+-+--+

[(keystone_admin)]# nova-manage service list
2013-04-17 16:22:29 DEBUG nova.utils [req-696df22c-c92e-4ab7-a212-53e9d4c84d12 
None None] backend module 'nova.db.sqlalchemy.api' from 
'/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc' __get_backend 
/usr/lib/python2.7/dist-packages/nova/utils.py:506
Binary   Host Zone Status   
  State Updated_At
nova-scheduler   ubu-a.arindam.comnova enabled  
  XXX   None  

[(keystone_admin)]# nova network-create private 
--fixed-range-v4=192.168.100.0/24 --bridge-interface=br100
usage: nova [--version] [--debug] [--no-cache] [--timings]
[--os-username auth-user-name] [--os-password auth-password]
[--os-tenant-name auth-tenant-name] [--os-auth-url auth-url]
[--os-region-name region-name] [--os-auth-system auth-system]
[--service-type service-type] [--service-name service-name]
[--volume-service-name volume-service-name]
[--endpoint-type endpoint-type]
[--os-compute-api-version compute-api-ver] [--insecure]
[--bypass-url bypass-url]
subcommand ...
error: argument subcommand: invalid choice: 'network-create'
Try 'nova help ' for more information.
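
I guess the 'invalid choice' error means my python-novaclient may simply be too
old to have the network-create subcommand. The nova-manage form of the same
command would be roughly the following (flag names assumed; they differ between
releases, so treat this as a sketch rather than a recipe):

nova-manage network create private --fixed_range_v4=192.168.100.0/24 \
    --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=256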

[(keystone_admin)]# sudo apt-get install nova-novncproxy novnc nova-api 
nova-ajax-console-proxy nova-cert nova-conductor nova-consoleauth nova-doc 
nova-scheduler nova-network nova-conductor
Reading package lists... Done
Building dependency tree   
Reading state information... Done
E: Unable to locate package nova-conductor
E: Unable to locate package nova-conductor
  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Absolute limits is quotas?

2013-04-17 Thread Kevin L. Mitchell
On Wed, 2013-04-17 at 14:19 +0300, Vasiliy Khomenko wrote:

 Official documentation says: "The name of the absolute limit uniquely
 identifies the limit within a deployment.", but my experiments show
 that limits apply only within a tenant, as quotas do.

absolute limits are just another name for quotas.  I'm not certain why
the difference in terminology; it's probably a hold-over from nova's
precursors.
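
You can see the same numbers either way, e.g. (token/endpoint placeholders are
assumptions, substitute your own):

    nova absolute-limits
    curl -s -H "X-Auth-Token: $OS_TOKEN" "$NOVA_ENDPOINT/limits"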

-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] grizzly swift-proxy, swift-recon -d: HTTP Error 400: Bad Request

2013-04-17 Thread Axel Christiansen
Hello Coleman,


Great. That fixed it.

Thank you.

Axel




Am 17.04.13 16:22, schrieb Corrigan, Coleman:
 Hello Christiansen, have you made sure to set up recon in the pipeline of 
 your object node ?
 
 e.g  on 10.42.45.13 does /etc/swift/object-server.conf  have
 
pipeline = recon object-server 
 
 and a [filter:recon] stanza ?
 
 Regards,
 Coleman
 
 -Original Message-
 From: Openstack 
 [mailto:openstack-bounces+coleman.corrigan=hp@lists.launchpad.net] On 
 Behalf Of Axel Christiansen
 Sent: 17 April 2013 13:37
 To: openstack@lists.launchpad.net
 Subject: [Openstack] grizzly swift-proxy, swift-recon -d: HTTP Error 400: Bad 
 Request
 
 Dear List,
 
 
 after upgrading to grizzly, swift-recon returns this. Could not find out
 to much in the logs.
 Can one give me a hint?
 
 root@ns-proxy01:~# swift-recon -d -v
 ===
 -- Starting reconnaissance on 6 hosts
 ===
 [2013-04-17 13:07:51] Checking disk usage now
 - http://10.42.45.13:6000/recon/diskusage: HTTP Error 400: Bad Request
 - http://10.42.45.12:6000/recon/diskusage: HTTP Error 400: Bad Request
 - http://10.42.45.15:6000/recon/diskusage: HTTP Error 400: Bad Request
 - http://10.42.45.11:6000/recon/diskusage: HTTP Error 400: Bad Request
 - http://10.42.45.14:6000/recon/diskusage: HTTP Error 400: Bad Request
 - http://10.42.45.16:6000/recon/diskusage: HTTP Error 400: Bad Request
 
 
 
 On one of the storage-nodes this does show up in the log:
 
 Apr 17 13:07:51 sn02 object-server 10.42.45.1 - - [17/Apr/2013:11:07:51
 +] GET /recon/diskusage 400 30 - - Python-urllib/2.7 0.0001
 
 
 
 Cheers
 Axel
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Cinder + Live migration

2013-04-17 Thread Paras pradhan
Hi,

If we boot an instance from a cinder volume, is it possible to set up live
migration? The hypervisor would be KVM.

Thanks
Paras.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] json - Static Large Objects

2013-04-17 Thread david.loy
This is my first post to this list so this may not be the appropriate 
place to ask this question:


I am trying to  upload a Static Large Object and have not been 
successful. I believe the problem is the json format I'm using.


The document description:
http://docs.openstack.org/developer/swift/misc.html#deleting-a-large-object

shows:

json:
[{path: /cont/object,
  etag: etagoftheobjectsegment,
  size_bytes: 1048576}, ...]

which is not legal json.

If anyone can send me a working json example for SLO I would appreciate. If XML 
is supported,
that would also be useful.

Any help would really be appreciated.

Thanks
David



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum][Grizzly] Second NIC is not getting IP address from the network

2013-04-17 Thread Anil Vishnoi
Hi All,

I created two networks: one private network for a project (say 'TestProject'),
and a second network at the admin level, which is a shared network.

I spawned one VM and connected it to the private network. The VM successfully
boots up and gets an IP address from its private network's DHCP.

I spawned a second VM and connected it to the shared network; it boots up
successfully and gets an IP address from the shared network's DHCP.

I spawned a third VM and connected it to both networks, in the order
1. Private Network 2. Shared Network. The VM boots up successfully but only
gets an IP address from the private network DHCP; the second NIC didn't
receive any IP from the shared network DHCP.

Next I spawned a fourth VM and this time changed the ordering of the NICs:
1. Shared Network 2. Private Network. The VM gets an IP address from the
shared network DHCP but not from the private network DHCP. It looks like the
VM only makes a DHCP request on whichever network is added first. Is this
expected behavior? My understanding is that both NICs should get IP addresses
if DHCP is enabled on the connected networks.

A few points I want to mention:

* I am using the cirros VM image
* The metadata service is running on my network node, but I am still not able
to reach the metadata service.

Please let me know if further details are needed for debugging this issue.
Thanks in advance!!!

-- 
Thanks
Anil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum][Grizzly] Second NIC is not getting IP address from the network

2013-04-17 Thread Filipe Manco
Just to make it clear, do you see both NICs on the VM?

Filipe Manco
http://about.me/fmanco


2013/4/17 Anil Vishnoi vishnoia...@gmail.com

 Hi All,

 I created two network, one private network for project (say
 'TestProject'), and second network at admin level, but its shared network.

 I spawned one VM and connected it to private network. VM successfully
 boots up and get the IP address from its respective private network DHCP.

 I spawned second VM and connected it to the shared network, and it boots
 up successfully and gets IP address from the shared network DHCP.

 I spawned third VM and connected it to both the network, and the order was
 1.Private Network 2. Shared Network. VM boots up successfully but only gets
 the IP address from the private network DHCP and second NIC didn't receive
 any ip from shared network DHCP.

 Next i spawned fourth VM and this time i changed the ordering of NIC, 1.
 Shared Network 2.Private Network. VM gets the IP address from shared
 network DHCP but not from private network DHCP. Looks like whatever first
 network you add while creating VM, it will just make DHCP request for the
 first network only. Is this expected behavior ? My understanding is both
 the NIC should get IP address if DHCP is enabled for the connected networks.

 Few point i want to mention

 * I am using cirros VM image
 * Meta data service is running on my network node, but i am still not able
 to reach the mata data service.

 Please let me know if further details are needed for debugging this issue.
 Thanks in advance!!!

 --
 Thanks
 Anil

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum][Grizzly] Second NIC is not getting IP address from the network

2013-04-17 Thread Aaron Rosen
Hi,

The cirros image only starts the dhcp client on the eth0 interface. If you
have a vm with multiple interfaces you need to manually run udhcpc -i
<interface>, or change the network configuration file, in order to start the
dhcp client on the other interfaces.
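
For example, inside the guest, something like this (interface name assumed):

    sudo ifconfig eth1 up
    sudo udhcpc -i eth1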

Aaron


On Wed, Apr 17, 2013 at 11:44 AM, Anil Vishnoi vishnoia...@gmail.comwrote:

 Hi All,

 I created two network, one private network for project (say
 'TestProject'), and second network at admin level, but its shared network.

 I spawned one VM and connected it to private network. VM successfully
 boots up and get the IP address from its respective private network DHCP.

 I spawned second VM and connected it to the shared network, and it boots
 up successfully and gets IP address from the shared network DHCP.

 I spawned third VM and connected it to both the network, and the order was
 1.Private Network 2. Shared Network. VM boots up successfully but only gets
 the IP address from the private network DHCP and second NIC didn't receive
 any ip from shared network DHCP.

 Next i spawned fourth VM and this time i changed the ordering of NIC, 1.
 Shared Network 2.Private Network. VM gets the IP address from shared
 network DHCP but not from private network DHCP. Looks like whatever first
 network you add while creating VM, it will just make DHCP request for the
 first network only. Is this expected behavior ? My understanding is both
 the NIC should get IP address if DHCP is enabled for the connected networks.

 Few point i want to mention

 * I am using cirros VM image
 * Meta data service is running on my network node, but i am still not able
 to reach the mata data service.

 Please let me know if further details are needed for debugging this issue.
 Thanks in advance!!!

 --
 Thanks
 Anil

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] json - Static Large Objects

2013-04-17 Thread David Goetz
Here's a little script I have to test it out on my SAIO:

#!/bin/bash
export PASS='AUTH_tk36accf5200b143dd8883b9841965e6a2'
export URL='http://127.0.0.1:8080/v1/AUTH_dfg'

# create the container and two small segment objects
curl -i -H "X-Auth-Token: $PASS" $URL/hat -XPUT

curl -i -H "X-Auth-Token: $PASS" $URL/hat/one -XPUT -d '1'

curl -i -H "X-Auth-Token: $PASS" $URL/hat/two -XPUT -d '2'

# build the manifest JSON and PUT it with ?multipart-manifest=put
echo `python -c 'import simplejson; print simplejson.dumps([{"path": "/hat/one", "etag": "b0baee9d279d34fa1dfd71aadb908c3f", "size_bytes": 5}, {"path": "/hat/two", "etag": "3d2172418ce305c7d16d4b05597c6a59", "size_bytes": 5}])'` | curl -i -H "X-Auth-Token: $PASS" "$URL/hat/man?multipart-manifest=put" -XPUT -H "content-type:text/plain" -T -


you'd just need to switch out the PASS and URL with whatever you're using.  It 
creates a SLO object in $URL/hat/man. Oh- you'd also need to change your 
minimum segment size in your /etc/swift/proxy-server.conf if you wanted this to 
work… something like this:

[filter:slo]
use = egg:swift#slo
min_segment_size = 1


I also added support for Static Large Objects in python-swiftclient: 
https://github.com/openstack/python-swiftclient for example:

swift upload testcontainer testfile -S 1048576 --use-slo

creates a SLO object with 1MB segments.
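
If you want to look at the manifest you uploaded rather than the concatenated
object, a GET with the same query parameter works, e.g. (same $PASS/$URL as in
the script above):

curl -i -H "X-Auth-Token: $PASS" "$URL/hat/man?multipart-manifest=get"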

David


On Apr 17, 2013, at 1:22 PM, david.loy wrote:

 This is my first post to this list so this may not be the appropriate place 
 to ask this question:
 
 I am trying to  upload a Static Large Object and have not been successful. I 
 believe the problem is the json format I'm using.
 
 The document description:
 http://docs.openstack.org/developer/swift/misc.html#deleting-a-large-object
 
 shows:
 
 json:
 [{path: /cont/object,
  etag: etagoftheobjectsegment,
  size_bytes: 1048576}, ...]
 
 which is not legal json.
 
 If anyone can send me a working json example for SLO I would appreciate. If 
 XML is supported,
 that would also be useful.
 
 Any help would really be appreciated.
 
 Thanks
 David
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum][Grizzly] Second NIC is not getting IP address from the network

2013-04-17 Thread Aaron Rosen
Yes, it will get the same ip from the dhcp server.


On Wed, Apr 17, 2013 at 12:09 PM, Anil Vishnoi vishnoia...@gmail.comwrote:

 Thanks Aaron for quick response.

 OS dashboard shows that both the NIC got their IP addresses from
 respective DHCP servers. It also shows the second IP address which actually
 is requested by Cirros VM image. If i run udhcp manually from the VM, will
 it get the same IP address from its DHCP server? If yes, than i think there
 is no issue, but if not then i think it will create inconsistency on what
 dashboard is showing and what actual IP address it got assigned.




 On Thu, Apr 18, 2013 at 12:21 AM, Aaron Rosen aro...@nicira.com wrote:

 Hi,

 The cirros image only starts the dhcp client on the eth0 interface. If
 you have a vm with multiple interfaces you need to manually run udhcp -i
 interface or change the network configuration file in order to start the
 dhcp client for you .

 Aaron


 On Wed, Apr 17, 2013 at 11:44 AM, Anil Vishnoi vishnoia...@gmail.comwrote:

 Hi All,

 I created two network, one private network for project (say
 'TestProject'), and second network at admin level, but its shared network.

 I spawned one VM and connected it to private network. VM successfully
 boots up and get the IP address from its respective private network DHCP.

 I spawned second VM and connected it to the shared network, and it boots
 up successfully and gets IP address from the shared network DHCP.

 I spawned third VM and connected it to both the network, and the order
 was 1.Private Network 2. Shared Network. VM boots up successfully but only
 gets the IP address from the private network DHCP and second NIC didn't
 receive any ip from shared network DHCP.

 Next i spawned fourth VM and this time i changed the ordering of NIC, 1.
 Shared Network 2.Private Network. VM gets the IP address from shared
 network DHCP but not from private network DHCP. Looks like whatever first
 network you add while creating VM, it will just make DHCP request for the
 first network only. Is this expected behavior ? My understanding is both
 the NIC should get IP address if DHCP is enabled for the connected networks.

 Few point i want to mention

 * I am using cirros VM image
 * Meta data service is running on my network node, but i am still not
 able to reach the mata data service.

 Please let me know if further details are needed for debugging this
 issue. Thanks in advance!!!

 --
 Thanks
 Anil

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Thanks
 Anil

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum][Grizzly] Second NIC is not getting IP address from the network

2013-04-17 Thread Anil Vishnoi
Aaron, hopefully my final follow-up question :) Is this the case with all
cloud linux images, or is it just specific to the cirros image?


On Thu, Apr 18, 2013 at 12:41 AM, Aaron Rosen aro...@nicira.com wrote:

 Yes, it will get the same ip from the dhcp server.


 On Wed, Apr 17, 2013 at 12:09 PM, Anil Vishnoi vishnoia...@gmail.comwrote:

 Thanks Aaron for quick response.

 OS dashboard shows that both the NIC got their IP addresses from
 respective DHCP servers. It also shows the second IP address which actually
 is requested by Cirros VM image. If i run udhcp manually from the VM, will
 it get the same IP address from its DHCP server? If yes, than i think there
 is no issue, but if not then i think it will create inconsistency on what
 dashboard is showing and what actual IP address it got assigned.




 On Thu, Apr 18, 2013 at 12:21 AM, Aaron Rosen aro...@nicira.com wrote:

 Hi,

 The cirros image only starts the dhcp client on the eth0 interface. If
 you have a vm with multiple interfaces you need to manually run udhcp -i
 interface or change the network configuration file in order to start the
 dhcp client for you .

 Aaron


 On Wed, Apr 17, 2013 at 11:44 AM, Anil Vishnoi vishnoia...@gmail.comwrote:

 Hi All,

 I created two network, one private network for project (say
 'TestProject'), and second network at admin level, but its shared network.

 I spawned one VM and connected it to private network. VM successfully
 boots up and get the IP address from its respective private network DHCP.

 I spawned second VM and connected it to the shared network, and it
 boots up successfully and gets IP address from the shared network DHCP.

 I spawned third VM and connected it to both the network, and the order
 was 1.Private Network 2. Shared Network. VM boots up successfully but only
 gets the IP address from the private network DHCP and second NIC didn't
 receive any ip from shared network DHCP.

 Next i spawned fourth VM and this time i changed the ordering of NIC,
 1. Shared Network 2.Private Network. VM gets the IP address from shared
 network DHCP but not from private network DHCP. Looks like whatever first
 network you add while creating VM, it will just make DHCP request for the
 first network only. Is this expected behavior ? My understanding is both
 the NIC should get IP address if DHCP is enabled for the connected 
 networks.

 Few point i want to mention

 * I am using cirros VM image
 * Meta data service is running on my network node, but i am still not
 able to reach the mata data service.

 Please let me know if further details are needed for debugging this
 issue. Thanks in advance!!!

 --
 Thanks
 Anil

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Thanks
 Anil





-- 
Thanks
Anil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] New schema for LDAP + Keystone Grizzly?

2013-04-17 Thread spzala

Hi Marcelo,

There is an open bug for a similar problem. I have found a workaround:
you need to manually create an entry for the default domain in your
tree under the new dn (ou=Domains) you have created. Something like,

dn: cn=default,ou=Domains,dc=openstack,dc=org
objectClass: groupOfNames
description: some description
ou: Default
member: cn=dumb,dc=nonexistent
cn: default
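
Loading that entry with the standard OpenLDAP tools would look something like
this (bind DN and file name are just examples):

ldapadd -x -D "cn=admin,dc=openstack,dc=org" -W -f default_domain.ldif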

Hopefully this will take care of the problem.

Thanks!

Regards,
Sahdev Zala
IBM SWG



Quoting Marcelo Mariano Miziara marcelo.mizi...@serpro.gov.br:


Hello to all!

Before the release of version grizzly 3, the suggested schema in the
openstack documentation
(http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html)
worked fine. This is the suggested schema:
dn: cn=openstack,cn=org
dc: openstack
objectClass: dcObject
objectClass: organizationalUnit
ou: openstack

dn: ou=Groups,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: ou=Users,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: users

dn: ou=Roles,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: roles

But after the release of the version grizzly 3 I think that's not enough
anymore, mainly because of the domain concept.

I'm kind of lost trying to make LDAP work with keystone now...does anyone
succeed in this?

I created a new dn, something like:

dn: ou=Domains,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: Domains

But when I run the keystone-manage db_sync the default domain isn't created
in the LDAP... When I manually create the domain in there, I have a problem
with authentication...

I think I must be doing something wrong, does anyone have a light?

Thanks in advance,
Marcelo M. Miziara
marcelo.mizi...@serpro.gov.br  -





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] Query regarding floating IP configuration

2013-04-17 Thread Anil Vishnoi
Hi All,

I am trying to setup openstack in my lab, where i have a plan to run
Controller+Network node on one physical machine and two compute node.
Controller/Network physical machine has 2 NIc, one connected to externet
network (internet) and second nic is on private network.

The OS Network Administrator Guide says: "The node running quantum-l3-agent
should not have an IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of IP addresses from
the external network that can be used by OpenStack Networking for routers
that uplink to the external network." So my confusion is: if I want to send
any REST API call to my controller/network node from the external network,
I obviously need a public IP address. But the instruction I quoted says that
we should not have a manually configured IP address on that NIC.

Does it mean we can't create a floating IP pool in this kind of setup? Or do
we need 3 NICs: 1 for the private network, 1 for floating IP pool creation and
1 for external access to the machine?

Or is it that we can assign the public IP address to br-ex and remove it from
the physical NIC? Please let me know if my query is not clear.
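
What I have in mind is roughly the usual single-NIC pattern, i.e. something
like this (the interface name and address here are only examples):

ovs-vsctl add-port br-ex eth1
ip addr del 203.0.113.10/24 dev eth1
ip addr add 203.0.113.10/24 dev br-ex
ip link set br-ex up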
-- 
Thanks
Anil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack Multi Node Problems with OVS

2013-04-17 Thread Steve Heistand
hi folks,

so an update to the networking issues various people have been seeing.

I updated the openvswitch package from source instead of apt-get and
things are much better now.

The errors/warnings in the various log files are now gone.
the ovs-dpctl works now without complaint.

I'm still not able to get the VMs to talk to the outside world but
I'm beginning to think it's a fault in my understanding.

I have a node that is both controller and network node and is the gateway
for the rest of the compute nodes. I suspect that since I only have 1
routable IP address from our network folks I've configured things
wrong.

s

-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] which network controller is the best for quantum grizzly?

2013-04-17 Thread Liu Wenmao
hi Heiko:

My network topology is very simple: a router connecting two subnets, where
each VM in the two subnets can ping the others.

So it needs L3 routing, and I also need namespaces for the quantum
configuration. Is there a controller suitable for such a scenario?
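
In quantum CLI terms the topology is just something like this (names and CIDRs
made up):

quantum net-create net1
quantum subnet-create --name subnet1 net1 10.0.1.0/24
quantum net-create net2
quantum subnet-create --name subnet2 net2 10.0.2.0/24
quantum router-create router1
quantum router-interface-add router1 subnet1
quantum router-interface-add router1 subnet2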

Thanks.


On Wed, Apr 17, 2013 at 8:16 PM, Heiko Krämer i...@honeybutcher.de wrote:

  Hi Wenmao,

  i think you should plan your network topology first and after that you
  can decide which controller is the best choice for you.

 Greetings
 Heiko


 On 17.04.2013 14:01, Liu Wenmao wrote:

  I have tried floodlight, but it does not support namespaces, so I wonder if
  there is a better network controller to use with quantum (nox, ryu ...)?

  Wenmao Liu



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Cinder + Live migration

2013-04-17 Thread Unmesh Gurjar
Hi Paras,

AFAIK, this should be possible in grizzly.
Here is the related bug: https://bugs.launchpad.net/nova/+bug/1074054
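
The command itself is the usual one, e.g. (instance and host names assumed):

nova live-migration <instance-uuid> <target-compute-host>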


-- 
Thanks  Regards,
Unmesh G.

On Wed, Apr 17, 2013 at 10:04 PM, Paras pradhan pradhanpa...@gmail.comwrote:

 Hi,

 If we boot a instance from cinder volume. Is it possible to setup live
 migrations? Hypervisor would be KVM

 Thanks
 Paras.


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #21

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/21/
  Project: precise_havana_keystone_trunk
  Date of build: Wed, 17 Apr 2013 02:31:37 -0400
  Build duration: 2 min 11 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Fixed unicode username user creation error (by cbjchen)
    edit: tests/test_backend.py
    edit: keystone/common/sql/core.py
Console Output
  [...truncated 2505 lines...]
  ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304170231~precise-0ubuntu1.dsc']' returned non-zero exit status 2
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #22

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/22/
  Project: precise_havana_keystone_trunk
  Date of build: Wed, 17 Apr 2013 03:01:40 -0400
  Build duration: 2 min 31 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Fixed logging usage instead of LOG (by review)
    edit: keystone/common/wsgi.py
Console Output
  [...truncated 2508 lines...]
  ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304170302~precise-0ubuntu1.dsc']' returned non-zero exit status 2
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_nova_trunk #988

2013-04-17 Thread openstack-testing-bot
Title: raring_grizzly_nova_trunk
General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/988/
  Project: raring_grizzly_nova_trunk
  Date of build: Wed, 17 Apr 2013 09:01:39 -0400
  Build duration: 36 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Fix a typo in attach_interface error path (by Chuck Short)
    edit: nova/compute/manager.py
  Fix _error_out_instance exception handler (by Chuck Short)
    edit: nova/compute/manager.py
Console Output
  Commencing build of Revision 6ac1592276351d1292efee43f2ed74aace1caa2f (remotes/origin/stable/grizzly)
  ERROR:root:Cloud Archive installation only supported on: ['precise']
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_nova_trunk #983

2013-04-17 Thread openstack-testing-bot
Title: precise_grizzly_nova_trunk
General Information
  BUILD SUCCESS
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/983/
  Project: precise_grizzly_nova_trunk
  Date of build: Wed, 17 Apr 2013 09:01:38 -0400
  Build duration: 10 min
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: 4 out of the last 5 builds failed. (score: 20)
Changes
  Fix a typo in attach_interface error path (by Chuck Short)
    edit: nova/compute/manager.py
  Fix _error_out_instance exception handler (by Chuck Short)
    edit: nova/compute/manager.py
Console Output
  [...truncated 18383 lines...]
  INFO:root:Storing current commit for next build: da1763bd5c9b550daecf44646fe766050b831a6e
  sbuild -d precise-grizzly -n -A nova_2013.1+git201304170902~precise-0ubuntu1.dsc
  dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing nova_2013.1+git201304170902~precise-0ubuntu1_source.changes
  reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly nova_2013.1+git201304170902~precise-0ubuntu1_amd64.changes
  Email was triggered for: Fixed
  Trigger Success was overridden by another trigger and will not send an email.
  Sending email for trigger: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_python-novaclient_trunk #106

2013-04-17 Thread openstack-testing-bot
Title: precise_grizzly_python-novaclient_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-novaclient_trunk/106/
Project: precise_grizzly_python-novaclient_trunk
Date of build: Wed, 17 Apr 2013 16:31:36 -0400
Build duration: 2 min 13 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Cleanup unused import (by gtt116)
  edit tests/v1_1/test_cloudpipe.py
  edit tests/v1_1/test_fixed_ips.py
  edit tests/v1_1/test_certs.py
  edit novaclient/v1_1/cloudpipe.py
  edit novaclient/v1_1/coverage_ext.py
  edit novaclient/v1_1/contrib/baremetal.py
  edit tests/v1_1/test_networks.py
  edit setup.py
  edit tests/v1_1/test_floating_ip_dns.py
  edit tests/v1_1/test_fping.py
  edit doc/source/conf.py
  edit novaclient/client.py
  edit tests/v1_1/test_coverage_ext.py

Console Output
[...truncated 1919 lines...]
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes']
File "pool/main/p/python-novaclient/python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_all.deb" is already registered with different checksums!
md5 expected: 101893ca7eeab0b8379f15d5005fe53d, got: 8fe5f425a09ed16c5c002251ff7d18e2
sha1 expected: c3abe0455f565d1bfea6e49a351876a444999a3e, got: 602809f5dee9fa0d7dc5f0236ca465dae35c94fa
sha256 expected: 2098a2d7de33209e7d91b4a9118d8550ff9ff5f66826ed60e2db95912285a0bd, got: 22a0bc175cb713aec094532e28393b34280229a19bb2b18acc9b6485d1e7ebcd
size expected: 85090, got: 85214
There have been errors!
ERROR:root:Error occurred during package creation/build: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254
ERROR:root:Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-novaclient/grizzly /tmp/tmpPiabRO/python-novaclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpPiabRO/python-novaclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 69f9971da54b46a8883148e4cef6346c7933b6ec..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [dccdd02] Cleanup unused import
dch -a [c230812] Make --vlan option work in network-create in VLAN mode
dch -a [e8b665e] Support force update quota
dch -a [2a495c0] make sure .get() also updates _info
dch -a [328805f] Add coverage-reset command to reset Nova coverage data.
dch -a [1216a32] Fixing shell command 'service-disable' description
dch -a [8ce2330] Fix problem with nova --version
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
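The failure above is reprepro refusing to re-include a .deb whose checksums differ from a file of the same name already registered in the pool, which happens when the same version number is rebuilt with different contents. A minimal sketch of one way to clear the conflict before re-running the include, assuming shell access on the builder; the archive path, distribution, and package name are taken from the log above, and the list/remove steps are an illustration rather than part of the original job:

# Show what the archive currently has registered for this package.
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt list precise-grizzly python-novaclient
# Drop the conflicting binary so the pool file can be replaced.
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt remove precise-grizzly python-novaclient
# Re-run the include that failed.
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly python-novaclient_2.13.0.17.gdccdd02+git201304171631~precise-0ubuntu1_amd64.changes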
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net

[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #23

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/23/
Project: precise_havana_keystone_trunk
Date of build: Wed, 17 Apr 2013 18:01:43 -0400
Build duration: 2 min 28 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Removed unused imports (by dolph.mathews)
  edit keystone/identity/backends/ldap/core.py
  edit tests/test_auth_plugin.py
  edit keystone/trust/controllers.py
  edit keystone/trust/routers.py
  edit keystone/auth/plugins/token.py
  edit tests/test_v3_protection.py
  edit keystone/identity/backends/sql.py
  edit tests/test_backend_ldap.py
  edit tests/test_ipv6.py
  edit tests/test_keystoneclient.py
  edit keystone/trust/backends/kvs.py
  edit tests/_ldap_livetest.py
  edit keystone/auth/routers.py
  edit tests/test_auth.py
  edit tests/test_backend.py
  edit tests/test_catalog.py
  edit keystone/trust/core.py
  edit tests/_ldap_tls_livetest.py
  edit keystone/policy/backends/sql.py

Console Output
[...truncated 2535 lines...]
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304171802~precise-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304171802~precise-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmpzCIRqs/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmpzCIRqs/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log a40f7fe155f2246eaa03b616ea01437da7759587..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304171802~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [335470d] Removed unused imports
dch -a [fccfa39] Fixed logging usage instead of LOG
dch -a [78dcfc6] Fixed unicode username user creation error
dch -a [a62d3af] Fix token ids for memcached
dch -a [61629c3] Use is_enabled() in folsom->grizzly upgrade (bug 1167421)
dch -a [28ef9cd] Generate HTTPS certificates with ssl_setup.
dch -a [cbac771] Fix for configuring non-default auth plugins properly
dch -a [e4ec12e] Add TLS Support for LDAP
dch -a [f846e28] Clean up duplicate methods
dch -a [5c217fd] use the openstack test runner
dch -a [b033538] Fix 401 status response
dch -a [a65f737] Add missing colon for documentation build steps.
dch -a [b94f62a] Use string for port in default endpoints (bug 1160573)
dch -a [6f88699] Remove un-needed LimitingReader read() function.
dch -a [e16742b] residual grants after delete action (bug1125637)
dch -a [0b4ee31] catch errors in wsgi.Middleware.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC keystone_2013.2+git201304171802~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A keystone_2013.2+git201304171802~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304171802~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304171802~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
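The truncated log only shows sbuild exiting with status 2; the actual compile or test error sits above the cut. A small sketch of reproducing the source-package build outside Jenkins to capture a complete log, assuming an sbuild chroot for precise-havana exists locally as it does on the builder (the tee redirection is illustrative, not part of the original job):

# Rebuild the same .dsc locally and keep the full build log for inspection.
sbuild -d precise-havana -n -A keystone_2013.2+git201304171802~precise-0ubuntu1.dsc 2>&1 | tee keystone-sbuild.log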
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #24

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/24/
Project: precise_havana_cinder_trunk
Date of build: Wed, 17 Apr 2013 18:01:48 -0400
Build duration: 2 min 49 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Add parsing to extra-specs key check (by john.griffith)
  edit cinder/volume/drivers/solidfire.py

Console Output
[...truncated 1380 lines...]
DEBUG:root:['bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']
Building using working tree
Building package in merge mode
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmpNeoSfI
bzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patch
patching file tools/pip-requires
Hunk #1 FAILED at 18.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch fix_cinder_dependencies.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-3bdb0304-eaa0-4f20-ab2b-0d4d1fc13b2d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-3bdb0304-eaa0-4f20-ab2b-0d4d1fc13b2d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpNeoSfI/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmpNeoSfI/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log -n5 --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304171803~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [cc7fe54] Add missing space to "volumes already consumed" message
dch -a [a95a214] Add capabilities reporting to ThinLVM driver
dch -a [0d8f269] NetApp: Fix failing NetApp tests
dch -a [e64f664] Use VERSION var for volume_stats version (Gluster/NFS)
dch -a [0e3ea4e] Add parsing to extra-specs key check
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-3bdb0304-eaa0-4f20-ab2b-0d4d1fc13b2d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-3bdb0304-eaa0-4f20-ab2b-0d4d1fc13b2d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
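This build fails before anything is compiled: the packaging patch fix_cinder_dependencies.patch no longer applies because upstream changed tools/pip-requires. The usual remedy is to refresh the patch against the new source. A rough sketch with quilt, assuming a checkout of the packaging branch (lp:~openstack-ubuntu-testing/cinder/grizzly, per the log) with its patches under debian/patches; the exact paths and steps are an illustration, not the project's documented procedure:

cd cinder
export QUILT_PATCHES=debian/patches
quilt push -f      # force-apply fix_cinder_dependencies.patch; rejects land in tools/pip-requires.rej
# hand-merge the rejected hunk into tools/pip-requires, then:
quilt refresh      # regenerate the patch against the current source
quilt pop -a       # unapply everything before committing the refreshed patch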
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_nova_trunk #77

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/77/
Project: precise_havana_nova_trunk
Date of build: Wed, 17 Apr 2013 18:03:07 -0400
Build duration: 9 min 51 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 2 out of the last 5 builds failed. (score: 60)

Changes
No Changes

Console Output
[...truncated 390 lines...]
Receiving objects:   8% (14400/169580), 3.68 MiB | 5 KiB/s
[... repeated git clone progress output up to 10% (17402/169580), 4.25 MiB ...]
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
	at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
	... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
	at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2236)
	at hudson.remoting.UserRequest.perform(UserRequest.java:118)
	at hudson.remoting.UserRequest.perform(UserRequest.java:48)
	at hudson.remoting.Request$2.run(Request.java:326)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
	at hudson.remoting.Engine$1$1.run(Engine.java:60)
	at java.lang.Thread.run(Thread.java:722)
-- 
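Here the clone died mid-transfer (curl error 56, failure receiving network data), so Jenkins never got the nova repository; the next build, #78 below, succeeded on retry. When this recurs, a common workaround is to keep each transfer small by cloning shallowly and deepening afterwards. A sketch only; the repository URL is assumed for illustration, since the truncated log does not show which remote this job clones:

# Clone only the tip first, then pull the rest of the history in a second, smaller transfer.
git clone --depth 1 https://github.com/openstack/nova.git
cd nova
git fetch --unshallow    # on older git, deepen with e.g. git fetch --depth=1000000 instead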
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_nova_trunk #78

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/78/
Project: precise_havana_nova_trunk
Date of build: Wed, 17 Apr 2013 18:31:33 -0400
Build duration: 8 min 16 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. (score: 80)

Changes
removing unused variable from a test (by tilottama.gaat)
  edit nova/tests/test_hypervapi.py

Console Output
[...truncated 35426 lines...]
git log 0f5261c670f0f0d2e203a2ad54d6b62dfee980a1..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304171832~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [c18df8a] removing unused variable from a test
dch -a [55e4d4a] Remove orphaned db method instance_test_and_set
dch -a [f0f18c7] Imported Translations from Transifex
dch -a [059a05d] Imported Translations from Transifex
dch -a [fd26589] Make sure confirm_resize finishes before setting vm_state to ACTIVE
dch -a [188f306] Make compute/manager use conductor for unrescue()
dch -a [262b285] Add an extension to show the mac address of a ip in server(s)
dch -a [45ce810] Cleans up orphan compute_nodes not cleaned up by compute manager
dch -a [5182bca] Allow for the power state interval to be configured.
dch -a [b5cf22d] Imported Translations from Transifex
dch -a [7172fdc] Fix bug in os-availability-zone extension.
dch -a [5f68160] Remove unnecessary db call in scheduler driver live-migration code
dch -a [64a0bca] baremetal: Change node api related to prov_mac_address
dch -a [842a6ac] Don't join metadata twice in instance_get_all()
dch -a [328ece2] Imported Translations from Transifex
dch -a [c1ef86e] Don't hide stacktraces for unexpected errors in rescue
dch -a [7f9874b] Fix issues with check_instance_shared_storage.
dch -a [588b565] Remove "undefined name" pyflake errors
dch -a [8de3502] Optimize some of compute/manager's periodic tasks' DB queries
dch -a [e728394] Optimize some of the periodic task database queries in n-cpu
dch -a [ba9cd2a] Change DB API instance functions for selective metadata fetching
dch -a [c3568f9] Replace metadata joins with another query
dch -a [a2a9f16] xenapi: Make _connect_volume exc handler eventlet safe
dch -a [fa291f0] Fix typo: libvir => libvirt
dch -a [bc3d61d] Remove multi scheduler.
dch -a [17ba935] Remove unnecessary LOG initialisation
dch -a [c8ce0ce] Remove unnecessary parens.
dch -a [2afb205] Simplify random host choice.
dch -a [5e7ef21] Add NOVA_LOCALEDIR env variable
dch -a [862aec3] Imported Translations from Transifex
dch -a [38e8e8b] Clarify volume related exception message
dch -a [9f51df6] Cleanup trailing whitespace in api samples.
dch -a [676b16b] Fix error message in pre_live_migration.
dch -a [a62e623] set timeout for paramiko ssh connection
dch -a [152e460] Define LOG globally in baremetal_deploy_helper
dch -a [1214941] baremetal: Integrate provisioning and non-provisioning interfaces
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.2+git201304171832~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A nova_2013.2+git201304171832~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana nova_2013.2+git201304171832~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana nova_2013.2+git201304171832~precise-0ubuntu1_amd64.changes
+ [ 0 != 0 ]
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #24

2013-04-17 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/24/
Project: precise_havana_keystone_trunk
Date of build: Wed, 17 Apr 2013 23:01:37 -0400
Build duration: 3 min 4 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
bug 1159888 broken links in rst doc (by jsavak)
  edit doc/source/configuringservices.rst
  edit doc/source/developing.rst
  edit doc/source/middlewarearchitecture.rst
  edit doc/source/setup.rst
  edit doc/source/installing.rst
Sync with oslo-incubator copy of setup.py (by review)
  edit keystone/openstack/common/setup.py
Remove non-production middleware from sample pipelines (by dolph.mathews)
  edit etc/keystone.conf.sample
What is this for? (by dolph.mathews)
  edit keystone/token/controllers.py

Console Output
[...truncated 2547 lines...]
bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmpBLF_tt/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmpBLF_tt/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log a40f7fe155f2246eaa03b616ea01437da7759587..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304172301~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [2d7f991] What is this for?
dch -a [335470d] Removed unused imports
dch -a [9f7b370] Remove non-production middleware from sample pipelines
dch -a [fccfa39] Fixed logging usage instead of LOG
dch -a [8c67341] Sync with oslo-incubator copy of setup.py
dch -a [78dcfc6] Fixed unicode username user creation error
dch -a [a62d3af] Fix token ids for memcached
dch -a [61629c3] Use is_enabled() in folsom->grizzly upgrade (bug 1167421)
dch -a [28ef9cd] Generate HTTPS certificates with ssl_setup.
dch -a [cbac771] Fix for configuring non-default auth plugins properly
dch -a [e4ec12e] Add TLS Support for LDAP
dch -a [f846e28] Clean up duplicate methods
dch -a [5c217fd] use the openstack test runner
dch -a [b033538] Fix 401 status response
dch -a [a65f737] Add missing colon for documentation build steps.
dch -a [b94f62a] Use string for port in default endpoints (bug 1160573)
dch -a [1121b8d] bug 1159888 broken links in rst doc
dch -a [6f88699] Remove un-needed LimitingReader read() function.
dch -a [e16742b] residual grants after delete action (bug1125637)
dch -a [0b4ee31] catch errors in wsgi.Middleware.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC keystone_2013.2+git201304172301~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A keystone_2013.2+git201304172301~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304172301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304172301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_python-keystoneclient_trunk #103

2013-04-17 Thread openstack-testing-bot
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_python-keystoneclient_trunk #110

2013-04-17 Thread openstack-testing-bot
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp