Re: [Openstack] Understanding flavors of VM

2012-12-05 Thread Marco CONSONNI
Hello Ahmed,

Good investigation: there's something I knew and something I didn't.

As far as I understand, the _base directory should be a cache for images, NOT a
directory used for instances.

I mean, compute nodes keep an image cache to avoid downloading from
Glance every time they need to start an instance.

To be honest, it seems like I missed something because, from your
investigation, the storage is kept under _base. Strange. I didn't know that.
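For what it's worth, the cache filenames in Ahmed's listing below look like SHA-1 digests. Here is a small sketch of how such names could be derived; this is an assumption inferred from the listing, not a quote of nova's code (the era's libvirt image cache appears to hash the Glance image ID, with resized copies getting a `_<size>` suffix):

```python
import hashlib

def base_image_name(image_id, size_gb=None):
    """Guess at the cache filename used under instances/_base.

    Assumption: the base file is the SHA-1 hex digest of the Glance
    image ID, and a copy resized to the flavor's root disk size gets a
    ``_<size>`` suffix (matching names like ``8af61c9e..._20``).
    """
    name = hashlib.sha1(image_id.encode("utf-8")).hexdigest()
    if size_gb is not None:
        name += "_%d" % size_gb
    return name

print(base_image_name("my-image-id"))      # 40-char hex digest
print(base_image_name("my-image-id", 20))  # same digest with "_20" appended
```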

Thanks,
Marco.




On Tue, Dec 4, 2012 at 6:35 PM, Ahmed Al-Mehdi ahmedalme...@gmail.com wrote:

 Hi Marco,

 This is really good stuff, thank you very much for helping out.  I am
 creating some instances to test out how/where the different storage related
 elements are created.

 I created two VM instances:

 Instance 1 : 20GB boot disk
 Instance 2 : 10GB boot disk, 2 GB Ephemeral disk.

 root@bodega:/var/lib/nova# ls -lh -R instances
 instances:
 total 12K
 drwxrwxr-x 2 nova nova 4.0K Dec  4 09:01 _base
 drwxrwxr-x 2 nova nova 4.0K Nov 28 11:44 instance-0001
 drwxrwxr-x 2 nova nova 4.0K Dec  4 09:01 instance-0002

 instances/_base:
 total 240M
 -rw-r--r-- 1 nova nova  40M Dec  4 08:51
 8af61c9e86557f7244c6e5a2c45e1177c336bd1f
 -rw-r--r-- 1 libvirt-qemu kvm   10G Dec  4 09:01
 8af61c9e86557f7244c6e5a2c45e1177c336bd1f_10
 -rw-r--r-- 1 nova kvm   20G Dec  4 08:51
 8af61c9e86557f7244c6e5a2c45e1177c336bd1f_20
 -rw-rw-r-- 1 nova nova 9.4M Nov 28 11:44
 8af61c9e86557f7244c6e5a2c45e1177c336bd1f.part
 -rw-r--r-- 1 nova nova 2.0G Dec  4 09:01 ephemeral_0_2_None
 ==
 -rw-r--r-- 1 libvirt-qemu kvm  2.0G Dec  4 09:01 ephemeral_0_2_None_2
 =

 instances/instance-0001:
 total 1.9M
 -rw-rw 1 nova kvm   26K Nov 28 11:45 console.log
 -rw-r--r-- 1 libvirt-qemu kvm  1.9M Dec  4 07:01 disk
 -rw-rw-r-- 1 nova nova 1.4K Nov 28 11:44 libvirt.xml

 instances/instance-0002:
 total 1.8M
 -rw-rw 1 libvirt-qemu kvm   27K Dec  4 09:02 console.log
 -rw-r--r-- 1 libvirt-qemu kvm  1.6M Dec  4 09:03 disk
 -rw-r--r-- 1 libvirt-qemu kvm  193K Dec  4 09:01 disk.local
 -rw-rw-r-- 1 nova nova 1.6K Dec  4 09:01 libvirt.xml
 root@bodega:/var/lib/nova#

 It seems all the boot disks and ephemeral disks are created as files in
 /var/lib/nova/instances/_base.  I don't understand why there are two files
 of size 2 GB (lines marked above with =).  I will look into that later
 on.
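One plausible reason the per-instance `disk` files are so small (1.9M) while _base holds the full-size files: with KVM, nova typically creates the instance disk as a qcow2 copy-on-write overlay whose backing file lives in _base (the usual check is `qemu-img info disk`). As a sketch, the backing-file name can even be read straight out of the qcow2 header; the offsets below follow the qcow2 format specification, and whether a given `disk` file is qcow2 at all is an assumption:

```python
import struct

def qcow2_backing_file(path):
    """Return the backing-file path recorded in a qcow2 header, or None."""
    with open(path, "rb") as f:
        header = f.read(1024)
    if header[:4] != b"QFI\xfb":   # qcow2 magic bytes
        return None                 # not qcow2 (e.g. a raw image)
    offset, = struct.unpack(">Q", header[8:16])   # backing_file_offset
    size, = struct.unpack(">I", header[16:20])    # backing_file_size
    if offset == 0 or size == 0:
        return None                 # standalone image, no backing file
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size).decode()

# e.g.: qcow2_backing_file("/var/lib/nova/instances/instance-0002/disk")
```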

 I am running into an issue creating a volume for which I will post a
 separate message.

 Thank you again very much.

 Regards,
 Ahmed.




 On Tue, Dec 4, 2012 at 8:56 AM, Marco CONSONNI mcocm...@gmail.com wrote:

 Sorry, the directory you need to check is  /var/lib/nova/instances.

 MCo.


 On Tue, Dec 4, 2012 at 5:54 PM, Marco CONSONNI mcocm...@gmail.com wrote:

 Hi Ahmed,

 Very technical questions.
 I'm not sure my answers are right: I'm just a user...

 In order to answer, I've just looked at what happens and made some guesses.
 Feel free to verify yourself.

 I'm assuming you are using KVM as I'm doing.

 The space for the boot disk and the ephemeral disk should be represented
 as files on the physical node where the VM is hosted.
 In order to check that, go to the directory /var/lib/nova on the node where
 the VM is running.
 As far as I understand, this is where nova (and KVM) keep the running
 instances' information.
 You should see a directory for each running instance, named
 instance-xxx, where xxx uniquely identifies an instance (there are
 several ways to uniquely identify an instance, and this is one of many... but
 that is a different story).
 Go into one of these and check what you find.
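Marco's check can also be scripted; here is a small sketch (the path is taken from this thread, and which files count as "disk files" is an assumption) that maps each instance directory to the disk-related files inside it:

```python
import os

def list_instance_disks(root="/var/lib/nova/instances"):
    """Map each instance-XXXX directory under `root` to its disk files."""
    result = {}
    if not os.path.isdir(root):
        return result
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        # skip _base and anything that is not an instance directory
        if entry.startswith("instance-") and os.path.isdir(path):
            result[entry] = sorted(
                f for f in os.listdir(path)
                if f.startswith("disk") or f == "console.log")
    return result

print(list_instance_disks())
```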

 For what concerns nova-scheduler, I don't know exactly what it does. I'm
 afraid you will need to test and see what happens.

 A nova command can help inspect what resources a node is using.

 At the controller node (or any other node where you installed the nova
 client), type the following command, substituting OpenStack02 with the name
 of the node you want to inspect:

 $ nova host-describe OpenStack02


 +-------------+----------------------------------+-----+-----------+---------+
 | HOST        | PROJECT                          | cpu | memory_mb | disk_gb |
 +-------------+----------------------------------+-----+-----------+---------+
 | OpenStack02 | (total)                          | 16  | 24101     | 90      |
 | OpenStack02 | (used_max)                       | 13  | 7680      | 0       |
 | OpenStack02 | (used_now)                       | 13  | 8192      | 0       |
 | OpenStack02 | 456ec9d355ae4feebe48a2e79e703225 | 4   | 2048      | 0       |
 | OpenStack02 | fb434e07b687494bb669fde23f497970 | 9   | 5632      | 0       |
 +-------------+----------------------------------+-----+-----------+---------+

 It returns a brief report of the resources currently used by a node.

 To my knowledge, the dashboard does not provide a similar page for the
 time being.
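If you want to consume that report from a script, the prettytable-style output parses easily. A sketch follows; the column names come from the sample above (they may differ across releases), and it assumes the table is not line-wrapped by your terminal or mail client:

```python
def parse_host_report(text):
    """Parse prettytable output (e.g. `nova host-describe`) into dicts."""
    rows = [l for l in text.splitlines() if l.startswith("|")]
    header = [c.strip() for c in rows[0].strip("|").split("|")]
    return [dict(zip(header, (c.strip() for c in r.strip("|").split("|"))))
            for r in rows[1:]]

SAMPLE = """\
+-------------+------------+-----+-----------+---------+
| HOST        | PROJECT    | cpu | memory_mb | disk_gb |
+-------------+------------+-----+-----------+---------+
| OpenStack02 | (total)    | 16  | 24101     | 90      |
| OpenStack02 | (used_now) | 13  | 8192      | 0       |
+-------------+------------+-----+-----------+---------+
"""

for row in parse_host_report(SAMPLE):
    print(row["PROJECT"], row["cpu"], row["memory_mb"])
```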

 Hope it helps,
 Marco.





Re: [Openstack] Announcing OpenStack Day, 15th December, Bangalore India

2012-12-05 Thread Razique Mahroua
Awesome initiative Atul
Razique Mahroua-Nuage  Corazique.mahr...@gmail.comTel: +33 9 72 37 94 15

On 5 Dec 2012, at 06:04, Atul Jha atul@csscorp.com wrote:

Hi all,

We are organizing a one-day event on OpenStack in Bangalore, India.
Schedule and registration information is available at
http://www.openstack.org/blog/2012/12/announcing-openstack-day-15-december-bangalore-india/

If you're in town you should attend, and if your team is based in India you
should spread the word about the event and ask them to attend.
We are trying to spread the word about OpenStack here, and everyone's help is
much needed/appreciated.

Thanks,
Atul Jha
http://www.csscorp.com/common/email-disclaimer.php
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Cinder] New volume status stuck at Creating after creation in Horizon

2012-12-05 Thread Ahmed Al-Mehdi
I posted the cinder-scheduler log in my first post, but here it is
again.  The entries were generated right around the time I created the
volume.  I am trying to understand the error message VolumeNotFound:
Volume 9dd360bf-9ef2-499f-ac6e-893abf5dc5ce could not be found.  Is this
error message related to the volume_group cinder-volumes, or to the new
volume I just created?


2012-12-04 09:05:02 23552 DEBUG cinder.openstack.common.rpc.
amqp [-] received {u'_context_roles': [u'Member', u'admin'],
u'_context_request_id': u'req-1b122042-c3e4-4c1e-8285-ad148c8c2367',
u'_context
_quota_class': None, u'args': {u'topic': u'cinder-volume', u'image_id':
None, u'snapshot_id': None, u'volume_id':
u'9dd360bf-9ef2-499f-ac6e-893abf5dc5ce'}, u'_context_auth_token':
'SANITIZED', u'_co
ntext_is_admin': False, u'_context_project_id':
u'70e5c14a28a14666a86e85b62ca6ae18', u'_context_timestamp':
u'2012-12-04T17:05:02.375789', u'_context_read_deleted': u'no',
u'_context_user_id': u'386d0
f02d6d045e7ba49d8edac7bb43f', u'method': u'create_volume',
u'_context_remote_address': u'10.176.20.102'} _safe_log
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/common.py:195
2012-12-04 09:05:02 23552 DEBUG cinder.openstack.common.rpc.amqp [-]
unpacked context: {'user_id': u'386d0f02d6d045e7ba49d8edac7bb43f', 'roles':
[u'Member', u'admin'], 'timestamp': u'2012-12-04T17:05:
02.375789', 'auth_token': 'SANITIZED', 'remote_address':
u'10.176.20.102', 'quota_class': None, 'is_admin': False, 'request_id':
u'req-1b122042-c3e4-4c1e-8285-ad148c8c2367', 'project_id': u'70e5c14a
28a14666a86e85b62ca6ae18', 'read_deleted': u'no'} _safe_log
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/common.py:195
2012-12-04 09:05:02 23552 ERROR cinder.openstack.common.rpc.amqp [-]
Exception during message handling
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp Traceback
(most recent call last):
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py,
line 276, in _process_data
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp rval =
self.proxy.dispatch(ctxt, version, method, **args)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py,
line 145, in dispatch
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp return
getattr(proxyobj, method)(ctxt, **kwargs)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/scheduler/manager.py, line 98, in
_schedule
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp
db.volume_update(context, volume_id, {'status': 'error'})
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/db/api.py, line 256, in
volume_update
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp return
IMPL.volume_update(context, volume_id, values)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py, line 124,
in wrapper
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp return
f(*args, **kwargs)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py, line 1071,
in volume_update
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp
volume_ref = volume_get(context, volume_id, session=session)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py, line 124,
in wrapper
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp return
f(*args, **kwargs)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp   File
/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/api.py, line 1014,
in volume_get
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp raise
exception.VolumeNotFound(volume_id=volume_id)
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp
VolumeNotFound: Volume 9dd360bf-9ef2-499f-ac6e-893abf5dc5ce could not be
found.
2012-12-04 09:05:02 23552 TRACE cinder.openstack.common.rpc.amqp

Thank you,
Ahmed.



On Tue, Dec 4, 2012 at 11:10 PM, Huang Zhiteng winsto...@gmail.com wrote:

 Can you check the cinder scheduler log?

 On Wed, Dec 5, 2012 at 1:44 AM, Ahmed Al-Mehdi ahmedalme...@gmail.com
 wrote:
  Hello,
 
  I setup a two node OpenStack setup, one controller-node and one
  compute-node.  I am using Quantum, Cinder services, and KVM for
  virtualization.  I am running into an issue creating a volume through
  Horizon which I will attach to a VM later on.  The status of volume in
  Horizon is stuck at Creating.  The output of cinder list shows
 nothing.
 
  The iscsi service is setup properly, as far as I can tell.  I feel there
 is
  a 

Re: [Openstack] [Cinder] New volume status stuck at Creating after creation in Horizon

2012-12-05 Thread Razique Mahroua
Hi Ahmed,

Can you run:

$ pvdisplay

and:

$ vgdisplay

Can we see /etc/cinder/cinder.conf?

Thanks,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15


Re: [Openstack] Blueprint proposal: Drop setuptools_git for including data/config files

2012-12-05 Thread Sascha Peilicke
On 12/04/2012 11:01 PM, Nah, Zhongyue wrote:
 Is it possible to generate a MANIFEST.in file using setuptools-git? Would
 that address both the simplicity and the efficiency concerns?

Please join the discussion on openstack-dev@. It was my mistake sending
this technical proposal to this ML initially.

 
 Sent from my iPhone
 
 On Dec 4, 2012, at 9:07 PM, Thierry Carrez thie...@openstack.org wrote:
 
 Sascha Peilicke wrote:
 Currently, the majority of OpenStack components make use of the
 Python module setuptools_git in order to install additional
 configuration files. This is basically the same functionality that
 the MANIFEST.in file (setuptools/distribute) provides, but
 automatic.

 Note: This is a development topic, it should (have) be(en) posted to
 openstack-dev to reach the appropriate audience. Please follow-up there.

 However, we recently discovered that this approach has issues from
 a packaging perspective. We weren't getting all the data/config
 files that the python package usually gets even though we were
 running the same commands:

 $ python setup.py build

 followed by:

 $ python setup.py install --skip-build

 We are building RPM packages from release tarballs (such as [1]),
 which of course don't include the .git directory. Therefore the
 setuptools_git approach can't do its magic, thus our package builds
 get wrong results. Having OpenStack components rely on
 setuptools_git at build time means we have to distribute the whole
 git repository along with the source code tarball. Of course this
 makes no sense, since it would increase the size of release
 tarballs dramatically and won't get shipped in distributions
 anyway. Therefore, we (and potentially other distribution vendors)
 would have to track these files manually in our RPM spec files.
 Some reviews have already been opened on the topic (admittedly
 before we discovered the real issue). Given the different outcome
 of each review it seems that not everybody is aware that
 setuptools_git is used or of what it does.

 https://review.openstack.org/#/c/17122/ (ceilometer) - this one
 got accepted before we knew what was going on

 https://review.openstack.org/#/c/17347/ (cinder) - abandoned until
 the situation is clarified

 https://review.openstack.org/#/c/17355/ (nova) - rejected

 So the better solution would be to stop using setuptools_git and
 just include all the data/config files that are meant to be
 distributed in the MANIFEST.in file. This is what every Python
 developer should know about and has the benefit of increased
 transparency about what gets installed and what not. We created a
 blueprint to track this [2].

 Thoughts?
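For reference, the MANIFEST.in alternative proposed above is just a list of include directives checked into the repository. The entries below are illustrative only, not a copy of any project's actual manifest:

```
include AUTHORS ChangeLog LICENSE
include etc/nova/nova.conf.sample
recursive-include etc *
graft tools
```

The trade-off Thierry raises below is that such a list must be maintained by hand, whereas setuptools_git derives it from `git ls-files` automatically.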

 A bit of history here:

 We used to rely on MANIFEST.in to list all files, but people routinely
 forgot to add new files to it. Apparently, not every Python developer
 knows (or cares) about this. The end result was that we
 discovered very late (sometimes after the release itself) that we had
 built incomplete tarballs. As a quick search[1] shows, I have
 personally filed 27 bugs so far on the subject, so it's not a corner case.

 [1] http://bit.ly/TDim7U

 Relying on setuptools_git instead allows us to avoid that issue
 altogether. For the projects that adopted it, this became a non-issue. The
 projects that didn't adopt it yet are still a problem. I was about to
 push setuptools_git support to projects that don't use it yet.

 In summary, I would hate it if we went back to the previous situation.
 I'm not personally attached to setuptools_git, but any proposed
 replacement solution should keep its simplicity.

 -- 
 Thierry Carrez (ttx)
 Release Manager, OpenStack



-- 
With kind regards,
Sascha Peilicke
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nuernberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer HRB 16746 (AG Nürnberg)





Re: [Openstack] Cloud-Init for Windows

2012-12-05 Thread Razique Mahroua
Thanks for that! I will definitely check it.

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 4 Dec 2012, at 19:16, Alessandro Pilotti a...@pilotti.it wrote:

We just released a new project for initializing Windows cloud instances on
OpenStack:

http://www.cloudbase.it/cloud-init-for-windows-instances/

Some quick facts about it:

 • Supports HTTP and ConfigDriveV2 metadata sources
 • Provides out of the box: user creation, password injection, static
   networking configuration, hostname, SSH public keys and userdata scripts
   (PowerShell, Cmd or Bash)
 • It's highly modular and can be easily extended to provide support for
   more features and metadata sources
 • Works on any hypervisor (Hyper-V, KVM, Xen, etc.)
 • Works on Windows Server 2003 / 2003 R2 / 2008 / 2008 R2 / 2012 and
   Windows 7 and 8
 • It's platform independent, meaning that we plan to add other OSs,
   e.g. FreeBSD
 • Written in Python
 • Open source, Apache 2 licensed:
   https://github.com/alexpilotti/cloudbase-init

It's currently in beta status; we are looking for help to test it on various
hypervisor / guest combinations. I'd be glad to answer any question (and fix
any bug)! :-)
We did most of our testing so far on Windows 2008 R2 and Windows 2012 using
ConfigDriveV2 metadata on Grizzly, but we plan to add more platforms to the
tests soon.

IRC: alexpilotti

Thanks!

Alessandro Pilotti
Cloudbase Solutions | CEO
MVP ASP.Net / IIS
Windows Azure Insider
Red Hat Certified Engineer



Re: [Openstack] Cloud-Init for Windows

2012-12-05 Thread yz
very good , thanks


2012/12/5 Razique Mahroua razique.mahr...@gmail.com

 Thanks for that !
 I will def. check it

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 On 4 Dec 2012, at 19:16, Alessandro Pilotti a...@pilotti.it wrote:

 We just released a new project for initializing Windows cloud instances on
 OpenStack:

 http://www.cloudbase.it/cloud-init-for-windows-instances/

 Some quick facts about it:

 • Supports HTTP and ConfigDriveV2 metadata sources
 • Provides out of the box: user creation, password injection, static
 networking configuration, hostname, SSH public keys and userdata scripts
 (Powershell, Cmd or Bash)
 • It’s highly modular and can be easily extended to provide support for
 more features and metadata sources.
 • Works on any hypervisor (Hyper-V, KVM, Xen, etc)
 • Works on Windows Server 2003 / 2003 R2 / 2008 / 2008 R2 / 2012 and
 Windows 7 and 8.
 • It’s platform independent, meaning that we plan to add other OSs, e.g.:
 FreeBSD
 • Written in Python
 • Open source, Apache 2 licensed:
 https://github.com/alexpilotti/cloudbase-init

 It's currently in beta status; we are looking for help to test it on
 various hypervisor / guest combinations. I'd be glad to answer any question
 (and fix any bug)! :-)
 We did most of our testing so far on Windows 2008 R2 and Windows 2012
 using ConfigDriveV2 metadata on Grizzly, but we plan to add more platforms
 to the tests soon.

 IRC: alexpilotti


 Thanks!

 Alessandro Pilotti
 Cloudbase Solutions | CEO
 -
 MVP ASP.Net / IIS
 Windows Azure Insider
 Red Hat Certified Engineer
 -







[Openstack] A confuse about the FlatDHCP network

2012-12-05 Thread Lei Zhang
Hi all,

I am reading
http://docs.openstack.org/trunk/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html,
from which I got the following deployment architecture, but there are several
points I am confused about.

   - How and why does the 192.168.0.0/24 IP range exist? Is it necessary? Does
   eth1 on each physical machine own two IPs (one in 10.0.0.0/24 and one in
   192.168.0.0/24)? Is that possible? On the nova-compute node, eth1 should
   be bridged into br100, and eth1 should not own any IP address, right?
   - As a better practice, should we connect nova-network/eth0 to the public
   internet switch so all VMs can access the internet, and bind
   nova-compute/eth0 to an internal switch for admin access? Is that right?
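For what it's worth, here is a minimal FlatDHCP nova.conf sketch matching the setup described above; the interface names and ranges are assumptions taken from the question, not a verified configuration:

```
# /etc/nova/nova.conf (fragment)
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100
flat_interface=eth1       # enslaved to br100; needs no IP address of its own
public_interface=eth0     # used for floating/public traffic
fixed_range=10.0.0.0/24   # the VMs' fixed IP range
```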

 --
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l


[Openstack] How do VMs connect to cinder-created volumes?

2012-12-05 Thread Skible OpenStack

Hello everyone,

I have cinder installed on my controller node. Can anyone explain to me 
how a VM becomes aware of a volume?


After all, a VM is launched on a node different from the controller one, 
so if the VM wants to store something on the volume, does it have to 
send the data to the controller node, which in turn will save the data 
on the volume using cinder? Or is there a direct connection between the 
VM and the volume after allocation? If yes, is this connection at the 
hypervisor level, or at what level exactly?


Thanks, I'd really appreciate it if you can help me out here!

Best regards,
Fellow Stacker



[Openstack] Communication between Quantum Server and Open vSwitch agent

2012-12-05 Thread Skible OpenStack

Hello everyone,

I am experiencing a problem after activating my firewall on compute 
nodes. My VMs can't be configured, so I guess there is no communication 
between the Quantum components (the Quantum server and the Open vSwitch 
agent).


If I deactivate the firewall, everything goes back to normal! So does 
anyone know what port the Open vSwitch agent is using to communicate 
with the Quantum server?


Regards,
Fellow Stacker




Re: [Openstack] How do VMs connect to cinder-created volumes?

2012-12-05 Thread Razique Mahroua
Hey man,

Actually, the server which runs the cinder-volume service expects an LVM
volume group, and depending on the driver you are using, an LV is carved out
and exposed (let's say via iSCSI) to the node which runs the instance.
The compute driver attaches that volume to the running instance afterwards,
so the instance only sees a raw volume you'd add a filesystem onto.

You can add a layer by holding that LVM volume somewhere else, making
another exposition from the place the VG is to the controller:

LVM PV -> VG -> Cinder manages the LV -> iSCSI (or whatever) exposition to
the compute node -> compute driver adds that volume to the instance ->
which sees it as a raw volume

Regards,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
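The chain Razique describes can be sketched as an ordered plan of host commands. This is illustrative only: the exact lvcreate/tgtadm/iscsiadm invocations, the target ID, and the IQN prefix are assumptions, not cinder's actual code, and they vary by release and driver:

```python
def lvm_iscsi_attach_plan(vg, volume_id, size_gb, target_ip):
    """Rough sequence of host commands behind creating and attaching a
    volume with an LVM+iSCSI setup (a sketch, not cinder's real code)."""
    lv = "volume-%s" % volume_id
    iqn = "iqn.2010-10.org.openstack:%s" % lv   # IQN prefix is an assumption
    return [
        # cinder-volume node: carve a logical volume out of the volume group
        "lvcreate -L %dG -n %s %s" % (size_gb, lv, vg),
        # cinder-volume node: expose the LV as an iSCSI target (tgt)
        "tgtadm --op new --lld iscsi --mode target --tid 1 -T %s" % iqn,
        # compute node: log in to the target with open-iscsi; libvirt then
        # hands the resulting block device to the instance as a raw disk
        "iscsiadm -m node -T %s -p %s:3260 --login" % (iqn, target_ip),
    ]

for cmd in lvm_iscsi_attach_plan("cinder-volumes", "9dd360bf", 2, "10.176.20.102"):
    print(cmd)
```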



Re: [Openstack] Communication between Quantum Server and Open vSwitch agent

2012-12-05 Thread Razique Mahroua
You can insert a logging rule just before the last rule, which drops packets.
With iptables, something like this would do the trick:

iptables -I INPUT (next-to-the-last rule number) -j LOG --log-prefix "blocked packets : "

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15



Re: [Openstack] Communication between Quantum Server and Open vSwitch agent

2012-12-05 Thread Skible OpenStack

  
  
Thank you very much, Mr. Mahroua.

On 05/12/2012 at 11:25, Razique Mahroua wrote:

 You can insert a logging rule just before the last rule, which drops
 packets; with iptables, something like this would do the trick:

 iptables -I INPUT (next-to-the-last rule number) -j LOG --log-prefix "blocked packets : "

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel: +33 9 72 37 94 15



Re: [Openstack] Communication between Qunatum Server and openVSwitch agent

2012-12-05 Thread Razique Mahroua
Sure :)

BTW, for obtaining the line numbers of the rules:

iptables -L -nv --line-numbers
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15
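Putting Razique's two tips together, an illustrative session (the rule number 5 below is made up; use whatever number your own listing reports for the DROP rule):

```shell
# List the INPUT chain with rule numbers to locate the final DROP rule
iptables -L INPUT -nv --line-numbers

# If the DROP rule is, say, number 5, insert the LOG rule at that
# position so it runs just before the DROP
iptables -I INPUT 5 -j LOG --log-prefix "blocked packets : "

# Dropped packets will now show up in the kernel log (dmesg / syslog)
```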

Le 5 déc. 2012 à 11:27, Skible OpenStack skible.openst...@gmail.com a écrit :
  

  
  
  [snip: previous message quoted above]



Re: [Openstack] How does VMs connect to cinder created Volumes ?

2012-12-05 Thread Marco CONSONNI
Hello,

As far as I understand, the communication between a VM, running on one node,
and a volume, 'running' on another node, is carried out by open-iscsi
(the iSCSI client running on the node where you run the VM) and tgt (the
iSCSI server running on the node where you host the volumes).

Cinder's daemons, in particular cinder-volume, just issue commands for tgt
to create volumes and make them available on the network.
After that, when you connect a volume to a VM, very likely the hypervisor
is instructed to retrieve the volume through the iSCSI client.

Anyway, I found out that:

-- the node that hosts the storage MUST run both tgt and cinder-volume
-- cinder-scheduler and cinder-api can be deployed on any other node
-- any compute node MUST run open-iscsi (the iSCSI client) in order to access
volumes
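To make the flow concrete, here is a rough sketch of the kind of commands involved (illustrative only: the target name, LUN, device path and IP are made up, and in practice cinder-volume and the hypervisor run these for you):

```shell
# On the volume node (tgt, the iSCSI server): cinder-volume asks tgt
# to expose the backing device as an iSCSI target, roughly:
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2010-10.org.openstack:volume-0001
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/nova-volumes/volume-0001

# On the compute node (open-iscsi, the client): the host discovers and
# logs in to the target, then hands the new block device to the VM:
iscsiadm -m discovery -t sendtargets -p 172.22.222.2:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-0001 \
         -p 172.22.222.2:3260 --login
```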

Hope it helps,
Marco.


On Wed, Dec 5, 2012 at 10:53 AM, Skible OpenStack 
skible.openst...@gmail.com wrote:

 Hello everyone,

 I have cinder installed on my controller node. Can anyone explain to me
 how a VM becomes aware of the volume?

 After all, a VM is launched on a node different from the controller one. So
 if the VM wants to store something on the volume, does it have to send the
 data to the controller node, which in turn will save the data on the
 volume using cinder? Or is there a direct connection between the VM and the
 volume after allocation? If so, is this connection at the hypervisor level,
 or at what level exactly?

 Thanks, really appreciate it if you can help me out there !

 Best regards
 Fellow Stacker




Re: [Openstack] 500 Internal Server error and [Errno 113] EHOSTUNREACH when adding a new node

2012-12-05 Thread Gui Maluf
Andrew, you're right. I just set up the swift + glance integration and
uploaded again the image! everything worked fine! :D


On Tue, Dec 4, 2012 at 1:55 PM, Gui Maluf guimal...@gmail.com wrote:

 I've changed the default_store to file in glance-api.conf.
 Even after changing this I'm getting the same error related to swift!



 On Tue, Dec 4, 2012 at 1:45 PM, Razique Mahroua razique.mahr...@gmail.com
  wrote:

 Oh my bad, I misread.
 Let us know how it's going with Andrew's tip :)

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 4 déc. 2012 à 16:22, Gui Maluf guimal...@gmail.com a écrit :




 On Tue, Dec 4, 2012 at 1:16 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:


 Razique Mahroua - Nuage & Co
  razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 4 déc. 2012 à 16:01, Gui Maluf guimal...@gmail.com a écrit :

 Hi, I have an Ubuntu 12.04 cloud controller + node running Essex in
 multi_node mode.
 So I'm trying to install a new node but without success.

 Node and controller have the same /etc/network/interface/;

 You mean they both have the same IP address ?

  Of course not. All machines have different IPs.


 Node is running nova-{api-metadata,compute,network,volume};
 nova.conf: http://paste.openstack.org/show/27390/
 The node has this line uncommented and --my_ip changed:
 --enabled_apis=ec2,osapi_compute,osapi_volume,metadata

 Here is the nova-compute.log with the complete error:
 http://paste.openstack.org/show/27387/

 I'm trying to install two nodes, and from both I can reach all services
 on cloud-controller.
 telnet cloud-controller 9292 works from both nodes. They all have the
 same nova.conf. I don't know what else could cause this error.
 I've checked many things and I can't find a solution.
 Thanks in advance!

 --
 *guilherme* \n
 \t *maluf*


 Is keystone running ?
 What $ keystone user-list shows ?

 The whole cloud-controller node works fine: keystone, VMs, network,
 volume. Only the step of adding nodes is not working! I'm trying to remember
 everything I'd done, and the only thing I can remember is that I've changed
 the hostname of the CC node.
 I don't know how this can affect the whole system, or where else to get more
 info about this issue!




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


 Regards,
 Razique




 --
 *guilherme* \n
 \t *maluf*





 --
 *guilherme* \n
 \t *maluf*




-- 
*guilherme* \n
\t *maluf*


[Openstack] Accessing Nova DB from the Compute Host

2012-12-05 Thread Trinath Somanchi
Hi-

Is there any way to access the nova database from the compute host without
using nova-client?

I tried using the /nova/db/api.py and /nova/db/sqlalchemy/api.py class
definitions for accessing the database, but failed to get the data.


I get this error for the sample function I have written:

 File /usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line
5263, in sampledata_by_host
filter(models.Instance.host == host_name).all()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
2115, in all
return list(self)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
2227, in __iter__
return self._execute_and_instances(context)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
2240, in _execute_and_instances
close_with_result=True)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line
2231, in _connection_from_session
**kw)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line
730, in connection
close_with_result=close_with_result)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line
736, in _connection_for_bind
return engine.contextual_connect(**kwargs)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line
2490, in contextual_connect
self.pool.connect(),
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 224, in
connect
return _ConnectionFairy(self).checkout()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 387, in
__init__
rec = self._connection_record = pool._do_get()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 802, in
_do_get
return self._create_connection()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 188, in
_create_connection
return _ConnectionRecord(self)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 270, in
__init__
self.connection = self.__connect()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/pool.py, line 330, in
__connect
connection = self.__pool._creator()
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py,
line 80, in connect
return dialect.connect(*cargs, **cparams)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py,
line 281, in connect
return self.dbapi.connect(*cargs, **cparams)
OperationalError: (OperationalError) unable to open database file None None

Can any one help when does this error occur and how to resolve the same.

Thanks in advance.

-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Accessing Nova DB from the Compute Host

2012-12-05 Thread Razique Mahroua
Hi Trinath,
just add the right credentials into your .bashrc or any file the system user can source:

export SERVICE_TOKEN=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://$keystone-IP:5000/v2.0/
export SERVICE_ENDPOINT=http://$keystone-IP:35357/v2.0/

and it would work.

Regards,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15

Le 5 déc. 2012 à 12:04, Trinath Somanchi trinath.soman...@gmail.com a écrit :

[snip: original message quoted above]



Re: [Openstack] Accessing Nova DB from the Compute Host

2012-12-05 Thread Trinath Somanchi
Hi-

I have added the correct credentials with respect to my setup.

But still the same error exists.

Kindly help me resolve the issue.

-
Trinath


On Wed, Dec 5, 2012 at 4:38 PM, Razique Mahroua
razique.mahr...@gmail.com wrote:

 [snip: Razique's reply and the original message quoted above]






-- 
Regards,
--
Trinath Somanchi,
+91 9866 235 130


Re: [Openstack] Distributed rate-limiting

2012-12-05 Thread Karajgi, Rohit
Hi,

Sorry to bring alive a fairly old thread, but I had a few questions on Nova's 
rate limiting in a distributed/ load balanced Openstack environment.

My understanding is Turnstile manages the situation where, the in-memory rate 
limits that are configured on load balanced API servers
are imposed properly on the incoming requests, so each API server is correctly 
updated/synced with the used rate limits.
Can you please confirm this understanding?

Also, I don't think this is part of the OpenStack trunk code; if so, is
there any reason why it's not part of Nova, as it was meant to be a replacement?

Regards,
Rohit

-Original Message-
From: openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net 
[mailto:openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net] On 
Behalf Of Kevin L. Mitchell
Sent: Saturday, March 17, 2012 3:15 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] Distributed rate-limiting

Howdy, folks.  I've been working on a replacement for nova's rate-limiting 
middleware that will handle the multiple-node case, and I've developed a fairly 
generic rate-limiting package, along with a second package that adapts it to 
nova.  (This means you could also use this rate-limiting setup with, say, 
glance, or with any other project that uses Python middleware.)  Here is some 
information:

* Turnstile
Turnstile is a piece of WSGI middleware that performs true distributed
rate-limiting.  System administrators can run an API on multiple
nodes, then place this middleware in the pipeline prior to the
application.  Turnstile uses a Redis database to track the rate at
which users are hitting the API, and can then apply configured rate
limits, even if each request was made against a different API node.

- https://github.com/klmitch/turnstile
- http://pypi.python.org/pypi/turnstile

* nova_limits
This package provides the ``nova_limits`` Python module, which
contains the ``nova_preprocess()`` preprocessor, the
``NovaClassLimit`` limit class, and the ``NovaTurnstileMiddleware``
replacement middleware class, all for use with Turnstile.  These
pieces work together to provide class-based rate limiting integration
with nova.

- https://github.com/klmitch/nova_limits
- http://pypi.python.org/pypi/nova_limits

Both packages should be fairly well documented (start with README.rst), and 
please feel free to log issues or make pull requests.
--
Kevin L. Mitchell kevin.mitch...@rackspace.com



__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding



Re: [Openstack] [Openstack :: Folsom] Quantum Network Node setup

2012-12-05 Thread balaji patnala
Hi,

I have an Essex setup [where we don't have Quantum networking], and now we
want a Folsom setup in which we don't use the L3 agent and the Quantum
router for DNAT of floating IPs.

How can we achieve this with Folsom?

I also want to understand the real-world deployment of tenant VMs for
public access using public IPs; I hope it can be achieved in a Folsom setup
as well. Can anyone give me inputs on this?

Just curious to understand the Quantum router use case: apart from NAT and
iptables rules, are there any other advantages in real deployments?

Please share your understanding and experience.

Thanks in advance.

regards,
balaji

On Mon, Nov 12, 2012 at 6:17 AM, gong yong sheng gong...@linux.vnet.ibm.com
 wrote:

  On 11/10/2012 10:06 PM, balaji patnala wrote:

  Hi Yong,

 I downloaded the Quantum Architecture in Folsom Powerpoint prepared by you
 and found that in slide-10:

  - L3-agent
  - To implement floating IPs and other L3 features, such as NAT
  - One per network


 Can you elaborate on the comment 'one per network' for L3-Agent.


 # If use_namespaces is set as False then the agent can only configure one
 router.
 # This is done by setting the specific router_id.
 # router_id =

 # Each L3 agent can be associated with at most one external network.  This
 # value should be set to the UUID of that external network.  If empty,
 # the agent will enforce that only a single external networks exists and
 # use that external network id
 # gateway_external_network_id =

 two options:
 1. set use_namespaces = False and set router_id to a specific router; this
 can support multiple L3 agents,
 or
 2. create multiple external networks, and set gateway_external_network_id,
 to run multiple L3 agents.
 This way, we must set the router's gateway port:
 we can create router with external_gateway_info:
 such as quantum router-create router1 --external_gateway_info
 network_id=id
 or quantum router-create router2
 quantum router-gateway-set
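 The two options map onto l3_agent.ini roughly as follows (a sketch; the
 UUID values are placeholders):

```
# Option 1: no namespaces, one router per agent
use_namespaces = False
router_id = <UUID of the router this agent manages>

# Option 2: namespaces on, one external network per agent
use_namespaces = True
gateway_external_network_id = <UUID of this agent's external network>
```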


  As I understood it, the L3 agent will be the only one for the complete
 setup. If we have more than one network node then we must install the
 dhcp-agent and L3 agent on each of these network nodes.

 So, the comment 'one per network' means we can have one router/gateway per
 tenant network.

 Can you give us your comments on this.


 you can reach this target by creating a router or an external network per
 tenant.

  bye,
 balaji



 On Wed, Oct 31, 2012 at 10:38 AM, balaji patnala patnala...@gmail.comwrote:

 Hi Yong,

 Thanks for information.
 I think you mean that the Quantum network node is not per tenant,
 and that it can serve all the tenants of the DC setup.

 Just want to understand what advantages we expect by doing so.

 Regards,
 Balaji
 On Tue, Oct 30, 2012 at 2:26 PM, gong yong sheng 
 gong...@linux.vnet.ibm.com wrote:

  Hi,
 In fact, we can split the Quantum network node into two categories:
 one is for DHCP, which installs the OVS agent and the DHCP agent; we can
 have one node of this kind.
 one is for the L3 agent; we can handle one external network per L3 agent,
 and we can have many nodes of this kind.

 Regards,

 Yong Sheng Gong

 On 10/30/2012 02:27 PM, balaji patnala wrote:

 Hi Salvatore,

 Just want to understand more on Network Node in the below given app_demo
 page.

 As I see in the setup, it looks like there will be one Quantum network
 node for one data centre setup. Please correct me if my assumptions are
 wrong.

 This Quantum Network Node will have all the virtual routers, gateway
 which can be created with quantum-l3-agent plugin.

 Also my assumption is that this quantum Network Node will serve all the
 Tenant virtual gateways and routers created using quantum.

 Please give us some more information on this to understand the setup.

 Also do we have any specific reason for having quantum Network Node
 instead of keeping these plugin on the Controller Node similar to earlier
 release like Essex.

 Thanks in advance.

 Regards,
 Balaji

 On Fri, Oct 26, 2012 at 3:31 PM, Salvatore Orlando 
 sorla...@nicira.comwrote:

 Hi Trinath,

 Even if is perfectly reasonable to run the DHCP/L3 agents in the
 controller node, the advice we give in the administration guide is slightly
 different.
 As suggested in [1], the only Quantum component running on the
 controller node should be the API server.
 The DHCP and L3 agents might run in a dedicated network node. Please
 note you will need also the L2 agent running on that node.

 Regards,
 Salvatore

 [1]
 http://docs.openstack.org/trunk/openstack-network/admin/content/app_demo.html

  On 26 October 2012 10:50, Trinath Somanchi trinath.soman...@gmail.com
  wrote:

  Hi Stackers-

 I have found many installation and configuration manuals for Openstack
 Folsom which state the installation and configuration of 
 Quantum-DHCP-Agent
 in the Controller machine.

 But I have a doubt here:

 Can't we have the Quantum DHCP/L3 agent running on the compute
 node rather than on the controller?

 How does the Installation and 

[Openstack] Instance VNC Console - Failed to connect to server (code: 1006)

2012-12-05 Thread Alex Vitola
I set up an environment with 1 Cloud Controller and 2 Cloud Compute nodes.

When I try to access the machine through the Dashboard it shows me the
following message:

 Failed to connect to server (code: 1006)

If I access the Cloud Compute node directly via VNC I can access the
console, but not through the panel.


Weirder still, if I leave tcpdump running on both servers, nothing
hits port 5900 on either of the two servers.

PS: I'm using the default settings in nova.conf


# # Novnc
novnc_enable = true
novncproxy_base_url = http://PUBLIC_IP_MANAGEMENT_NETWORK:6080/vnc_auto.html
vncserver_proxyclient_address = 127.0.0.1
vncserver_listen = 0.0.0.0
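One thing worth double-checking (an assumption, since the thread doesn't confirm it): with vncserver_proxyclient_address = 127.0.0.1 on a separate compute node, the noVNC proxy on the controller will try to reach the VNC server on itself, which would match both the 1006 error and the silence on port 5900. A sketch of the compute node's nova.conf:

```
# /etc/nova/nova.conf on each compute node (illustrative values)
novnc_enable = true
# URL the browser loads; must point at the host running nova-novncproxy
novncproxy_base_url = http://CONTROLLER_PUBLIC_IP:6080/vnc_auto.html
# Address the proxy uses to reach *this* compute node -- not 127.0.0.1
vncserver_proxyclient_address = COMPUTE_NODE_MANAGEMENT_IP
vncserver_listen = 0.0.0.0
```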



[Openstack] swift metrics graphite dashboard

2012-12-05 Thread Dieter Plaetinck
Hi,
I'm working on a Graphite dashboard, mainly driven by the need to make sense
of the multitude of metrics across many swift servers.
home page: https://github.com/Dieterbe/graph-explorer
exploring some swift graphs: https://vimeo.com/54912886

Dieter



Re: [Openstack] Distributed rate-limiting

2012-12-05 Thread Kevin L. Mitchell
On Wed, 2012-12-05 at 14:12 +, Karajgi, Rohit wrote:
 My understanding is Turnstile manages the situation where, the
 in-memory rate limits that are configured on load balanced API servers
 are imposed properly on the incoming requests, so each API server is
 correctly updated/synced with the used rate limits.
 Can you please confirm this understanding?

Yes.  Turnstile uses Redis to coordinate rate limit configuration and
bucket data, in order to provide rate limiting.
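As a rough sketch of the idea (not Turnstile's actual code; the class and parameter names below are invented for illustration), keeping the buckets in one shared store is what makes the limit hold across load-balanced nodes. A plain dict stands in for Redis here so the example is self-contained:

```python
import time

class SharedStoreRateLimiter:
    """Fixed-window rate limiter whose state lives in a shared store.

    In a real deployment the store would be Redis, so that every
    load-balanced API node reads and updates the same counters; a
    local dict is used here only to keep the sketch runnable.
    """

    def __init__(self, store, limit, window_seconds):
        self.store = store      # shared mapping: key -> (window_start, count)
        self.limit = limit      # max requests allowed per window
        self.window = window_seconds

    def allow(self, user_id, now=None):
        """Record one request for user_id; return False if over the limit."""
        now = time.time() if now is None else now
        window_start, count = self.store.get(user_id, (now, 0))
        if now - window_start >= self.window:
            # The window expired: start a fresh one.
            window_start, count = now, 0
        if count >= self.limit:
            return False        # over the limit: the middleware would reject
        self.store[user_id] = (window_start, count + 1)
        return True
```

Two limiter instances sharing one store behave like two API nodes enforcing a single combined limit, which is exactly the multi-node case the in-memory middleware gets wrong.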

 Also, I don't think this is part of the Openstack trunk code, and if
 so, is there any reason why it's not part of Nova, as it was meant to
 be a replacement?

I wrote Turnstile to be general; it can be used for Nova, Keystone, or
any other system for which rate limiting is desired.  (I in fact
designed it with a goal of being able to use it for some personal
projects which are not OpenStack-related.)  This is the primary reason
it's not a direct part of any OpenStack repository.  That said, it is
hosted on GitHub and I welcome pull requests…and I'm not at all averse
to the suggestion that it become an OpenStack project; I'm just not
convinced that that would be generally desired, or that it would be
generally beneficial…
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




Re: [Openstack] Accessing Nova DB from the Compute Host

2012-12-05 Thread Mohammed Naser
Hi there!

You need to make sure that you load the nova settings: if you don't load
them, SQLAlchemy will try to use the default database path (which is what it
seems to be doing). You can take a look at this script, which interacts with
the database:

https://github.com/openstack/nova/blob/master/tools/xenserver/vm_vdi_cleaner.py

It'd be helpful to paste the script as well to help us debug it.
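The "unable to open database file" message is SQLite's way of saying the (default) database path is unusable. The snippet below reproduces it with the stdlib sqlite3 module, just to show what nova's unconfigured fallback looks like; the paths are made up:

```python
import sqlite3

# With an explicit, valid path (or ":memory:") the connection works,
# which is what loading nova's flags gives you via sql_connection:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (host TEXT)")
conn.close()

# Connecting through a path whose directory does not exist reproduces
# the exact error from the traceback above:
try:
    sqlite3.connect("/no/such/dir/nova.sqlite").execute("SELECT 1")
except sqlite3.OperationalError as exc:
    print(exc)  # unable to open database file
```

So the fix is not in the query code at all: point sql_connection at the real database before touching the DB API.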

Regards,
Mohammed Naser


On Wed, Dec 5, 2012 at 6:15 AM, Trinath Somanchi trinath.soman...@gmail.com
 wrote:

 Hi-

 I have added the correct credentials with respect to my setup.

 But still the same error exists.

 Kindly help me resolve the issue.

 -
 Trinath


 On Wed, Dec 5, 2012 at 4:38 PM, Razique Mahroua razique.mahr...@gmail.com
  wrote:

 HI Trinath,
 just add the right credentials into your .bashrc or any file the system
 user can source :

 export SERVICE_TOKEN=admin

 export OS_TENANT_NAME=admin
 export OS_USERNAME=admin
 export OS_PASSWORD=openstack
 export OS_AUTH_URL=http://$keystone-IP:5000/v2.0/
 export SERVICE_ENDPOINT=http://$keystone-IP:35357/v2.0/

 and it would work

 Regards,
  *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 5 déc. 2012 à 12:04, Trinath Somanchi trinath.soman...@gmail.com a
 écrit :

 Hi-

 Is there any way with out using the nova-client from the compute host, to
 access the nova database?

 I tried, using the /nova/db/api.py and /nova/db/sqlalchemy/api.py class
 definitions for accessing the database but failed to get the data.


 I get this error for the sample def. i have written.

  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 5263, in sampledata_by_host
    filter(models.Instance.host == host_name).all()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2115, in all
    return list(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2227, in __iter__
    return self._execute_and_instances(context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2240, in _execute_and_instances
    close_with_result=True)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2231, in _connection_from_session
    **kw)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 730, in connection
    close_with_result=close_with_result)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 736, in _connection_for_bind
    return engine.contextual_connect(**kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2490, in contextual_connect
    self.pool.connect(),
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 224, in connect
    return _ConnectionFairy(self).checkout()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 387, in __init__
    rec = self._connection_record = pool._do_get()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 802, in _do_get
    return self._create_connection()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 188, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 270, in __init__
    self.connection = self.__connect()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 330, in __connect
    connection = self.__pool._creator()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 281, in connect
    return self.dbapi.connect(*cargs, **cparams)
OperationalError: (OperationalError) unable to open database file None None

 Can anyone help with when this error occurs and how to resolve it?

 Thanks in advance.

 --
 Regards,
 --
 Trinath Somanchi,
 +91 9866 235 130

  ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp











-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com


[Openstack] Essex volume attach issue on Debian Wheezy

2012-12-05 Thread Alberto Molina Coballes
Hi all,

We're facing an issue attaching a volume to a running instance in an Essex
deployment on Debian Wheezy.

nova-volume is installed on the cloud controller, but nova-network is installed
on the compute nodes in a multi_host setup.

The relevant configuration parameters in nova.conf are (nexentastor-ce is used
for volume storage):

volume_driver=nova.volume.nexenta.volume.NexentaDriver
use_local_volumes=false
nexenta_host=172.22.222.2
nexenta_volume=nova
nexenta_user=admin
nexenta_password=

Volumes can be created properly:

$ nova volume-create --display_name demovol1 1
$ nova volume-list
++---+--+--+-+-+
| ID |   Status  | Display Name | Size | Volume Type | Attached to |
++---+--+--+-+-+
| 1  | available | demovol1 | 1| None| |
++---+--+--+-+-+

But attaching the volume to an instance fails with no error:

$ nova volume-attach 63abfd8a-...-...-... 1 /dev/vdc

and the volume still remains available.

It seems that the problem is related to these logs found in the compute node 
(nova-compute.log):

TRACE nova.rpc.amqp ProcessExecutionError: Unexpected error while running 
command.
TRACE nova.rpc.amqp Command: sudo nova-rootwrap iscsiadm -m node -T 
iqn.1986-03.com.sun:02:nova-volume-001 -p 172.22.222.2:3260
TRACE nova.rpc.amqp Exit code: 1
TRACE nova.rpc.amqp Stdout: ''
TRACE nova.rpc.amqp Stderr: 'Traceback (most recent call last):\n  File 
"/usr/bin/nova-rootwrap", line 69, in <module>\n    
env=filtermatch.get_environment(userargs))\n  File 
"/usr/lib/python2.7/subprocess.py", line 679, in __init__\n    errread, 
errwrite)\n  File "/usr/lib/python2.7/subprocess.py", line 1249, in 
_execute_child\n    raise child_exception\nOSError: [Errno 2] No such file or 
directory\n'

Trying to execute this command from the command line (as nova user):

nova@calisto:~$ sudo nova-rootwrap iscsiadm -m node -T 
iqn.1986-03.com.sun:02:nova-volume-001 -p 172.22.222.2:3260
Traceback (most recent call last):
  File "/usr/bin/nova-rootwrap", line 69, in <module>
    env=filtermatch.get_environment(userargs))
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Whereas executing the same command as root without sudo nova-rootwrap seems to
work ok:

root@calisto:~# iscsiadm -m node -T iqn.1986-03.com.sun:02:nova-volume-001 
-p 172.22.222.2:3260
# BEGIN RECORD 2.0-873
node.name = iqn.1986-03.com.sun:02:nova-volume-001
node.tpgt = 1
node.startup = manual
...

Any tips on this?

Cheers!

Alberto



[Openstack] nova.virt.libvirt.imagecache is removing good base file

2012-12-05 Thread Davide Guerri
Hi all,
I have a bad problem with nova.virt.libvirt.imagecache: it keeps removing good 
(not stale) base images even when they are still in use by running VMs.

I have a multi-node installation with shared storage (as described here: 
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-migrations.html)

Here is a log excerpt:

---
nova-compute.log:2012-12-05 16:21:29 INFO nova.virt.libvirt.imagecache [-] 
Removable base files: 
/var/lib/nova/instances/_base/cf38dbb4e4468fe68f8486ed6ade984f766086cc 
/var/lib/nova/instances/_base/cf38dbb4e4468fe68f8486ed6ade984f766086cc_80 
/var/lib/nova/instances/_base/7fab5ccd237dbb7428e9a47e26eb278e9b66e357 
/var/lib/nova/instances/_base/7fab5ccd237dbb7428e9a47e26eb278e9b66e357_40 
/var/lib/nova/instances/_base/7fab5ccd237dbb7428e9a47e26eb278e9b66e357_80 
/var/lib/nova/instances/_base/eed4493e949ae9303c0342c50860f1ba368e8177 
/var/lib/nova/instances/_base/eed4493e949ae9303c0342c50860f1ba368e8177_80 
/var/lib/nova/instances/_base/34c234536ae3df4ea641258c57c35872c10cd0d2
nova-compute.log:2012-12-05 16:21:29 INFO nova.virt.libvirt.imagecache [-] 
Removing base file: 
/var/lib/nova/instances/_base/cf38dbb4e4468fe68f8486ed6ade984f766086cc_80
nova-compute.log:2012-12-05 17:11:34 INFO nova.virt.libvirt.imagecache [-] 
Active base files: 
/var/lib/nova/instances/_base/7fab5ccd237dbb7428e9a47e26eb278e9b66e357_40 
/var/lib/nova/instances/_base/eed4493e949ae9303c0342c50860f1ba368e8177_80 
/var/lib/nova/instances/_base/cf38dbb4e4468fe68f8486ed6ade984f766086cc_80
---


As far as I understand, the backing file 
eed4493e949ae9303c0342c50860f1ba368e8177_80 is removed even though it is 
considered Active.

Actually the node that is removing base files is not the node where instances 
related to those backing files run.

Is it correct to share the whole /var/lib/nova/instances/ directory, or should 
the _base subdirectory reside locally on each node?

Thanks in advance for any help you can provide.

Davide.


Re: [Openstack] [openstack] config_drive Image UUID doesn't create disk.config

2012-12-05 Thread Vishvananda Ishaya

On Dec 4, 2012, at 3:48 AM, Jian Hua Geng gen...@cn.ibm.com wrote:

 Vish,
 
 Many thanks for your comments, but as you know, to support Windows sysprep images 
 we need to save the unattend.xml on the CDROM or C:\ device. So, we want to 
 extend the config drive to attach a CDROM device when launching a VM.
 
 Anyway, I think attaching a CDROM when launching a new VM is a common requirement, 
 right?
 

Sounds like we need some modifications to allow for an attached cd-rom to be 
specified in block_device_mapping.

Vish



Re: [Openstack] resizing instance fails

2012-12-05 Thread Vishvananda Ishaya

On Dec 4, 2012, at 1:15 AM, Marco CONSONNI mcocm...@gmail.com wrote:

 Not sure, but it seems like this feature is available for XenServer, only 
 http://osdir.com/ml/openstack-cloud-computing/2011-10/msg00473.html
 
 Does anybody know more?

Resize should work for kvm as well, but you will need hostnames to resolve 
properly and passwordless ssh access between your compute hosts.

Vish



Re: [Openstack] Understanding flavors of VM

2012-12-05 Thread Vishvananda Ishaya

On Dec 4, 2012, at 9:35 AM, Ahmed Al-Mehdi ahmedalme...@gmail.com wrote:

 Hi Marco,
 
 This is really good stuff, thank you very much for helping out.  I am 
 creating some instances to test out how/where the different storage related 
 elements are created.
 
 I created two VM instance:
 
 Instance 1 : 20GB boot disk
 Instance 2 : 10GB boot disk, 2 GB Ephemeral disk.
 
 root@bodega:/var/lib/nova# ls -lh -R instances
 instances:
 total 12K
 drwxrwxr-x 2 nova nova 4.0K Dec  4 09:01 _base
 drwxrwxr-x 2 nova nova 4.0K Nov 28 11:44 instance-0001
 drwxrwxr-x 2 nova nova 4.0K Dec  4 09:01 instance-0002
 
 instances/_base:
 total 240M
 -rw-r--r-- 1 nova nova  40M Dec  4 08:51 8af61c9e86557f7244c6e5a2c45e1177c336bd1f
 -rw-r--r-- 1 libvirt-qemu kvm   10G Dec  4 09:01 8af61c9e86557f7244c6e5a2c45e1177c336bd1f_10
 -rw-r--r-- 1 nova kvm   20G Dec  4 08:51 8af61c9e86557f7244c6e5a2c45e1177c336bd1f_20
 -rw-rw-r-- 1 nova nova 9.4M Nov 28 11:44 8af61c9e86557f7244c6e5a2c45e1177c336bd1f.part
 -rw-r--r-- 1 nova nova 2.0G Dec  4 09:01 ephemeral_0_2_None  ==
 -rw-r--r-- 1 libvirt-qemu kvm  2.0G Dec  4 09:01 ephemeral_0_2_None_2  =

There isn't really a need for two copies here. This is a bug I will get someone 
to investigate.

 
 instances/instance-0001:
 total 1.9M
 -rw-rw 1 nova kvm   26K Nov 28 11:45 console.log
 -rw-r--r-- 1 libvirt-qemu kvm  1.9M Dec  4 07:01 disk
 -rw-rw-r-- 1 nova nova 1.4K Nov 28 11:44 libvirt.xml
 
 instances/instance-0002:
 total 1.8M
 -rw-rw 1 libvirt-qemu kvm   27K Dec  4 09:02 console.log
 -rw-r--r-- 1 libvirt-qemu kvm  1.6M Dec  4 09:03 disk
 -rw-r--r-- 1 libvirt-qemu kvm  193K Dec  4 09:01 disk.local

The disk.local is the ephemeral disk, using ephemeral_0_2_None_2 as a backing 
file.
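
One way to verify such a backing relationship is `qemu-img info`, which prints a `backing file:` line for qcow2 overlays. Below is a small helper that extracts that line; the sample output is illustrative (paths assumed), not captured from this system:

```python
def backing_file(qemu_img_info_output):
    """Return the backing file path from `qemu-img info` output, or None."""
    for line in qemu_img_info_output.splitlines():
        if line.startswith("backing file:"):
            # qemu-img may append "(actual path: ...)"; keep only the path itself
            return line.split(":", 1)[1].strip().split(" ")[0]
    return None

# Illustrative `qemu-img info disk.local` output (paths assumed):
sample = """image: disk.local
file format: qcow2
virtual size: 2.0G (2147483648 bytes)
disk size: 193K
backing file: /var/lib/nova/instances/_base/ephemeral_0_2_None_2"""

print(backing_file(sample))
```

Running `qemu-img info` against each `disk`/`disk.local` on a compute node and feeding the output through this helper shows which _base file each instance disk depends on.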

Vish


Re: [Openstack] A confuse about the FlatDHCP network

2012-12-05 Thread Vishvananda Ishaya

On Dec 5, 2012, at 1:53 AM, Lei Zhang zhang.lei@gmail.com wrote:

 Hi all,
 
 I am reading the 
 http://docs.openstack.org/trunk/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html,
  I got the following deploy architecture. But there are several that I am 
 confused.
 
 How and why does the 192.168.0.0/24 IP range exist? Is it necessary or not? Does eth1 
 on each physical machine own two IPs (10.0.0.0/24 and 192.168.0.0/24)? Is 
 that possible? On the nova-compute node, eth1 should be bridged by br100, and 
 eth1 should not own any IP address, right?
The addresses will be moved on to the bridge. The point of having an ip address 
is so that things like rabbit and mysql can communicate over a different set of 
addresses than the guest network. Usually this would be done on a separate eth 
device (eth2) or vlan, but I was trying to keep

 Put another way, should the nova-network eth0 be connected to the public internet 
 switch so all VMs can access the internet, while the nova-compute eth0 
 is bound to an internal switch for admin access? Is that right?
Ideally there are three eth devices / vlans: a) public (for the 99.x addresses in 
the diagram), b) management (for the 192.x addresses in the diagram), c) guest (for 
the 10.x addresses in the diagram)

 
 -- 
 Lei Zhang
 
 Blog: http://jeffrey4l.github.com
 twitter/weibo: @jeffrey4l
 


Re: [Openstack] nova.virt.libvirt.imagecache is removing good base file

2012-12-05 Thread Vishvananda Ishaya
This is a known issue in folsom and stable/folsom. You should turn off the 
image cache if you are using shared storage.

https://bugs.launchpad.net/nova/+bug/1078594

See the upgrade notes here to see how to disable the imagecache run:

http://wiki.openstack.org/ReleaseNotes/Folsom#OpenStack_Compute_.28Nova.29

Note that the current version of stable/folsom (and 2012.2.1) turn off 
imagecache by default.

Vish




Re: [Openstack] Essex volume attach issue on Debian Wheezy

2012-12-05 Thread Vishvananda Ishaya
Probably wheezy puts iscsiadm somewhere that rootwrap can't find it.

iscsiadm: CommandFilter, /sbin/iscsiadm, root
iscsiadm_usr: CommandFilter, /usr/bin/iscsiadm, root

You should do a:

which iscsiadm

If it doesn't match the above you need to add a new filter to 
/etc/nova/rootwrap.d/volume.filters
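
To illustrate why a present-but-mislocated binary can still fail, here is a much-simplified sketch of how a rootwrap CommandFilter behaves (illustrative only, not the real nova-rootwrap code): matching is done on the command name, but execution uses the configured absolute path, so a filter whose path does not exist on disk can match and then die with the OSError: [Errno 2] seen above.

```python
import os

class CommandFilter:
    """Simplified stand-in for nova-rootwrap's CommandFilter (illustrative only)."""

    def __init__(self, exec_path, run_as):
        self.exec_path = exec_path
        self.run_as = run_as

    def match(self, userargs):
        # Matching is by command name only ...
        return bool(userargs) and userargs[0] == os.path.basename(self.exec_path)

    def get_command(self, userargs):
        # ... but execution uses the configured absolute path; if that path
        # is absent on disk, Popen fails with OSError: [Errno 2].
        return [self.exec_path] + list(userargs[1:])

filters = [CommandFilter("/sbin/iscsiadm", "root"),
           CommandFilter("/usr/bin/iscsiadm", "root")]
args = ["iscsiadm", "-m", "node"]
matching = [f for f in filters if f.match(args)]
print(matching[0].get_command(args))
```

With both filter lines present, whichever path actually exists on the distribution will be usable; with only the /sbin line, a Debian layout that ships /usr/bin/iscsiadm fails exactly as in the traceback.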

Vish




Re: [Openstack] Essex volume attach issue on Debian Wheezy

2012-12-05 Thread Alberto Molina Coballes
2012/12/5 Vishvananda Ishaya vishvana...@gmail.com:
 Probably wheezy puts iscsiadm somewhere that rootwrap can't find it.

 iscsiadm: CommandFilter, /sbin/iscsiadm, root
 iscsiadm_usr: CommandFilter, /usr/bin/iscsiadm, root

 You should do a:

 which iscsiadm


Thanks for the quick response, but it seems that the iscsiadm location is correct:

nova@calisto:~$ which iscsiadm
/usr/bin/iscsiadm

Alberto



[Openstack] Fwd: [swift3] api - boto and libcloud = AccessDenied

2012-12-05 Thread Antonio Messina
Hi all,

I'm trying to access SWIFT using the S3 API compatibility layer, but I
always get an AccessDenied.

I'm running folsom on ubuntu precise 12.04 LTS, packages are from
ubuntu-cloud.archive.canonical.com repository. Swift is correctly
configured, login and password have been tested with the web interface and
from command line. Glance uses it to store the images.

I've installed swift-plugin-s3 and I've configured proxy-server.conf as
follow:

pipeline = catch_errors healthcheck cache ratelimit authtoken keystoneauth
swift3  proxy-logging proxy-server
[filter:swift3]
use = egg:swift3#swift3
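
For what it's worth, Folsom-era write-ups of S3-on-Swift with Keystone usually chain an `s3token` filter between swift3 and authtoken, so that the AWS-style access key and signature can be exchanged for a Keystone token. A hypothetical proxy-server.conf sketch follows; the filter factory path and auth endpoint are assumptions to verify against your installation:

```ini
[pipeline:main]
pipeline = catch_errors healthcheck cache ratelimit swift3 s3token authtoken keystoneauth proxy-logging proxy-server

[filter:swift3]
use = egg:swift3#swift3

[filter:s3token]
# Folsom-era keystone middleware path; verify against your packages
paste.filter_factory = keystone.middleware.s3_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
```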

I've then tried to connect using my keystone login and password (and I've
also tried with the EC2 tokens, with the same result).

The code I'm using is:

from libcloud.storage.types import Provider as StorageProvider
from libcloud.storage.providers import get_driver as get_storage_driver

s3driver = get_storage_driver(StorageProvider.S3)
s3 = s3driver(ec2access, ec2secret, secure=False, host=s3host, port=8080)
s3.list_containers()

What I get is:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py", line 176, in list_containers
    response = self.connection.request('/')
  File "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py", line 605, in request
    connection=self)
  File "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py", line 93, in __init__
    raise Exception(self.parse_error())
  File "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py", line 68, in parse_error
    raise InvalidCredsError(self.body)
libcloud.common.types.InvalidCredsError: '<?xml version="1.0" encoding="UTF-8"?>\r\n<Error>\r\n  <Code>AccessDenied</Code>\r\n  <Message>Access denied</Message>\r\n</Error>'


Using boto instead:

>>> import boto
>>> s3conn = boto.s3.connection.S3Connection(aws_access_key_id=ec2access,
...     aws_secret_access_key=ec2secret, port=s3port, host=s3host,
...     is_secure=False, debug=3)
>>> s3conn.get_all_buckets()
send: 'GET / HTTP/1.1\r\nHost: cloud-storage1:8080\r\nAccept-Encoding:
identity\r\nDate: Wed, 05 Dec 2012 19:25:00 GMT\r\nContent-Length:
0\r\nAuthorization: AWS
7c67d5b35b5a4127887c5da319c70a18:WXVx9AONXvIkDiIdg8rUnfncFnM=\r\nUser-Agent:
Boto/2.6.0 (linux2)\r\n\r\n'
reply: 'HTTP/1.1 403 Forbidden\r\n'
header: Content-Type: text/xml; charset=UTF-8
header: Content-Length: 124
header: X-Trans-Id: tx7a823c742f624f2682bfddb19f31bcc2
header: Date: Wed, 05 Dec 2012 19:24:42 GMT
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/boto/s3/connection.py", line 364, in get_all_buckets
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access denied</Message>
</Error>

Login and password work when using the command line tool `swift`.

I think I may be missing something very basic here, but I couldn't find
much documentation...

Thanks in advance

.a.

-- 
antonio.s.mess...@gmail.com
arcimbo...@gmail.com
GC3: Grid Computing Competence Center
http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland


Re: [Openstack] Essex volume attach issue on Debian Wheezy

2012-12-05 Thread Vishvananda Ishaya

On Dec 5, 2012, at 11:33 AM, Alberto Molina Coballes alb.mol...@gmail.com 
wrote:

 2012/12/5 Vishvananda Ishaya vishvana...@gmail.com:
 Probably wheezy puts iscsiadm somewhere that rootwrap can't find it.
 
 iscsiadm: CommandFilter, /sbin/iscsiadm, root
 iscsiadm_usr: CommandFilter, /usr/bin/iscsiadm, root
 
 You should do a:
 
 which iscsiadm
 
 
 Thanks for the quick response but it seems that iscsiadm location is correct:
 
 nova@calisto:~$ which iscsiadm
 /usr/bin/iscsiadm
 

and /etc/nova/rootwrap.d/volume.filters contains the line:

 iscsiadm_usr: CommandFilter, /usr/bin/iscsiadm, root

?

Vish


Re: [Openstack] resizing instance fails

2012-12-05 Thread Clint Walsh
Hi,


Resize should work for kvm as well, but you will need hostnames to resolve
 properly and passwordless ssh access between your compute hosts.


Does 'hostnames' mean those of the VMs, the compute nodes, or both?

Also, why do compute hosts need direct access to other compute hosts?

Resize would be very useful for our tenants.

---
Clint Walsh
NeCTAR Research Cloud Support








[Openstack] Bug Squash Days - 12/6 for docs, 12/13 for code

2012-12-05 Thread Anne Gentle
Tomorrow we are going to have a doc bug squash day. The code bug squash day
will be 12/13. I've started a wiki page to track in-person events and to
describe how the bug squash day works. Feel free to edit as you see fit:
http://wiki.openstack.org/BugDays/20121213BugSquashing

On a bug squash day, we give top priority and focus to triaging, fixing,
and closing bugs. For tomorrow's doc bugs, look at these two projects:

https://bugs.launchpad.net/openstack-manuals
https://bugs.launchpad.net/openstack-api-site

Pick a bug, assign yourself, and start debugging and fixing! Also keep an
eye on the review queues to keep the bug fixes moving through the system.

If you are new to the projects, check out the
http://wiki.openstack.org/DevQuickStart page to get started.

If you have questions, pop into #openstack any day, or #openstack-bugsquash
on the 6th or lucky 13th.

Ready to squash,
Anne


Re: [Openstack] resizing instance fails

2012-12-05 Thread Vishvananda Ishaya
On Dec 5, 2012, at 1:14 PM, Clint Walsh clinton.wa...@unimelb.edu.au wrote:

 Hi,
 
 
 Resize should work for kvm as well, but you will need hostnames to resolve 
 properly and passwordless ssh access between your  compute hosts.
 
 Does 'hostnames'  mean that of the VM or the compute nodes or both?

compute nodes

 
 Also what is the reason for compute host access direct to other compute hosts?

Direct access is to copy the vm file across. This could be modified to store 
the file in a common location (like glance) but there are some issues related 
to raw disks that need to be solved.  There is a bp about this:

https://blueprints.launchpad.net/nova/+spec/resize-no-raw

 
 Resize would be very useful for our tenants.
 
 ---
 Clint Walsh
 NeCTAR Research Cloud Support
 
 
 



Re: [Openstack] resizing instance fails

2012-12-05 Thread Clint Walsh
Vish,

thanks for the clarification re hostnames.

NeCTAR uses shared storage across compute nodes for VM image storage, and our
compute nodes' hostnames resolve.

Is there a way around passwordless access between compute nodes for the
above config? The VM file doesn't need to be moved;
it's already on all compute nodes within a cell.

---
Clint Walsh
NeCTAR Research Cloud Support






Re: [Openstack] resizing instance fails

2012-12-05 Thread Vishvananda Ishaya

On Dec 5, 2012, at 2:27 PM, Clint Walsh clinton.wa...@unimelb.edu.au wrote:

 Vish,
 
 thanks for the clarification re hostnames.
 
 NeCTAR uses shared storage across compute nodes for VM images storage and our 
 compute nodes hostnames resolve 
 
 Is there a way around passwordless access between compute nodes for the above 
 config as the VM file doesnt need to be moved
 its already on all compute nodes within a cell.

If you are using shared storage then you probably should use live-migrate 
instead of resize, but unfortunately this requires the libvirt daemons on the nodes 
to talk to each other, so you need either passwordless access or some kind of TLS keys.

If you don't mind rebooting then you might be able to use something along the 
lines of the evacuate code that is under review now:

https://review.openstack.org/#/c/11086/

This allows vms to be restarted on another node even if the original host is 
down.

Vish




Re: [Openstack] Understanding flavors of VM

2012-12-05 Thread Michael Still
On 12/05/2012 06:59 PM, Marco CONSONNI wrote:

 To be honest it seems like I missed something because, from your
 investigation, the storage is kept under _base. Strange. I didn't know that.

Hi! The following description is libvirt specific. Bearing in mind that
this code is a moving target and has been re-written at least three
times in the last year [1], it works a bit like this...

- you request an instance using a given image
- that image is fetched to _base from glance (if it's not already there)
- that image is format converted if required, and then resized to the
requested size
- the instance disk image is a copy-on-write layer on top of that
resized image in _base and is stored in the instance's directory
- the instance is booted using the COW layer

If another instance with the same image / disk size starts, it can short
circuit the process and use the already existing image, which is cool.

When an instance is terminated, the instance directory is removed, but
files in _base remain.

If you have image cache management turned on, then the files in _base
are periodically cleaned up. The files in _base are also checksummed to
try and detect file corruption, although that hasn't been the most loved
feature ever implemented.
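
As a side note, the file names in the _base listings follow a simple convention: each cached image is named after a hash of its glance image ID, with a size suffix for resized copies. A sketch of that naming (assuming SHA-1, which matches the 40-hex-character names in the listings; the UUID below is made up):

```python
import hashlib

def base_filename(image_id, size_gb=None):
    # Cached base images are named after a hash of the glance image UUID;
    # copies resized for a flavor get a "_<size>" suffix, e.g. <sha1>_10
    # for a 10 GB root disk (cf. the _10/_20 files in the earlier listings).
    name = hashlib.sha1(image_id.encode("utf-8")).hexdigest()
    if size_gb is not None:
        name += "_%d" % size_gb
    return name

# Made-up image UUID, purely for illustration:
print(base_filename("12345678-aaaa-bbbb-cccc-1234567890ab", 10))
```

This is why two flavors booted from the same image produce distinct `_10` and `_20` files while sharing the same 40-character prefix.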

Hope this helps,
Michael

1: Several times I have gone to write a blog post about how this works,
and then realized the code has changed again.



Re: [Openstack] [Quantum] questions about private, external network

2012-12-05 Thread Dan Wendlandt
The IP allocation pool can be a subset of the overall subnet range if, for
example, you want to limit the set of IP addresses that Quantum will hand
out to a sub-range of the total IPs in the subnet.  This can be useful if a
quantum network is shared with hosts outside of OpenStack (e.g., with
physical hosts provisioned outside of OpenStack).
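
The subset relationship is easy to sanity-check with the standard library alone; the addresses below mirror the subnet-create example quoted later in this thread:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.50.0/24")
pool_start = ipaddress.ip_address("192.168.50.102")
pool_end = ipaddress.ip_address("192.168.50.126")

# Every pool address must fall inside the subnet ...
pool = [ipaddress.ip_address(n) for n in range(int(pool_start), int(pool_end) + 1)]
assert all(addr in subnet for addr in pool)

# ... but the pool need not cover it: the remaining host addresses stay
# free for machines provisioned outside of Quantum.
outside_pool = sum(1 for addr in subnet.hosts()
                   if not pool_start <= addr <= pool_end)
print(len(pool), outside_pool)
```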

dan

On Wed, Nov 28, 2012 at 5:09 PM, Ahmed Al-Mehdi ahmedalme...@gmail.comwrote:

 Thank you very much for the explanation.  However, I am still a bit
 confused.  In the quantum subnet-create ... command for the external
 network, I am already providing the start/end allocation-pool IP addresses.
 What is the need for the 192.168.50.100/24 argument?  In this case, is this
 argument redundant, OR not needed (as in not used by Quantum), OR not
 correctly specified?

 You mentioned 192.168.50.100/30; how did you get /30?  Is that an example,
 or is it based on the start/end of the IP allocation pool?

 Thank you,
 Ahmed.



 On Wed, Nov 28, 2012 at 4:44 PM, gong yong sheng 
 gong...@linux.vnet.ibm.com wrote:

  On 11/29/2012 07:56 AM, Ahmed Al-Mehdi wrote:

 Hello,

  I have a few questions related to private and external network in
 Quantum.  I am running into some odd behavior with networking related to my
 VM instance that I am trying to resolve.


  # quantum net-create --tenant-id $put_id_of_service_tenant ext_net
 --router:external=True

  # quantum subnet-create --tenant-id $put_id_of_service_tenant
 --allocation-pool start=192.168.50.102,end=192.168.50.126 --gateway
 192.168.50.1 ext_net 192.168.50.100/24 --enable_dhcp=False  (step b)


 -  192.168.50.100/24:  Is 192.168.50.100 assigned (reserved) for any
 purpose?  What does this CIDR represent?

 That looks like an invalid CIDR.
 I think if you use 192.168.50.100/30, then 192.168.50.101 will be
 reserved.
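Python's stdlib ipaddress module (newer than this 2012 thread, but handy for
sanity-checking the arithmetic) shows why 192.168.50.100/24 is not a valid
network address while 192.168.50.100/30 is:

```python
import ipaddress

# "192.168.50.100/24" has host bits set, so it is not a network address;
# strict parsing rejects it outright.
try:
    ipaddress.ip_network("192.168.50.100/24")
except ValueError as err:
    print("invalid:", err)

# As a /30, 192.168.50.100 IS a network address: the two usable hosts are
# .101 and .102, and .103 is the broadcast address. That is why the first
# usable address (.101) would end up reserved, per the note above.
net = ipaddress.ip_network("192.168.50.100/30")
print([str(h) for h in net.hosts()])  # ['192.168.50.101', '192.168.50.102']
```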



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Do we have any schema for keystone v3.0 request/responses

2012-12-05 Thread Ali, Haneef
Hi,

Do we have an XSD file for the Keystone v3.0 API?  All the examples show only 
JSON; I don't see even a single request/response example using XML. 
Does Keystone v3.0 support the XML content type?  If so, what is the namespace 
for the v3.0 schema?

Thanks
Haneef
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Do we have any schema for keystone v3.0 request/responses

2012-12-05 Thread heckj
Hey Ali,

We don't have an XSD for the v3 API yet. We've been holding off on finalizing 
one while we make small implementation changes, as we put the API into 
practice and learn which ideas worked and which didn't. Jorge (Rackspace) 
has something and offered to do more, but hasn't submitted it for review, 
and I don't know what state it's in.

We also have modifications to the /token portion of the API pending final 
implementation (Guang is working on these now). When that's complete, we'd 
very much welcome your help in constructing an XSD for ongoing use.

-joe


On Dec 5, 2012, at 4:16 PM, Ali, Haneef haneef@hp.com wrote:
 Hi,
  
 Do we have any  XSD file  for keystone v3.0 api?  All the examples show only 
 json format.  I don’t see even a single request/response example using xml. 
 Does keystone v3.0 support xml content-type?  If so what is the namespace for 
 the v3.0 schema?
  
 Thanks
 Haneef
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Will Heat Work Without systemd (i.e. will it work with init)?

2012-12-05 Thread Steve Baker

On 12/05/2012 08:55 AM, Rickard, Ronald wrote:


I am attempting to install/configure Heat on RHEL 6.3.  This server 
already has other OpenStack (Essex release) products installed: nova, 
glance, keystone, etc.  I built the RPMs for Heat (v7) and Heat JEOS 
(v7) by commenting out the requirements on systemd-units and the 
systemd unit files in the heat.spec because RHEL 6.3 uses init instead 
of systemd.  I am thinking I can replace these systemd unit files with 
init.d scripts to startup Heat.  I installed the RPMs and am at the 
step in the process where I am creating a JEOS with heat_jeos:


heat-jeos --y create F17-x86_64-cfntools --register-with-glance

It takes almost 10 minutes and I see activity in the /var/lib/oz/isos 
and /var/lib/oz/isocontent directory, but eventually, I see the 
following error:


Traceback (most recent call last):
  File "/usr/bin/heat-jeos", line 375, in <module>
    main()
  File "/usr/bin/heat-jeos", line 363, in main
    result = cmd(opts, args)
  File "/usr/bin/heat-jeos", line 139, in command_create
    build_jeos(get_oz_guest(final_tdl))
  File "/usr/lib/python2.6/site-packages/heat_jeos/utils.py", line 132, in build_jeos
    guest.customize(libvirt_xml)
  File "/usr/lib/python2.6/site-packages/oz/RedHat.py", line 1166, in customize
    return self._internal_customize(libvirt_xml, no)
  File "/usr/lib/python2.6/site-packages/oz/RedHat.py", line 1150, in _internal_customize
    self.do_customize(guestaddr)
  File "/usr/lib/python2.6/site-packages/oz/RedHat.py", line 1104, in do_customize
    self.guest_execute_command(guestaddr, content)
  File "/usr/lib/python2.6/site-packages/oz/RedHat.py", line 474, in guest_execute_command
    command, timeout, tunnels)
  File "/usr/lib/python2.6/site-packages/oz/ozutil.py", line 362, in ssh_execute_command
    return subprocess_check_output(cmd)
  File "/usr/lib/python2.6/site-packages/oz/ozutil.py", line 329, in subprocess_check_output
    raise SubprocessException("'%s' failed(%d): %s" % (cmd, retcode, stderr), retcode)


oz.ozutil.SubprocessException: 'ssh -i /etc/oz/id_rsa-icicle-gen -F 
/dev/null -o ServerAliveInterval=30 -o StrictHostKeyChecking=no -o 
ConnectTimeout=10 -o UserKnownHostsFile=/dev/null -o 
PasswordAuthentication=no root@W.X.Y.Z yum -y update fedora-release


yum -y install yum-plugin-fastestmirror cloud-init python-psutil 
python-boto


yum -y update

sed --in-place -e s/Type=oneshot/Type=oneshot\nTimeoutSec=0/ 
/lib/systemd/system/cloud-final.service' failed(2): Warning: 
Permanently added 'W.X.Y.Z' (RSA) to the list of known hosts.


Error: Cannot retrieve metalink for repository: fedora. Please verify 
its path and try again


Error: Cannot retrieve metalink for repository: fedora. Please verify 
its path and try again


Error: Cannot retrieve metalink for repository: fedora. Please verify 
its path and try again


sed: can't read /lib/systemd/system/cloud-final.service: No such file 
or directory



We haven't tested image creation on RHEL 6.3. Most likely we'll focus on 
RHEL 6.4 (when it is released) as our highest priority RHEL target. 
Patches for any distro are welcome though.


You don't actually need to build your own images if the pre-built ones 
meet your needs:

https://github.com/heat-api/prebuilt-jeos-images/downloads

Hopefully this can get you to the next phase of your evaluation. Let us 
know if you have any more issues.


cheers
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Why my vm often change into shut off status by itself?

2012-12-05 Thread pyw
My virtual machines, after being created, often go into the stopped state
without any intervention:

pyw@ven-1:~/devstack$ virsh list --all
 Id    Name               State
----------------------------------
 -     instance-0040      shut off
 -     instance-0044      shut off
 -     instance-0045      shut off
 -     instance-0046      shut off
 -     instance-0047      shut off
 -     instance-005b      shut off
 -     instance-005e      shut off
 -     instance-0065      shut off
 -     instance-006e      shut off
 -     instance-0075      shut off
 -     instance-0076      shut off
 -     instance-0077      shut off
 -     instance-007c      shut off
 -     instance-007d      shut off
 -     instance-0081      shut off
 -     instance-0082      shut off
 -     instance-0083      shut off
 -     instance-0084      shut off
 -     instance-0085      shut off

Querying the nova database, you can see:
vm_state: stopped
Why is this?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A confuse about the FlatDHCP network

2012-12-05 Thread Lei Zhang
Thank you very much, Vishvananda.
But I am still confused about the 192.168.0.0/24 and 10.0.0.0/8 addresses.
What is meant by "The addresses will be moved on to the bridge"?  Does it
mean the 192.168.0.0/24 address will disappear?  In my opinion, the bridged
NIC (eth1) should work in promiscuous mode and its IP should be 0.0.0.0, so
eth1 should not own any IP.
But if the 192 address doesn't exist, how do the compute nodes communicate
with each other?  Through eth0?  I have no idea.


On Thu, Dec 6, 2012 at 3:12 AM, Vishvananda Ishaya vishvana...@gmail.comwrote:


 On Dec 5, 2012, at 1:53 AM, Lei Zhang zhang.lei@gmail.com wrote:

 Hi all,

 I am reading the
 http://docs.openstack.org/trunk/openstack-compute/admin/content/libvirt-flat-dhcp-networking.html,
 I got the following deploy architecture. But there are several that I am
 confused.

- How and why does the 192.168.0.0/24 IP range exist?  Is it necessary?
Does eth1 on each physical machine own two IPs (10.0.0.0/24 and
192.168.0.0/24)?  Is that possible?  In nova-compute, eth1 should be
bridged by br100 and should not own any IP address, right?

 The addresses will be moved on to the bridge. The point of having an ip
 address is so that things like rabbit and mysql can communicate over a
 different set of addresses than the guest network. Usually this would be
 done on a separate eth device (eth2) or vlan, but I was trying to keep



- Put differently, should the nova-network eth0 be connected to the public
Internet switch so all VMs can access the Internet, and the nova-compute
eth0 bound to an internal switch for admin access?
Is that right?

 Ideally there are three eth devices / vlans a) public (for 99 adddresses
 in diagram) b) management (for 192 addresses in diagram) c) guest (for 10
 addresses in diagram)



 --
 Lei Zhang

 Blog: http://jeffrey4l.github.com
 twitter/weibo: @jeffrey4l

  ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-05 Thread Wangpan
If the hypervisor is KVM, you can check the qemu log for exceptions, 
e.g. /var/log/libvirt/qemu/instance-0111.log.
If Xen is used, the log files are at /var/log/xen/qemu-*.log.
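A small helper along those lines, sketched in Python; the log path pattern is
the libvirt/KVM default mentioned above (adjust for Xen), and the keyword
list is just a starting heuristic:

```python
import glob
import re

def scan_qemu_logs(pattern="/var/log/libvirt/qemu/instance-*.log"):
    # Collect suspicious lines (errors, terminations, shutdowns) from each
    # instance log, so an unexpected "shut off" can be traced to a cause.
    suspects = re.compile(r"error|terminat|shutting down", re.IGNORECASE)
    hits = {}
    for path in sorted(glob.glob(pattern)):
        with open(path, errors="replace") as logfile:
            lines = [ln.rstrip() for ln in logfile if suspects.search(ln)]
        if lines:
            hits[path] = lines
    return hits
```

Running it on a compute node and printing the result per file quickly shows
which instances logged something before going down.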

2012-12-06



Wangpan



From: pyw
Sent: 2012-12-06 12:05
Subject: Re: [Openstack] Why my vm often change into shut off status by itself?
To: Wangpan hzwang...@corp.netease.com
Cc:

Thank you for the answer.
My image is OK; after the VM is created, I can log in to the instance.
I have many virtual machines running (10+). Generally, after a day or more, a 
virtual machine shuts off automatically. Yesterday the situation became 
worse: suddenly all the virtual machines shut off. I did not see anything 
unusual in the nova logs (no warnings/errors).



I use devstack, and the nova version is (last commit):
   commit 3d418dcf860894523eff62a8338d09d58e994b0e

  Merge: e76848a 8b4896b
 Author: Jenkins jenk...@review.openstack.org
 Date:   Sun Nov 4 00:49:54 2012 +

 Merge handles empty dhcp_domain with hostname in metadata into 
stable/folsom


And I have not upgraded nova.




2012/12/6 Wangpan hzwang...@corp.netease.com

Hi pengyuwei,
I guess maybe your image for the VMs is broken, so your VMs shut off after 
being created.
Just a guess.
Good luck!

2012-12-06



Wangpan



From: pyw
Sent: 2012-12-06 11:00
Subject: [Openstack] Why my vm often change into shut off status by itself?
To: openstack openstack@lists.launchpad.net
Cc:

My virtual machines, after being created, often go into the stopped state
without any intervention:


pyw@ven-1:~/devstack$ virsh list --all
 Id    Name               State
----------------------------------
 -     instance-0040      shut off
 -     instance-0044      shut off
 -     instance-0045      shut off
 -     instance-0046      shut off
 -     instance-0047      shut off
 -     instance-005b      shut off
 -     instance-005e      shut off
 -     instance-0065      shut off
 -     instance-006e      shut off
 -     instance-0075      shut off
 -     instance-0076      shut off
 -     instance-0077      shut off
 -     instance-007c      shut off
 -     instance-007d      shut off
 -     instance-0081      shut off
 -     instance-0082      shut off
 -     instance-0083      shut off
 -     instance-0084      shut off
 -     instance-0085      shut off


query the nova the database, you can see:
vm_state: stopped
Why is this?___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Why my vm often change into shut off status by itself?

2012-12-05 Thread Veera Reddy
On Thu, Dec 6, 2012 at 8:29 AM, pyw pengyu...@gmail.com wrote:

 instance-0040      shut off




Hi,

Try to start the VM with the virsh command:

 virsh start instance-0040

With this we can see what the actual problem is.

Regards,
Veera.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Understanding flavors of VM

2012-12-05 Thread Lei Zhang
Hi Michael,

Could you send us the doc link? Thanks a lot.


On Thu, Dec 6, 2012 at 6:47 AM, Michael Still
michael.st...@canonical.comwrote:

 On 12/05/2012 06:59 PM, Marco CONSONNI wrote:

  To be honest it seems like I missed something because, from your
  investigation, the storage is kept under _base. Strange. I didn't know
 that.

 Hi! The following description is libvirt specific. Bearing in mind that
 this code is a moving target and has been re-written at least three
 times in the last year [1], it works a bit like this...

 - you request an instance using a given image
 - that image is fetched to _base from glance (if its not already there)
 - that image is format converted if required, and then resized to the
 requested size
 - the instance disk image is a copy-on-write layer on top of that
 resized image in _base and is stored in the instance's directory
 - the instance is booted using the COW layer

 If another instance with the same image / disk size starts, it can short
 circuit the process and use the already existing image, which is cool.

 When an instance is terminated, the instance directory is removed, but
 files in _base remain.

 If you have image cache management turned on, then the files in _base
 are periodically cleaned up. The files in _base are also checksummed to
 try and detect file corruption, although that hasn't been the most loved
 feature ever implemented.

 Hope this helps,
 Michael

 1: Several times I have gone to write a blog post about how this works,
 and then realized the code has changed again.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Instance VNC Console - Failed to connect to server (code: 1006)

2012-12-05 Thread Lei Zhang
Why is novncproxy_base_url set to
http://PUBLIC_IP_MANAGEMENT_NETWORK:6080/vnc_auto.html? The
PUBLIC_IP_MANAGEMENT_NETWORK should be the physical machine's address. Could
you check that:

   - the VNC server is set up properly: use a VNC client to connect to the
     server (vncviewer master)

   - the noVNC proxy is started: netstat -nltp | grep 6080
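From a remote machine, where running netstat on the proxy host itself isn't
an option, the check can be approximated with a quick TCP probe. A sketch;
the host placeholder and port 6080 are just the values from this thread:

```python
import socket

def port_open(host, port, timeout=2.0):
    # True if a TCP connection to host:port succeeds -- a remote stand-in
    # for running "netstat -nltp | grep 6080" on the proxy host itself.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("PUBLIC_IP_MANAGEMENT_NETWORK", 6080) should be True
# when nova-novncproxy is running and reachable from this machine.
```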


On Wed, Dec 5, 2012 at 11:30 PM, Alex Vitola alex.vit...@gmail.com wrote:

 I set up an environment with 1 Cloud Controller and 2 Cloud Compute nodes.

 When I try to access the machine through the Dashboard, it shows me the
 following message:

  Failed to connect to server (code: 1006)

 If I access the Cloud Compute node directly via VNC, I can reach the
 console, but not through the panel.


 Stranger still, if I leave tcpdump running on both servers, nothing hits
 port 5900 on either of the two servers.

 ps.: I'm using the default settings in nova.conf


 # # Novnc
 novnc_enable = true
 novncproxy_base_url =
 http://PUBLIC_IP_MANAGEMENT_NETWORK:6080/vnc_auto.html
 vncserver_proxyclient_address = 127.0.0.1
 vncserver_listen = 0.0.0.0

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Instance no route to host problem

2012-12-05 Thread Lei Zhang
Could you check the iptables rules inside the VM, and whether they drop
packets on port 80?
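Worth noting: telnet's "No route to host" here is usually an ICMP
host-unreachable generated by an iptables REJECT rule, not an actual routing
failure, whereas a DROP rule shows up as a timeout. One way to tell the cases
apart from the client side, sketched in Python (host and port are
placeholders):

```python
import errno
import socket

def probe(host, port, timeout=3.0):
    # Classify a TCP connect attempt, to distinguish a security-group /
    # iptables REJECT (host unreachable) from DROP (timeout) and from
    # "nothing listening" (connection refused).
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "open"
    except socket.timeout:
        return "filtered (packets dropped?)"
    except OSError as err:
        if err.errno == errno.EHOSTUNREACH:
            return "rejected (ICMP host unreachable; check iptables)"
        if err.errno == errno.ECONNREFUSED:
            return "closed (nothing listening on the port)"
        raise
    finally:
        sock.close()
```

Calling probe("172.16.1.4", 80) from the same place telnet was run would
distinguish the rejected case from a web server that simply isn't listening.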


On Thu, Dec 6, 2012 at 12:29 AM, Patrick Petit 
patrick.michel.pe...@gmail.com wrote:

 Dear Stackers,

 I am running instance wordpress.WikiServer


 $ nova list

 +--------------------------------------+--------------------------+--------+------------------------------------+
 | ID                                   | Name                     | Status | Networks                           |
 +--------------------------------------+--------------------------+--------+------------------------------------+
 | 6be47af7-2e29-4b4c-afeb-0a7f760f5970 | test2                    | ACTIVE | xlcloud=172.16.1.6                 |
 | 5a4c552f-933c-4a06-8e6f-164176380af5 | wordpress.DatabaseServer | ACTIVE | xlcloud=172.16.1.3                 |
 | ddb120d9-e1ad-444c-8490-37ecb15f500e | wordpress.WikiServer     | ACTIVE | xlcloud=172.16.1.4, 10.197.217.131 |
 +--------------------------------------+--------------------------+--------+------------------------------------+


 With Security Group setup as:

 $ nova secgroup-list

 +---------+-------------+
 | Name    | Description |
 +---------+-------------+
 | default | default     |
 +---------+-------------+


 $ nova secgroup-list-rules default
 +-+---+-+---+--+
 | IP Protocol | From Port | To Port | IP Range  | Source Group |
 +-+---+-+---+--+
 | icmp| -1| -1  | 0.0.0.0/0 |  |
 | tcp | 22| 22  | 0.0.0.0/0 |  |
 | tcp | 80| 80  | 0.0.0.0/0 |  |
 +-+---+-+---+--+

 I can ping and SSH through the fixed or floating IP without any problem
 (172.16.1.4, 10.197.217.131).
 But HTTP requests on port 80 don't go through.
 I get a "no route to host" error message from wget or telnet, for example.

 Ex. $ telnet 172.16.1.4 80
 Trying 172.16.1.4...
 telnet: Unable to connect to remote host: No route to host.
 Clearly it's not a routing problem.

 Any idea what the problem could be or hints to debug it.

 Thanks
 Patrick



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_quantum_trunk #108

2012-12-05 Thread openstack-testing-bot
Title: precise_grizzly_quantum_trunk
General Information
  Build result:   FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/108/
  Project:        precise_grizzly_quantum_trunk
  Date of build:  Wed, 05 Dec 2012 09:31:00 -0500
  Build duration: 1 min 47 sec
  Build cause:    Started by an SCM change
  Built on:       pkg-builder
  Health report:  Build stability: all recent builds failed (score 0)

Changes
  "Add VIF binding extensions" by gkotton:
    edit quantum/plugins/linuxbridge/lb_quantum_plugin.py
    edit quantum/plugins/openvswitch/ovs_quantum_plugin.py
    add  quantum/extensions/portbindings.py
    edit quantum/tests/unit/linuxbridge/test_linuxbridge_plugin.py
    edit etc/policy.json
    edit quantum/tests/unit/openvswitch/test_openvswitch_plugin.py

Console Output [...truncated 2775 lines...]
  ERROR:root:Error occurred during package creation/build: Command
    '['/usr/bin/schroot', '-r', '-c',
    'precise-amd64-f8ff15bf-c9cb-44ad-98df-693761ecc80a', '-u', 'jenkins',
    '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned
    non-zero exit status 3
  Command log: bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly;
    mk-build-deps; python setup.py sdist; bzr merge
    lp:~openstack-ubuntu-testing/quantum/precise-grizzly --force; dch -b -D
    precise --newversion 2013.1+git201212050931~precise-0ubuntu1; dch -a
    entries for commits 64f2a38, 4aaf0fe, 0dea610, b836e71, 4ec139e, 643a36b,
    e56f174, 06b2b2b, 797036f, e4ee84f, d06b511, dc107a5, 87e9b62, 681d7d3,
    58cb6ce, ac81d9d, 0c3dd5a; debcommit; bzr builddeb -S -- -sa -us -uc
  subprocess.CalledProcessError: the schroot 'bzr builddeb' command returned
    non-zero exit status 3
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_deploy #5

2012-12-05 Thread openstack-testing-bot
Title: precise_grizzly_deploy
General Information
  Build result:   FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_grizzly_deploy/5/
  Project:        precise_grizzly_deploy
  Date of build:  Wed, 05 Dec 2012 11:38:19 -0500
  Build duration: 46 min
  Build cause:    Started by command line (three times)
  Built on:       master
  Health report:  Build stability: all recent builds failed (score 0)
  Changes:        No changes

Build Artifacts
  logs/test-{02,03,04,05,06,07,08,09,10,12}.os.magners.qa.lexington-log.tar.gz

Console Output [...truncated 16211 lines...]
  Logs and information were gathered from each test node over SFTP
  (paramiko), then:
  ERROR:root:Unable to get information from test-12.os.magners.qa.lexington
  ERROR:root:Unable to get information from test-04.os.magners.qa.lexington
  ERROR:root:Unable to get information from test-02.os.magners.qa.lexington
  + exit 1
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_deploy #7

2012-12-05 Thread openstack-testing-bot
Title: precise_grizzly_deploy
General Information
  Build result:   FAILURE
  Build URL:      https://jenkins.qa.ubuntu.com/job/precise_grizzly_deploy/7/
  Project:        precise_grizzly_deploy
  Date of build:  Wed, 05 Dec 2012 14:09:08 -0500
  Build duration: 46 min
  Build cause:    Started by command line
  Built on:       master
  Health report:  Build stability: all recent builds failed (score 0)
  Changes:        No changes

Build Artifacts
  logs/test-{02,03,04,05,06,07,08,09,10,12}.os.magners.qa.lexington-log.tar.gz

Console Output [...truncated 16641 lines...]
  Logs and information were gathered from each test node over SFTP
  (paramiko); no per-node errors were reported, then:
  + exit 1
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Email was triggered for: Failure
  Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_keystone_trunk #47

2012-12-05 Thread openstack-testing-bot
Title: precise_grizzly_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/47/
Project: precise_grizzly_keystone_trunk
Date of build: Wed, 05 Dec 2012 16:31:01 -0500
Build duration: 2 min 59 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
Added documentation for the external auth support (by aloga)
  edit doc/source/index.rst
  add  doc/source/external-auth.rst
Bug 1075090 -- Fixing log messages in python source code to support internationalization. (by nachiappan.veerappan-nachiappan)
  edit keystone/common/utils.py
  edit keystone/common/cms.py
  edit keystone/common/sql/nova.py
  edit keystone/common/sql/legacy.py
  edit keystone/config.py
  edit keystone/common/ldap/core.py
  edit keystone/test.py
  edit keystone/clean.py
  edit keystone/common/sql/core.py
  edit keystone/common/wsgi.py
  edit keystone/common/bufferedhttp.py
  edit keystone/catalog/backends/templated.py
  edit keystone/catalog/core.py
  edit keystone/common/ldap/fakeldap.py
  edit keystone/policy/backends/rules.py
use keystone test and change config during setUp (by iartarisi)
  edit tests/test_cert_setup.py
Only import * from core modules (by dolph.mathews)
  add  keystone/catalog/controllers.py
  edit HACKING.rst
  edit keystone/contrib/admin_crud/core.py
  add  keystone/policy/controllers.py
  edit keystone/service.py
  edit keystone/catalog/__init__.py
  edit keystone/contrib/user_crud/core.py
  edit keystone/policy/__init__.py
  edit keystone/policy/core.py
  edit keystone/identity/routers.py
  edit keystone/identity/__init__.py
  edit keystone/catalog/core.py
  edit keystone/identity/controllers.py

Console Output
[...truncated 2106 lines...]
Package: keystone
Package-Time: 0
Source-Version: 2013.1+git201212051631~precise-0ubuntu1
Space: 0
Status: failed
Version: 2013.1+git201212051631~precise-0ubuntu1
Finished at 20121205-1632
Build needed 00:00:00, 0k disc space
E: Package build dependencies not satisfied; skipping
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201212051631~precise-0ubuntu1.dsc']' returned non-zero exit status 3
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201212051631~precise-0ubuntu1.dsc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmp9C1Ls5/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmp9C1Ls5/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log af8761d9e0add62a83604b77ab015f5a8b3120a9..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/keystone/precise-grizzly --force
dch -b -D precise --newversion 2013.1+git201212051631~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [c858c1b] Only 'import *' from 'core' modules
dch -a [77dee93] use keystone test and change config during setUp
dch -a [84a0b2d] Bug 1075090 -- Fixing log messages in python source code to support internationalization.
dch -a [8c15e3e] Added documentation for the external auth support
dch -a [5b73757] Validate password type (bug 1081861)
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC keystone_2013.1+git201212051631~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A keystone_2013.1+git201212051631~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201212051631~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_
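The traceback above is the packaging driver surfacing a failed sbuild invocation: subprocess raises CalledProcessError for the non-zero exit status, and build-package re-raises it ("raise e") after printing its command log. A minimal sketch of that pattern; the run_build name is hypothetical, and `false` stands in for the failing sbuild command, since the actual build-package source is not shown here:

```python
import subprocess

def run_build(cmd):
    """Run a packaging command, re-raising CalledProcessError on failure."""
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError as e:
        # In the logs above, build-package prints the complete command log
        # at this point before re-raising, which produces the traceback.
        raise e

try:
    # 'false' always exits 1, standing in for sbuild's exit status 3.
    run_build(["false"])
except subprocess.CalledProcessError as e:
    print("Command %r returned non-zero exit status %d" % (e.cmd, e.returncode))
```

Re-raising (rather than swallowing the error) is what lets the "Execute shell" build step see a non-zero exit and mark the build as failed.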

[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_quantum_trunk #109

2012-12-05 Thread openstack-testing-bot
Title: precise_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/109/
Project: precise_grizzly_quantum_trunk
Date of build: Wed, 05 Dec 2012 17:31:01 -0500
Build duration: 1 min 46 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
Returns more appropriate error when address pool is exhausted (by gkotton)
  edit quantum/api/v2/base.py

Console Output
[...truncated 2779 lines...]
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7162aeca-f657-4b1d-874d-fcaf28172bc8', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpGuItmq/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpGuItmq/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/precise-grizzly --force
dch -b -D precise --newversion 2013.1+git201212051731~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7162aeca-f657-4b1d-874d-fcaf28172bc8', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'precise-amd64-7162aeca-f657-4b1d-874d-fcaf28172bc8', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
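The "Error in sys.excepthook" noise in these logs is a secondary failure: apport's exception hook calls os.getcwd(), which raises OSError [Errno 2] because the process's working directory (the temporary build tree under /tmp) has already been removed, so the real CalledProcessError only appears under "Original exception was". A minimal reproduction of that underlying getcwd() behaviour on Linux:

```python
import os
import tempfile

# Enter a temporary directory, then remove it out from under the process,
# as happens when the build tree under /tmp is cleaned up.
d = tempfile.mkdtemp()
os.chdir(d)
os.rmdir(d)

try:
    os.getcwd()
except OSError as e:
    # [Errno 2] No such file or directory, matching the excepthook error above.
    print("OSError: [Errno %d] %s" % (e.errno, e.strerror))

os.chdir("/")  # restore a valid working directory
```

This is why the build script's own traceback is still printed: the interpreter falls back to the default excepthook once apport's hook fails.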


[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_quantum_trunk #111

2012-12-05 Thread openstack-testing-bot
Title: raring_grizzly_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/111/
Project: raring_grizzly_quantum_trunk
Date of build: Wed, 05 Dec 2012 17:31:01 -0500
Build duration: 2 min 52 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
Returns more appropriate error when address pool is exhausted (by gkotton)
  edit quantum/api/v2/base.py

Console Output
[...truncated 3236 lines...]
ERROR:root:Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-68d85e51-1d76-42b4-8e4c-d4faa77e7c11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpP9QVs7/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpP9QVs7/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log d0a284de2ff272ae65a62bc2722c0274ff1be7d2..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 2013.1+git201212051731~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [b939b07] Returns more appropriate error when address pool is exhausted
dch -a [64f2a38] Add VIF binding extensions
dch -a [4aaf0fe] Sort router testcases as group for L3NatDBTestCase
dch -a [0dea610] Refactor resources listing testcase for test_db_plugin.py
dch -a [b836e71] l3 agent rpc
dch -a [4ec139e] Fix rootwrap cfg for src installed metadata proxy.
dch -a [643a36b] Add metadata_agent.ini to config_path in setup.py.
dch -a [e56f174] add state_path sample back to l3_agent.ini file
dch -a [06b2b2b] plugin/ryu: make live-migration work with Ryu plugin
dch -a [797036f] Remove __init__.py from bin/ and tools/.
dch -a [e4ee84f] Removes unused code in quantum.common
dch -a [d06b511] Fixes import order nits
dch -a [dc107a5] update state_path default to be the same value
dch -a [87e9b62] Use /usr/bin/ for the metadata proxy in l3.filters
dch -a [681d7d3] prevent deletion of router interface if it is needed by a floating ip
dch -a [58cb6ce] Completes coverage of quantum.agent.linux.utils
dch -a [ac81d9d] Fixes Rpc related exception in NVP plugin
dch -a [0c3dd5a] add metadata proxy support for Quantum Networks
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-68d85e51-1d76-42b4-8e4c-d4faa77e7c11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-r', '-c', 'raring-amd64-68d85e51-1d76-42b4-8e4c-d4faa77e7c11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp