Re: [Openstack] [Cinder] FilterScheduler not handling multi-backend

2013-07-11 Thread Jérôme Gallard
Hi,

Did you specify the volume_backend_name in your cinder.conf for each
backend section? Example:

[backend1]
...
volume_backend_name=...

Maybe this can help you:
http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/multi_backend.html
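
For reference, here is a minimal sketch of the kind of configuration that
page describes (the section names and driver below are illustrative). Note
that the volume_backend_name set in each backend section has to match the
one referenced by the volume type's extra specs, and that it is the
CapabilitiesFilter (part of the scheduler's default filter list) that
performs this match:

[DEFAULT]
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
enabled_backends=backend1,backend2

[backend1]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_ISCSI

[backend2]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_ISCSI_B

$ cinder type-create gold
$ cinder type-key gold set volume_backend_name=LVM_ISCSI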

Regards,
Jérôme

On Mon, Jul 8, 2013 at 2:50 PM, yatin kumbhare  wrote:
> Hello Folks,
>
> I have cinder volume service, setup with both FC and ISCSI driver
> (multi-backend).
>
> Here's cinder.conf
>
> scheduler_host_manager=cinder.scheduler.host_manager.HostManager
> scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter
> scheduler_default_weighers=CapacityWeigher
> scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
>
> enabled_backends=3PAR-ISCSI,3PAR-FC
>
> [3PAR-ISCSI]
> 
>
> [3PAR-FC]
> 
>
> $ cinder-manage host list
>
> host                zone
> ask27               nova
> ask27@3PAR-FC       nova
> ask27@3PAR-ISCSI    nova
>
> $ cinder extra-specs-list
> +--------------------------------------+-------+-----------------------------------------------------------------------------+
> |                  ID                  |  Name |                                 extra_specs                                 |
> +--------------------------------------+-------+-----------------------------------------------------------------------------+
> | 08109e24-79e0-4d24-bb8b-f26c39c6f0e2 |   FC  |   {u'persona': u'11 - VMware', u'volume_backend_name': u'HP3PARFCDriver'}   |
> | b94d25b3-c022-4fb1-ba00-53ff9b6901e7 | ISCSI | {u'persona': u'11 - VMware', u'volume_backend_name': u'HP3PARISCSIDriver'} |
> +--------------------------------------+-------+-----------------------------------------------------------------------------+
>
> Now, when I create a volume:
>
> $cinder create --display-name vol99 --volume-type ISCSI 1
>
>
> $ cinder show 15820d30-44ff-47e3-9dec-63921800e1b9
> +------------------------------+-----------------------------------------------------------------------------------+
> |           Property           |                                       Value                                       |
> +------------------------------+-----------------------------------------------------------------------------------+
> |         attachments          |                                         []                                        |
> |      availability_zone       |                                        nova                                       |
> |           bootable           |                                       false                                       |
> |          created_at          |                             2013-07-08T06:33:02.186351                            |
> |     display_description      |                                        None                                       |
> |         display_name         |                                       vol99                                       |
> |              id              |                        15820d30-44ff-47e3-9dec-63921800e1b9                       |
> |           metadata           | {u'CPG': u'ESX_CLUSTERS_RAID5_250GB_FC', u'3ParName': u'osv-FYINMET-R.Od7GOSGADhuQ', u'snapCPG': u'ESX_CLUSTERS_RAID5_250GB_FC'} |
> |    os-vol-host-attr:host     |                                   ask27@3PAR-FC                                   |
> | os-vol-tenant-attr:tenant_id |                          9e27e1aded67424d895bf83a4026484d                         |
> |             size             |                                         1                                         |
> |         snapshot_id          |                                        None                                       |
> |         source_volid         |                                        None                                       |
> |            status            |                                      available                                    |
> |         volume_type          |                                         FC                                        |
> +------------------------------+-----------------------------------------------------------------------------------+
>
>
> Inside cinder's scheduler.log:
>
> Filtered [host 'ask27@3PAR-ISCSI': free_capacity_gb: 245, host
> 'ask27@3PAR-FC': free_capacity_gb: 246] _schedule
> /usr/lib/python2.6/site-packages/cinder/scheduler/filter_scheduler.py:208
> Choosing WeighedHost [host: ask27@3PAR-FC, weight: 246.0] _schedule
> /usr/lib/python2.6/site-packages/cinder/scheduler/filter_scheduler.py:214
> Making asynchronous cast on cinder-volume.ask27@3PAR-FC...
>
> For volume-type ISCSI, host ask27@3PAR-FC gets selected, which is wrong.
>
> Because of this, nova volume-attach loads the wrong volume driver to
> attach the volume.
>
> I have already looked at the bug list; there's nothing I could relate
> this to.
>
> Has anybody seen this issue before?
>
>
> Thanks and Regards,
> Yatin Kumbhare
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

Re: [Openstack] [Cinder] Re: Multiple machines hosting cinder-volumes with Folsom ?

2013-05-31 Thread Jérôme Gallard
Hi Sylvain,

Great to know that you found how to solve your issue.

Thanks for reporting that you found the Grizzly doc confusing.
In fact, the Grizzly release introduced the multi-backend feature. This
feature makes it possible to have more than one backend on the same node
(i.e., to run several cinder-volume backends on the same node). It is not
available in Folsom: there you can only run one cinder-volume per node, so
if you want to manage several backends, you need several nodes.
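
For the record, a second cinder-volume node in Folsom boils down to a
second host running the service against the shared database and message
queue. A minimal sketch of that node's cinder.conf (hostnames and
credentials are illustrative):

[DEFAULT]
sql_connection = mysql://cinder:CINDER_PASS@controller/cinder
rabbit_host = controller
volume_group = cinder-volumes
iscsi_ip_address = 192.168.1.12

Once cinder-volume is started there, both hosts should appear in
cinder-manage host list, and the scheduler will place new volumes on
either of them.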

Thanks a lot for your remarks,
Jérôme


On Fri, May 31, 2013 at 1:55 PM, Sylvain Bauza wrote:

>  Thanks, but it didn't match my needs. I already know how to deploy Cinder
> on a single host; my point was more about deploying a second
> cinder-volume instance, and if so, what to do.
>
> Never mind, I succeeded in deploying a second cinder-volume, just by
> looking at the packages and the confs. It's pretty straightforward, so I'm
> not surprised it wasn't documented. Nevertheless, I think the Grizzly
> doc I mentioned [1] is confusing: reading it, I thought Cinder was unable
> to have two distinct volume hosts with the Folsom release. Maybe updating
> the Folsom branch of the Cinder documentation, clarifying that it *is*
> possible, is worth a try?
>
> Anyway, I'm documenting the process on my own (new) blog. Stay tuned,
> I'll post the URL there.
>
> -Sylvain
>
>
>
> Le 31/05/2013 11:39, Jérôme Gallard a écrit :
>
> Hi Sylvain,
>
>  Maybe the folsom documentation for cinder will help you:
>
> http://docs.openstack.org/folsom/openstack-compute/install/apt/content/osfolubuntu-cinder.html
>
>
>  Regards,
> Jérôme
>
>
> On Fri, May 31, 2013 at 9:21 AM, Sylvain Bauza wrote:
>
>> Putting openstack-ops@ in the loop :-)
>>
>> Le 30/05/2013 17:26, Sylvain Bauza a écrit :
>>
>>  Le 30/05/2013 15:25, Sylvain Bauza a écrit :
>>>
>>>> Hi,
>>>>
>>>> It's quite unclear to me whether it is possible *in Folsom* to
>>>> have two distinct Cinder hosts, each with one LVM backend called
>>>> cinder-volumes.
>>>>
>>>> As per the doc [1], I would say the answer is no, but could you please
>>>> confirm?
>>>>
>>>> If so, do you have any idea how to work around a nearly full LVM
>>>> cinder-volumes VG? (I can hardly add a new disk to create a second PV.)
>>>>
>>>> Thanks,
>>>> -Sylvain
>>>>
>>>> [1] :
>>>> http://docs.openstack.org/grizzly/openstack-block-storage/admin/content/multi_backend.html
>>>>
>>>
>>> Replying to myself. As per [2], it seems a multiple cinder-volume
>>> setup in Folsom is achievable. Could someone from Cinder confirm that
>>> this setup is OK?
>>>
>>> [2] : https://lists.launchpad.net/openstack/msg21825.html
>>>
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] [Cinder] Multi backend config issue

2013-04-17 Thread Jérôme Gallard
Hi,

Yes, it's very surprising. I managed to reproduce your error by doing the
operations manually (host and guest are Ubuntu 12.04, DevStack
deployment).
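
(For context, by "manually" I mean roughly the following open-iscsi steps
against the target that cinder-volume exports; the portal address and the
IQN, assembled from the default iscsi_target_prefix and a volume ID, are
illustrative:)

$ sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.104:3260
$ sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d \
    -p 10.0.0.104:3260 --login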

Another interesting thing is that, in my case, with multi-backend enabled,
tempest tells me everything is right:

/opt/stack/tempest# nosetests -sv
tempest.tests.volume.test_volumes_actions.py
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_attach_detach_volume_to_instance[smoke]
... ok
tempest.tests.volume.test_volumes_actions.VolumesActionsTest.test_get_volume_attachment
... ok

--
Ran 2 tests in 122.465s

OK


I don't think that error is linked to the distribution. With my
configuration, if I remove the multi-backend option, attachment is possible.

Regards,
Jérôme


On Wed, Apr 17, 2013 at 3:22 PM, Steve Heistand wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> In my case (as near as I can tell) it's something to do with the
> inability of Ubuntu 12.04 (as a VM) to do hot-plug PCI.
> The node itself is 12.04; it's just the VM part that doesn't work as
> Ubuntu. Haven't tried 12.10 or Raring as a VM.
>
> steve
>
> On 04/17/2013 05:42 AM, Heiko Krämer wrote:
> > Hi Steve,
> >
> > yeah it's running ubuntu 12.04 on the nodes and on the vm.
> >
> > But a configuration parsing error should normally have nothing to do
> > with the distribution?! Maybe the oslo version or something like that.
> >
> > But thanks for your hint.
> >
> > Greetings Heiko
> >
> > On 17.04.2013 14:36, Steve Heistand wrote:
> >> What OS are you running in the VM? I had similar issues with Ubuntu
> >> 12.04, but things worked great with CentOS 6.4.
> >>
> >>
> >> On 04/17/2013 01:15 AM, Heiko Krämer wrote:
> >>> Hi Guys,
> >>>
> >>> I'm running into a strange config issue with the cinder-volume
> >>> service. I'm trying to use the multi-backend feature in Grizzly; the
> >>> scheduler works fine, but the volume service is not behaving
> >>> correctly. I can create/delete volumes but not attach them.
> >>>
> >>> My cinder.conf (abstract):
> >>>
> >>> # Backend Configuration
> >>> scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
> >>> scheduler_host_manager=cinder.scheduler.host_manager.HostManager
> >>>
> >>> enabled_backends=storage1,storage2
> >>>
> >>> [storage1]
> >>> volume_group=nova-volumes
> >>> volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
> >>> volume_backend_name=LVM_ISCSI
> >>> iscsi_helper=tgtadm
> >>>
> >>> [storage2]
> >>> volume_group=nova-volumes
> >>> volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
> >>> volume_backend_name=LVM_ISCSI
> >>> iscsi_helper=tgtadm
> >>>
> >>>
> >>>
> >>> This section is the same on each host. If I try to attach an existing
> >>> volume to an instance, I get the following error from cinder-volume:
> >>>
> >>> 2013-04-16 17:18:13 AUDIT [cinder.service] Starting cinder-volume node (version 2013.1)
> >>> 2013-04-16 17:18:13 INFO [cinder.volume.manager] Updating volume status
> >>> 2013-04-16 17:18:13 INFO [cinder.volume.iscsi] Creating iscsi_target for: volume-b83ff42b-9a58-4bf9-8d95-945829d3ee9d
> >>> 2013-04-16 17:18:13 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:13 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:14 INFO [cinder.volume.manager] Updating volume status
> >>> 2013-04-16 17:18:14 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:14 INFO [cinder.openstack.common.rpc.common] Connected to AMQP server on 10.0.0.104:5672
> >>> 2013-04-16 17:18:26 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
> >>> Traceback (most recent call last):
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
> >>>     rval = self.proxy.dispatch(ctxt, version, method, **args)
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
> >>>     return getattr(proxyobj, method)(ctxt, **kwargs)
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 665, in initialize_connection
> >>>     return self.driver.initialize_connection(volume_ref, connector)
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 336, in initialize_connection
> >>>     if self.configuration.iscsi_helper == 'lioadm':
> >>>   File "/usr/lib/python2.7/dist-packages/cinder/volume/configuration.py", line 83, in __getattr__
> >>>     return getattr(self.local_conf, value)
> >>>   File "/usr/lib/python2.7/dist-packages/oslo/confi

[Openstack] Cinder Multi-Backend Documentation

2013-03-15 Thread Jérôme Gallard
Hi John, Michael,

I would like to help with the Cinder documentation.

I have noticed that there is no open bug for the multi-backend documentation.
Anne allowed me to open a bug
(https://lists.launchpad.net/openstack/msg21938.html).
Is it OK with you if I assign this bug to myself, or do you have other
plans for writing this documentation?

If that's OK with you, I will write this documentation with the help of:
https://wiki.openstack.org/wiki/Cinder-multi-backend

Thanks a lot,
Jérôme

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Call for help on Grizzly documentation

2013-03-15 Thread Jérôme Gallard
Hi Anne,

I'm a newbie in the "documentation area", but I would like to help.

I haven't found any documentation (or open bug) regarding the cinder
multi-backend feature
(https://wiki.openstack.org/wiki/Cinder-multi-backend --
https://review.openstack.org/#/c/21815/).

Do you know if this documentation is planned to be written?
Can I open a bug for this?

Thanks a lot,
Jérôme

On Tue, Mar 12, 2013 at 3:27 PM, Anne Gentle  wrote:
> Hi all,
> You all did great with DocImpact, but now that we're less than a month from
> release, the tiny doc team is facing a long list of doc bugs that won't be
> done by April 4th, many generated by DocImpact flags.
>
> We typically do a "release" of the docs about a month after the actual
> release date, to ensure packages are available and to try to get our doc bug
> backlog to a manageable level.
>
> As you can see from our backlog for operator docs in openstack-manuals [1]
> and API docs in api-site [2], there are over 50 confirmed doc bugs for
> Grizzly operator and admin docs and less than 20 for API docs. With those
> numbers we need all the help we can get.
>
> Please dive in, the patch process is just like code and fully documented.
> [3] We're on IRC in #openstack-doc and can answer any questions you have as
> you go.
>
> Thanks!
> Anne, Tom, Diane, Laura, Emilien, Daisy, and all the other doc peeps
>
> 1. https://launchpad.net/openstack-manuals/+milestone/grizzly
> 2. https://launchpad.net/openstack-api-site/+milestone/grizzly
> 3. http://wiki.openstack.org/Documentation/HowTo
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Munin plugins for essex (nova, keystone, glance)

2012-05-09 Thread Jérôme Gallard
Very useful!

Thanks,
Jérôme

On Wed, May 9, 2012 at 11:58 AM, Razique Mahroua wrote:

> Amazing work,
> congratulations and thank you.
>
> I'm myself working on Zabbix templates. Maybe we could share our efforts
> and create a monitoring repo for OpenStack (bash scripts, monitoring
> templates, etc.)
>
> What do you guys think?
>
> Nuage & Co - Razique Mahroua
> razique.mahr...@gmail.com
>
>
> Le 9 mai 2012 à 11:31, Abaakouk Mehdi a écrit :
>
> Hi,
>
> I have recently updated and created some munin plugins for nova,
> keystone and glance of the Essex release.
>
> The following metrics are retrieved:
> - glance: total/used size by tenant
> - glance: number of images per status
> - keystone: number of enabled tenants and total tenants
> - nova: total/allocated/associated number of floating IPs
> - nova: number of each kind of nova service available
> - nova: number of launched instances
> - nova: number of instances per power_state/vm_state/vcpus/root_gb... (any
> field in the nova instances table)
>
> Let me know if other interesting metrics could be retrieved, or if
> something is wrong with these.
>
> The plugins can be found here:
> - https://github.com/sileht/openstack-munin
>
> Or directly in munin contrib repo:
> - https://github.com/munin-monitoring/contrib
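>
> For anyone trying them out, enabling one is the standard munin-node
> routine -- a rough sketch (the plugin file name here is illustrative;
> see the repo for the real ones):
>
> git clone https://github.com/sileht/openstack-munin.git
> cp openstack-munin/nova_instances /usr/share/munin/plugins/
> ln -s /usr/share/munin/plugins/nova_instances /etc/munin/plugins/nova_instances
> service munin-node restart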
>
>
> Regards,
>
> --
> Mehdi ABAAKOUK
> sil...@sileht.net
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack and routing configuration

2012-04-19 Thread Jérôme Gallard
Hi everyone,

I would like to know if someone has already tried to set up routing
configurations with nova-network.

From my understanding, nova-network only deals with NAT, but I am
thinking about configurations that directly route my virtual networks
onto my intranet.

I am thinking of developing a new driver for the network manager which
could configure a RIP router (Quagga, ...).
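
To make it concrete, the driver would essentially generate something like
the following ripd configuration, so that the connected virtual networks
get advertised over RIP to the intranet (a rough sketch; the interface
name and prefix are illustrative):

! /etc/quagga/ripd.conf
router rip
 version 2
 network eth0
 network 172.16.0.0/24
 redistribute connected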

Any feedback?

Thanks.

Regards,
Jérôme

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Interaction between nova and melange : ip fixed not found

2012-02-29 Thread Jérôme Gallard
Hi Jason,

Thank you very much for your answer.
The problem with the wrong IP address is solved now! Perhaps this
octet should be excluded automatically by nova at network
creation time?
Regarding the other problem with nova/melange: in fact, I create all
my networks with the nova-manage command:
nova-manage network create --label=public
--project_id=def761d251814aa8a10a1e268206f02d
--fixed_range_v4=172.16.0.0/24 --priority=0 --gateway=172.16.0.1
But it seems that the nova.fixed_ips table is not properly filled.
Thanks again,
Jérôme

On Tue, Feb 28, 2012 at 16:31, Jason Kölker  wrote:
> On Tue, 2012-02-28 at 11:52 +0100, Jérôme Gallard wrote:
>> Hi all,
>>
>> I use the trunk version of Nova, Quantum (with the OVS plugin) and Melange.
>> I created networks, everything seems to be right.
>>
>> I have two questions:
>> - the first VM I boot always takes a wrong IP address (for instance
>> 172.16.0.0). However, when I boot a second VM, that one takes a good
>> IP (for instance 172.16.0.2). Do you know why this happens?
>
> The default melange policy allows assignment of the network address and
> synthesises a gateway address (if one is not specified). It will not hand
> out the gateway address. The "fix" is to create an IP policy that
> excludes octet 0. I think the syntax is something like
> `melange policy create -t {tenant} name={block_name}
> desc={policy_name}` (This should return the policy_id for the next
> command)
>
> `melange unusable_ip_octet create -t {tenant} policy_id={policy_id}
> octet=0`
>
> `melange ip_block update -t {tenant} id={block_id}
> policy_id={policy_id}`
>
>
>> - I have an error regarding a fixed IP not found. Indeed, when I
>> check the nova database, the fixed_ips table is empty, even though I am
>> using quantum and melange and their tables seem to be nicely filled. Do
>> you have an idea about this issue?
>> This is a copy/paste of the error:
>> 2012-02-28 10:45:53 DEBUG nova.rpc.common [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-461788a6-3570-4fa9-8620-6705eb69243c', u'_context_read_deleted': u'no', u'args': {u'address': u'172.16.0.2'}, u'_context_auth_token': None, u'_context_strategy': u'noauth', u'_context_is_admin': True, u'_context_project_id': None, u'_context_timestamp': u'2012-02-28T09:45:53.484445', u'_context_user_id': None, u'method': u'lease_fixed_ip', u'_context_remote_address': None} from (pid=8844) _safe_log /usr/local/src/nova/nova/rpc/common.py:144
>> 2012-02-28 10:45:53 DEBUG nova.rpc.common [req-461788a6-3570-4fa9-8620-6705eb69243c None None] unpacked context: {'request_id': u'req-461788a6-3570-4fa9-8620-6705eb69243c', 'user_id': None, 'roles': [u'admin'], 'timestamp': '2012-02-28T09:45:53.484445', 'is_admin': True, 'auth_token': None, 'project_id': None, 'remote_address': None, 'read_deleted': u'no', 'strategy': u'noauth'} from (pid=8844) unpack_context /usr/local/src/nova/nova/rpc/amqp.py:187
>> 2012-02-28 10:45:53 DEBUG nova.network.manager [req-461788a6-3570-4fa9-8620-6705eb69243c None None] Leased IP |172.16.0.2| from (pid=8844) lease_fixed_ip /usr/local/src/nova/nova/network/manager.py:1186
>> 2012-02-28 10:45:53 ERROR nova.rpc.common [-] Exception during message handling
>> (nova.rpc.common): TRACE: Traceback (most recent call last):
>> (nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/rpc/amqp.py", line 250, in _process_data
>> (nova.rpc.common): TRACE:     rval = node_func(context=ctxt, **node_args)
>> (nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/network/manager.py", line 1187, in lease_fixed_ip
>> (nova.rpc.common): TRACE:     fixed_ip = self.db.fixed_ip_get_by_address(context, address)
>> (nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/db/api.py", line 473, in fixed_ip_get_by_address
>> (nova.rpc.common): TRACE:     return IMPL.fixed_ip_get_by_address(context, address)
>> (nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/db/sqlalchemy/api.py", line 119, in wrapper
>> (nova.rpc.common): TRACE:     return f(*args, **kwargs)
>> (nova.rpc.common): T

[Openstack] Interaction between nova and melange : ip fixed not found

2012-02-28 Thread Jérôme Gallard
Hi all,

I use the trunk version of Nova, Quantum (with the OVS plugin) and Melange.
I created networks, everything seems to be right.

I have two questions:
- the first VM I boot always takes a wrong IP address (for instance
172.16.0.0). However, when I boot a second VM, that one takes a good
IP (for instance 172.16.0.2). Do you know why this happens?

- I have an error regarding a fixed IP not found. Indeed, when I
check the nova database, the fixed_ips table is empty, even though I am
using quantum and melange and their tables seem to be nicely filled. Do
you have an idea about this issue?
This is a copy/paste of the error:
2012-02-28 10:45:53 DEBUG nova.rpc.common [-] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-461788a6-3570-4fa9-8620-6705eb69243c', u'_context_read_deleted': u'no', u'args': {u'address': u'172.16.0.2'}, u'_context_auth_token': None, u'_context_strategy': u'noauth', u'_context_is_admin': True, u'_context_project_id': None, u'_context_timestamp': u'2012-02-28T09:45:53.484445', u'_context_user_id': None, u'method': u'lease_fixed_ip', u'_context_remote_address': None} from (pid=8844) _safe_log /usr/local/src/nova/nova/rpc/common.py:144
2012-02-28 10:45:53 DEBUG nova.rpc.common [req-461788a6-3570-4fa9-8620-6705eb69243c None None] unpacked context: {'request_id': u'req-461788a6-3570-4fa9-8620-6705eb69243c', 'user_id': None, 'roles': [u'admin'], 'timestamp': '2012-02-28T09:45:53.484445', 'is_admin': True, 'auth_token': None, 'project_id': None, 'remote_address': None, 'read_deleted': u'no', 'strategy': u'noauth'} from (pid=8844) unpack_context /usr/local/src/nova/nova/rpc/amqp.py:187
2012-02-28 10:45:53 DEBUG nova.network.manager [req-461788a6-3570-4fa9-8620-6705eb69243c None None] Leased IP |172.16.0.2| from (pid=8844) lease_fixed_ip /usr/local/src/nova/nova/network/manager.py:1186
2012-02-28 10:45:53 ERROR nova.rpc.common [-] Exception during message handling
(nova.rpc.common): TRACE: Traceback (most recent call last):
(nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/rpc/amqp.py", line 250, in _process_data
(nova.rpc.common): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/network/manager.py", line 1187, in lease_fixed_ip
(nova.rpc.common): TRACE:     fixed_ip = self.db.fixed_ip_get_by_address(context, address)
(nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/db/api.py", line 473, in fixed_ip_get_by_address
(nova.rpc.common): TRACE:     return IMPL.fixed_ip_get_by_address(context, address)
(nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/db/sqlalchemy/api.py", line 119, in wrapper
(nova.rpc.common): TRACE:     return f(*args, **kwargs)
(nova.rpc.common): TRACE:   File "/usr/local/src/nova/nova/db/sqlalchemy/api.py", line 1131, in fixed_ip_get_by_address
(nova.rpc.common): TRACE:     raise exception.FixedIpNotFoundForAddress(address=address)
(nova.rpc.common): TRACE: FixedIpNotFoundForAddress: Fixed ip not found for address 172.16.0.2.
(nova.rpc.common): TRACE:

Thank you very much for your help.

Regards,
Jérôme

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp