Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Prashant Shetty
No. I was able to bring up a multi-node setup with different computes using
the devstack stable/ocata branch.
Those nova-manage commands are required only in a multi-node setup;
an all-in-one setup is taken care of by devstack itself.

Did the nova-manage discover_hosts command detect the computes you configured?
If you are not seeing any requests in nova-compute, the problem is
in n-cond or n-sch.

Check the n-sch.log and n-cond logs; you should see some clue about the problem there.
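For example (assuming devstack is writing per-service logs under /opt/stack/logs,
which depends on your LOGDIR/SCREEN_LOGDIR settings), something like this usually
surfaces the failure quickly:

    grep -iE "error|no valid host" /opt/stack/logs/n-sch.log /opt/stack/logs/n-cond.log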

Thanks,
Prashant


On Tue, Mar 14, 2017 at 9:05 PM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> Thanks Prashant,
>
> I have checked that. Does it say anything about running the controller and compute
> on a single node as a solution?
>
> On Tue, Mar 14, 2017 at 8:52 PM, Prashant Shetty <
> prashantshetty1...@gmail.com> wrote:
>
>> A couple of things to check:
>>
>>- On the controller, in nova.conf you should have a [placement] section with
>>the info below:
>>   - [placement]
>>   os_region_name = RegionOne
>>   project_domain_name = Default
>>   project_name = service
>>   user_domain_name = Default
>>   password = 
>>   username = placement
>>   auth_url = 
>>   auth_type = password
>>- If nova service-list shows your nova-compute services as UP and RUNNING, you
>>need to run the discover commands on the controller as below:
>>   - nova-manage cell_v2 map_cell0 --database_connection 
>>   - nova-manage cell_v2 simple_cell_setup --transport-url
>>   
>>   - nova-manage cell_v2 discover_hosts --verbose
>>
>> The discover command should show a message that it has discovered your compute
>> nodes. If the instance launch still fails, check the nova-conductor and
>> nova-scheduler logs for more info.
>>
>> For more information, refer to https://docs.openstack.org/developer/nova/cells.html
>>
>>
>> Thanks,
>>
>> Prashant
>>
>> On Tue, Mar 14, 2017 at 8:33 PM, Vikash Kumar <
>> vikash.ku...@oneconvergence.com> wrote:
>>
>>> That was the weird thing. nova-compute didn't have any error log.
>>> The nova-compute logs also didn't show any instance create request.
>>>
>>> On Tue, Mar 14, 2017 at 7:50 PM, luogangyi@chinamobile <
>>> luogan...@chinamobile.com> wrote:
>>>
>>>> From your log, we can see the nova scheduler has already selected the target
>>>> node, which is u'nfp'.
>>>>
>>>>
>>>> So you should check the nova-compute log from node nfp.
>>>>
>>>>
>>>> Probably, you are stuck at image downloading.
>>>>
>>>>  Original Message
>>>> *From:* Vikash Kumar<vikash.ku...@oneconvergence.com>
>>>> *To:* openstack-dev<openstack-dev@lists.openstack.org>
>>>> *Sent:* Tuesday, March 14, 2017 18:22
>>>> *Subject:* [openstack-dev] [nova][nova-scheduler] Instance boot stuck
>>>> in "Scheduling" state
>>>>
>>>> All,
>>>>
>>>> I brought up a multinode setup with devstack. I am using the Ocata
>>>> release. Instance boots are getting stuck in the "scheduling" state, and the state
>>>> never changes. Below is the link to the scheduler log.
>>>>
>>>> http://paste.openstack.org/show/602635/
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Vikash
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Vikash
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Regards,
> Vikash
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [nova][nova-scheduler] Instance boot stuck in "Scheduling" state

2017-03-14 Thread Prashant Shetty
A couple of things to check:

   - On the controller, in nova.conf you should have a [placement] section with
   the info below:
  - [placement]
  os_region_name = RegionOne
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  password = 
  username = placement
  auth_url = 
  auth_type = password
   - If nova service-list shows your nova-compute services as UP and RUNNING, you
   need to run the discover commands on the controller as below:
  - nova-manage cell_v2 map_cell0 --database_connection 
  - nova-manage cell_v2 simple_cell_setup --transport-url
  
  - nova-manage cell_v2 discover_hosts --verbose

The discover command should show a message that it has discovered your compute
nodes. If the instance launch still fails, check the nova-conductor and
nova-scheduler logs for more info.
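
As a concrete sketch of that section (every value below is a placeholder, not
necessarily what this deployment uses; fill in your own service password and
keystone endpoint):

   [placement]
   os_region_name = RegionOne
   project_domain_name = Default
   project_name = service
   user_domain_name = Default
   password = PLACEMENT_SERVICE_PASSWORD
   username = placement
   auth_url = http://CONTROLLER_IP/identity
   auth_type = password

Once the map_cell0 / simple_cell_setup / discover_hosts commands have been run,
nova-manage cell_v2 list_cells should list cell0 plus the newly mapped cell, and
discover_hosts --verbose should print a "Checking host mapping for compute host"
line for each compute it found.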

For more information, refer to
https://docs.openstack.org/developer/nova/cells.html


Thanks,

Prashant

On Tue, Mar 14, 2017 at 8:33 PM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> That was the weird thing. nova-compute didn't have any error log.
> The nova-compute logs also didn't show any instance create request.
>
> On Tue, Mar 14, 2017 at 7:50 PM, luogangyi@chinamobile <
> luogan...@chinamobile.com> wrote:
>
>> From your log, we can see the nova scheduler has already selected the target
>> node, which is u'nfp'.
>>
>>
>> So you should check the nova-compute log from node nfp.
>>
>>
>> Probably, you are stuck at image downloading.
>>
>>  Original Message
>> *From:* Vikash Kumar
>> *To:* openstack-dev
>> *Sent:* Tuesday, March 14, 2017 18:22
>> *Subject:* [openstack-dev] [nova][nova-scheduler] Instance boot stuck
>> in "Scheduling" state
>>
>> All,
>>
>> I brought up a multinode setup with devstack. I am using the Ocata release.
>> Instance boots are getting stuck in the "scheduling" state, and the state never
>> changes. Below is the link to the scheduler log.
>>
>> http://paste.openstack.org/show/602635/
>>
>>
>> --
>> Regards,
>> Vikash
>>
>>
>>
>
>
> --
> Regards,
> Vikash
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-23 Thread Prashant Shetty
Hi Matt,
I have addressed your comment on the patch and uploaded a new patch to the master
branch.
Could you please check https://review.openstack.org/#/c/437381

Thanks,
Prashant

On Thu, Feb 23, 2017 at 2:34 PM, Prashant Shetty <
prashantshetty1...@gmail.com> wrote:

> Thanks Matt, I found out there was an issue in my nova.conf on the controller:
> the [placement] section was missing from the controller's nova.conf.
> It looks like devstack skips configuring nova.conf if n-cpu is not running.
>
> I have filed https://bugs.launchpad.net/devstack/+bug/1667219 and posted
> fix https://review.openstack.org/#/c/437274/.
> Let me know what you think.
>
> Thanks,
> Prashant
>
> On Wed, Feb 22, 2017 at 8:19 PM, Matt Riedemann <mriede...@gmail.com>
> wrote:
>
>> On 2/22/2017 9:33 AM, Prashant Shetty wrote:
>>
>>> Thanks Matt.
>>>
>>> Here are the steps I have performed. I don't see any error related to
>>> cell0 now, but n-cond is still not able to detect the computes after discover
>>> either :(.
>>>
>>> Do we need any cell settings on the nova-compute nodes as well?
>>>
>>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova service-list
>>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>>> | Id | Binary           | Host          | Zone     | Status  | State | Updated_at             | Disabled Reason |
>>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>>> | 7  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-22T14:23:34.00 | -               |
>>> | 9  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-22T14:23:28.00 | -               |
>>> | 10 | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-22T14:23:33.00 | -               |
>>> | 11 | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-22T14:23:35.00 | -               |
>>> | 12 | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-22T14:23:35.00 | -               |
>>> | 13 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-22T14:23:28.00 | -               |
>>> | 14 | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-22T14:23:28.00 | -               |
>>> | 15 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-22T14:23:32.00 | -               |
>>> | 16 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-22T14:23:32.00 | -               |
>>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$
>>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://root:vmware@127.0.0.1/nova?charset=utf8
>>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 simple_cell_setup --transport-url rabbit://stackrabbit:vmware@60.0.24.49:5672/
>>>
>>> Cell0 is already setup
>>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2
>>> list_cells
>>> +---+--+
>>> |  Name | UUID |
>>> +---+--+
>>> |  None | ea6bec24-058a-4ba2-8d21-57d34c01802c |
>>> | cell0 | ---- |
>>> +---+--+
>>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2
>>> discover_hosts --verbose
>>> Found 2 cell mappings.
>>> Skipping cell0 since it does not contain hosts.
>>> Getting compute nodes from cell: ea6bec24-058a-4ba2-8d21-57d34c01802c
>>> Found 6 computes in cell: ea6bec24-058a-4ba2-8d21-57d34c01802c
>>> Checking host mapping for compute host 'kvm-3':
>>> a4b175d6-f5cc-45a8-9cf2-45726293b5c5
>>> Checking host mapping for compute host 'esx-ubuntu-02':
>>> 70281329-590c-4cb7-8839-fd84160345b7
>>> Checking host mapping for compute host 'esx-ubuntu-03':
>>> 04ea75a2-789e-483e-8d0e-4b0f79e012dc
>>> Checking host mapping for compute host 'k

Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-23 Thread Prashant Shetty
Thanks Matt, I found out there was an issue in my nova.conf on the controller:
the [placement] section was missing from the controller's nova.conf.
It looks like devstack skips configuring nova.conf if n-cpu is not running.
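A quick way to confirm that symptom (assuming the default devstack config
location) is to grep the controller's config; if nothing is printed, the section
is missing:

    grep -A 8 '^\[placement\]' /etc/nova/nova.conf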

I have filed https://bugs.launchpad.net/devstack/+bug/1667219 and posted
fix https://review.openstack.org/#/c/437274/.
Let me know what you think.

Thanks,
Prashant

On Wed, Feb 22, 2017 at 8:19 PM, Matt Riedemann <mriede...@gmail.com> wrote:

> On 2/22/2017 9:33 AM, Prashant Shetty wrote:
>
>> Thanks Matt.
>>
>> Here are the steps I have performed. I don't see any error related to
>> cell0 now, but n-cond is still not able to detect the computes after discover
>> either :(.
>>
>> Do we need any cell settings on the nova-compute nodes as well?
>>
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova service-list
>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>> | Id | Binary           | Host          | Zone     | Status  | State | Updated_at             | Disabled Reason |
>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>> | 7  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-22T14:23:34.00 | -               |
>> | 9  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-22T14:23:28.00 | -               |
>> | 10 | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-22T14:23:33.00 | -               |
>> | 11 | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-22T14:23:35.00 | -               |
>> | 12 | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-22T14:23:35.00 | -               |
>> | 13 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-22T14:23:28.00 | -               |
>> | 14 | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-22T14:23:28.00 | -               |
>> | 15 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-22T14:23:32.00 | -               |
>> | 16 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-22T14:23:32.00 | -               |
>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://root:vmware@127.0.0.1/nova?charset=utf8
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2 simple_cell_setup --transport-url rabbit://stackrabbit:vmware@60.0.24.49:5672/
>>
>> Cell0 is already setup
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2
>> list_cells
>> +---+--+
>> |  Name | UUID |
>> +---+--+
>> |  None | ea6bec24-058a-4ba2-8d21-57d34c01802c |
>> | cell0 | ---- |
>> +---+--+
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$ nova-manage cell_v2
>> discover_hosts --verbose
>> Found 2 cell mappings.
>> Skipping cell0 since it does not contain hosts.
>> Getting compute nodes from cell: ea6bec24-058a-4ba2-8d21-57d34c01802c
>> Found 6 computes in cell: ea6bec24-058a-4ba2-8d21-57d34c01802c
>> Checking host mapping for compute host 'kvm-3':
>> a4b175d6-f5cc-45a8-9cf2-45726293b5c5
>> Checking host mapping for compute host 'esx-ubuntu-02':
>> 70281329-590c-4cb7-8839-fd84160345b7
>> Checking host mapping for compute host 'esx-ubuntu-03':
>> 04ea75a2-789e-483e-8d0e-4b0f79e012dc
>> Checking host mapping for compute host 'kvm-1':
>> dfabae3c-4ea9-4e8f-a496-8880dd9e89d9
>> Checking host mapping for compute host 'kvm-2':
>> d1cb30f5-822c-4c18-81fb-921ca676b834
>> Checking host mapping for compute host 'esx-ubuntu-01':
>> d00f8f16-af6b-437d-8136-bc744eb2472f
>> vmware@cntr11:~/nsbu_cqe_openstack/devstack$
>>
>> ​n-sch:
>> 2017-02-22 14:26:51.467 INFO nova.scheduler.host_manager
>> [req-56d1cefb-1dfb-481d-aaff-b7b6e05f83f0 None None] Successfully synced
>> instances from host 'kvm-2'.
>> 2017-02-22 14:26:51.608 INFO nova.scheduler.host_manager
>> [r

Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-22 Thread Prashant Shetty
7/dist-packages/oslo_messaging/rpc/server.py", line
218, in inner
return func(*args, **kwargs)

  File "/opt/stack/nova/nova/scheduler/manager.py", line 98, in
select_destinations
dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 79, in
select_destinations
raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2017-02-22 14:27:23.425 WARNING nova.scheduler.utils
[req-1085ec50-29f7-4946-81e2-03c1378e8077 alt_demo admin] [instance:
c74f394f-c805-4b5c-ba42-507dfda2c5be] Setting instance to ERROR state.

On Wed, Feb 22, 2017 at 5:44 PM, Matt Riedemann <mriede...@gmail.com> wrote:

> On 2/21/2017 10:38 AM, Prashant Shetty wrote:
>
>> Hi Mark,
>>
>> Thanks for your reply.
>>
>> I tried "nova-manage cell_v2 discover_hosts" and it returned nothing and
>> still I have same issue on the node.
>>
>> Problem seems be the way devstack is getting configured,
>> As code suggest below we create cell0 on node where n-api and n-cpu
>> runs. In my case compute is running only n-cpu and controller is running
>> n-api service, due to this code there are no cell created in controller
>> or compute.
>>
>
> The nova_cell0 database is created here:
>
> https://github.com/openstack-dev/devstack/blob/7a30c7fcabac1
> cf28fd9baa39d05436680616aef/lib/nova#L680
>
> That's the same place that the nova_api database is created.
>
>
>> We will not have this problem in an all-in-one setup.
>> --
>> # Do this late because it requires compute hosts to have started
>> if is_service_enabled n-api; then
>> if is_service_enabled n-cpu; then
>> create_cell
>> else
>> # Some CI systems like Hyper-V build the control plane on
>> # Linux, and join in non Linux Computes after setup. This
>> # allows them to delay the processing until after their whole
>> # environment is up.
>> echo_summary "SKIPPING Cell setup because n-cpu is not enabled.
>> You will have to do this manually before you have a working environment."
>> fi
>> fi
>>
>
> You're correct that when stacking the control node where n-api is running,
> you won't get to the create_cell call:
>
> https://github.com/openstack-dev/devstack/blob/7a30c7fcabac1
> cf28fd9baa39d05436680616aef/stack.sh#L1371
>
> The create_cell function is what creates the cell0 mapping in the nova_api
> database and runs the simple_cell_setup command:
>
> https://github.com/openstack-dev/devstack/blob/7a30c7fcabac1
> cf28fd9baa39d05436680616aef/lib/nova#L943
>
> You're running discover_hosts from the control node where the nova_api
> database lives, so that looks correct.
>
> Can you run discover_hosts with the --verbose option to get some more
> details, i.e. how many cell mappings are there, how many host mappings and
> compute_nodes records are created?
>
> I think the issue is that you haven't run map_cell0 and simple_cell_setup.
> In the gating multinode CI job, the create_cell function in devstack is
> called because that's a 2-node job where n-cpu is running on both nodes,
> but n-api is only running on the control (primary) node. In your case you
> don't have that, so you're going to have to run these commands manually.
>
> The docs here explain how to set this up and the commands to run:
>
> https://docs.openstack.org/developer/nova/cells.html#setup-of-cells-v2
> https://docs.openstack.org/developer/nova/cells.html#fresh-install
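>
> For reference, the manual sequence is just the three cell_v2 commands (the
> connection strings below are placeholders; use the DB and rabbit URLs from your
> own deployment):
>
> nova-manage cell_v2 map_cell0 --database_connection mysql+pymysql://root:DB_PASSWORD@CONTROLLER_IP/nova_cell0?charset=utf8
> nova-manage cell_v2 simple_cell_setup --transport-url rabbit://stackrabbit:RABBIT_PASSWORD@CONTROLLER_IP:5672/
> nova-manage cell_v2 discover_hosts --verbose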
>
>
> ---
>>
>> vmware@cntr11:~$ nova-manage cell_v2 discover_hosts
>> vmware@cntr11:~$ nova service-list
>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>> | Id | Binary           | Host          | Zone     | Status  | State | Updated_at             | Disabled Reason |
>> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
>> | 3  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:13.00 | -               |
>> | 5  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:15.00 | -               |
>> | 6  | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-21T15:34:11.00 | -               |
>> | 7  | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-21T15:34:14.00 | -               |
>> | 8  | nova-compute     | esx-ubuntu-03 | nova     | enabled |

Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-21 Thread Prashant Shetty
Appreciate some help on this issue.

Thanks,
Prashant

On Tue, Feb 21, 2017 at 9:08 PM, Prashant Shetty <
prashantshetty1...@gmail.com> wrote:

> Hi Mark,
>
> Thanks for your reply.
>
> I tried "nova-manage cell_v2 discover_hosts" and it returned nothing and
> still I have same issue on the node.
>
> Problem seems be the way devstack is getting configured,
> As code suggest below we create cell0 on node where n-api and n-cpu runs.
> In my case compute is running only n-cpu and controller is running n-api
> service, due to this code there are no cell created in controller or
> compute.
>
> We will not have this  problem in all-in-one-node setup.
> --
> # Do this late because it requires compute hosts to have started
> if is_service_enabled n-api; then
> if is_service_enabled n-cpu; then
> create_cell
> else
> # Some CI systems like Hyper-V build the control plane on
> # Linux, and join in non Linux Computes after setup. This
> # allows them to delay the processing until after their whole
> # environment is up.
> echo_summary "SKIPPING Cell setup because n-cpu is not enabled.
> You will have to do this manually before you have a working environment."
> fi
> fi
> ---
>
> vmware@cntr11:~$ nova-manage cell_v2 discover_hosts
> vmware@cntr11:~$ nova service-list
> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
> | Id | Binary           | Host          | Zone     | Status  | State | Updated_at             | Disabled Reason |
> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
> | 3  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:13.00 | -               |
> | 5  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:15.00 | -               |
> | 6  | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-21T15:34:11.00 | -               |
> | 7  | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-21T15:34:14.00 | -               |
> | 8  | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-21T15:34:16.00 | -               |
> | 9  | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-21T15:34:07.00 | -               |
> | 10 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-21T15:34:13.00 | -               |
> | 11 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-21T15:34:14.00 | -               |
> | 12 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-21T15:34:09.00 | -               |
> +----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
> vmware@cntr11:~$
> vmware@cntr11:~$ nova-manage cell_v2 list_cells
> +--+--+
> | Name | UUID |
> +--+--+
> +--+--+
> vmware@cntr11:~$
>
>
> Thanks,
> Prashant
>
> On Tue, Feb 21, 2017 at 1:02 AM, Matt Riedemann <mriede...@gmail.com>
> wrote:
>
>> On 2/20/2017 10:31 AM, Prashant Shetty wrote:
>>
>>> Thanks Jay for the response. Sorry, I missed out on copying the right error.
>>>
>>> Here is the log:
>>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No
>>> valid host was found. There are not enough hosts available.
>>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>>> 2017-02-20 14:24:06.217 ERROR nova.conductor.manager
>>> [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping
>>> found for cell0 while trying to record scheduling failure. Setup is
>>> incomplete.
>>>
>>> I tried the command you mentioned; I still see the same error on the conductor.
>>>
>>> As part of stack.sh on the controller I see the below commands were executed
>>> related to "cell". Shouldn't devstack take care of this part
>>> during the initial bringup, or am I missing some parameters in localrc
>>> for it?
>>>
>>> NOTE: I have not explicitly enabled n-cell in localrc
>>>
>>> 2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
>>> +lib/nova:init_nova:683recreate_database nova
>>> +lib/database:recreate_database:112local db=nova
>>> +lib/database:recreate_database:113recreate_database_mysql nova
>>> +

Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-21 Thread Prashant Shetty
Hi Mark,

Thanks for your reply.

I tried "nova-manage cell_v2 discover_hosts" and it returned nothing and
still I have same issue on the node.

Problem seems be the way devstack is getting configured,
As code suggest below we create cell0 on node where n-api and n-cpu runs.
In my case compute is running only n-cpu and controller is running n-api
service, due to this code there are no cell created in controller or
compute.

We will not have this  problem in all-in-one-node setup.
--
# Do this late because it requires compute hosts to have started
if is_service_enabled n-api; then
if is_service_enabled n-cpu; then
create_cell
else
# Some CI systems like Hyper-V build the control plane on
# Linux, and join in non Linux Computes after setup. This
# allows them to delay the processing until after their whole
# environment is up.
echo_summary "SKIPPING Cell setup because n-cpu is not enabled. You
will have to do this manually before you have a working environment."
fi
fi
---

vmware@cntr11:~$ nova-manage cell_v2 discover_hosts
vmware@cntr11:~$ nova service-list
+----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
| Id | Binary           | Host          | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
| 3  | nova-conductor   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:13.00 | -               |
| 5  | nova-scheduler   | cntr11        | internal | enabled | up    | 2017-02-21T15:34:15.00 | -               |
| 6  | nova-consoleauth | cntr11        | internal | enabled | up    | 2017-02-21T15:34:11.00 | -               |
| 7  | nova-compute     | esx-ubuntu-02 | nova     | enabled | up    | 2017-02-21T15:34:14.00 | -               |
| 8  | nova-compute     | esx-ubuntu-03 | nova     | enabled | up    | 2017-02-21T15:34:16.00 | -               |
| 9  | nova-compute     | kvm-3         | nova     | enabled | up    | 2017-02-21T15:34:07.00 | -               |
| 10 | nova-compute     | kvm-2         | nova     | enabled | up    | 2017-02-21T15:34:13.00 | -               |
| 11 | nova-compute     | esx-ubuntu-01 | nova     | enabled | up    | 2017-02-21T15:34:14.00 | -               |
| 12 | nova-compute     | kvm-1         | nova     | enabled | up    | 2017-02-21T15:34:09.00 | -               |
+----+------------------+---------------+----------+---------+-------+------------------------+-----------------+
vmware@cntr11:~$
vmware@cntr11:~$ nova-manage cell_v2 list_cells
+--+--+
| Name | UUID |
+--+--+
+--+--+
vmware@cntr11:~$


Thanks,
Prashant

On Tue, Feb 21, 2017 at 1:02 AM, Matt Riedemann <mriede...@gmail.com> wrote:

> On 2/20/2017 10:31 AM, Prashant Shetty wrote:
>
>> Thanks Jay for the response. Sorry, I missed out on copying the right error.
>>
>> Here is the log:
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No
>> valid host was found. There are not enough hosts available.
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>> 2017-02-20 14:24:06.217 ERROR nova.conductor.manager
>> [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping
>> found for cell0 while trying to record scheduling failure. Setup is
>> incomplete.
>>
>> I tried the command you mentioned; I still see the same error on the conductor.
>>
>> As part of stack.sh on the controller I see the below commands were executed
>> related to "cell". Shouldn't devstack take care of this part
>> during the initial bringup, or am I missing some parameters in localrc
>> for it?
>>
>> NOTE: I have not explicitly enabled n-cell in localrc
>>
>> 2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
>> +lib/nova:init_nova:683recreate_database nova
>> +lib/database:recreate_database:112local db=nova
>> +lib/database:recreate_database:113recreate_database_mysql nova
>> +lib/databases/mysql:recreate_database_mysql:56  local db=nova
>> +lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware
>> -h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
>> +lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware
>> -h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
>> +lib/nova:init_nova:684recreate_database nova_cell0
>> +lib/database:recreate_database:112local db=nova_cell0
>> +lib/database:recreate_database:113recreate_database_mysql
>> nova_cell0
>> +lib/databases/mysql:recreate_database

Re: [openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-20 Thread Prashant Shetty
Thanks Jay for the response. Sorry, I missed out on copying the right error.

Here is the log:
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost: No valid
host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.217 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] No cell mapping
found for cell0 while trying to record scheduling failure. Setup is
incomplete.

I tried the command you mentioned; I still see the same error on the conductor.

As part of stack.sh on the controller I see the below commands were executed related
to "cell". Shouldn't devstack take care of this part during the initial
bringup, or am I missing some parameters in localrc for it?

NOTE: I have not explicitly enabled n-cell in localrc

2017-02-20 14:11:47.510 INFO migrate.versioning.api [-] done
+lib/nova:init_nova:683recreate_database nova
+lib/database:recreate_database:112local db=nova
+lib/database:recreate_database:113recreate_database_mysql nova
+lib/databases/mysql:recreate_database_mysql:56  local db=nova
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware
-h127.0.0.1 -e 'CREATE DATABASE nova CHARACTER SET utf8;'
+lib/nova:init_nova:684recreate_database nova_cell0
+lib/database:recreate_database:112local db=nova_cell0
+lib/database:recreate_database:113recreate_database_mysql
nova_cell0
+lib/databases/mysql:recreate_database_mysql:56  local db=nova_cell0
+lib/databases/mysql:recreate_database_mysql:57  mysql -uroot -pvmware
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS nova_cell0;'
+lib/databases/mysql:recreate_database_mysql:58  mysql -uroot -pvmware
-h127.0.0.1 -e 'CREATE DATABASE nova_cell0 CHARACTER SET utf8;'
+lib/nova:init_nova:689/usr/local/bin/nova-manage
--config-file /etc/nova/nova.conf db sync
WARNING: cell0 mapping not found - not syncing cell0.
2017-02-20 14:11:50.846 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 215 -> 216...
2017-02-20 14:11:54.279 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done
2017-02-20 14:11:54.280 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] 216 -> 217...
2017-02-20 14:11:54.288 INFO migrate.versioning.api
[req-145fe57e-7751-412f-a1f6-06dfbd39b711 None None] done



Thanks,
Prashant

On Mon, Feb 20, 2017 at 8:21 PM, Jay Pipes <jaypi...@gmail.com> wrote:

> On 02/20/2017 09:33 AM, Prashant Shetty wrote:
>
>> Team,
>>
>> I have a multi-node devstack setup with a single controller and multiple
>> computes running stable/ocata.
>>
>> On compute:
>> ENABLED_SERVICES=n-cpu,neutron,placement-api
>>
>> Both the KVM and ESXi computes came up fine:
>> vmware@cntr11:~$ nova hypervisor-list
>>
>>   warnings.warn(msg)
>> +----+----------------------------------------------------+-------+---------+
>> | ID | Hypervisor hostname                                | State | Status  |
>> +----+----------------------------------------------------+-------+---------+
>> | 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
>> | 7  | kvm-1                                              | up    | enabled |
>> +----+----------------------------------------------------+-------+---------+
>> vmware@cntr11:~$
>>
>> All services seem to run fine. When I try to launch an instance I see
>> the below errors in the nova-conductor logs and the instance is stuck in the
>> "scheduling" state forever.
>> I don't have any config related to n-cell on the controller. Could someone
>> help me identify why nova-conductor is complaining about cells.
>>
>> 2017-02-20 14:24:06.128 WARNING oslo_config.cfg
>> [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option
>> "scheduler_default_filters" from group "DEFAULT" is deprecated. Use
>> option "enabled_filters" from group "filter_scheduler".
>> 2017-02-20 14:24:06.211 ERROR nova.conductor.manager
>> [req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to
>> schedule instances
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most
>> recent call last):
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
>> "/opt/stack/nova/nova/conductor/manager.py", line 866, in
>> schedule_and_build_instances
>> 2017-02-20 14:24:06.211 TRACE nova.conductor.manager
>> request_specs[0].to_legacy_filter_properties_dict())

[openstack-dev] [stable/ocata] [nova] [devstack multi-node] nova-conductor complaining about "No cell mapping found for cell0"

2017-02-20 Thread Prashant Shetty
Team,

I have a multi-node devstack setup with a single controller and multiple
computes running stable/ocata.

On compute:
ENABLED_SERVICES=n-cpu,neutron,placement-api
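
For context, a typical compute-node local.conf for this kind of multi-node
devstack looks roughly like the sketch below (not necessarily the exact file used
here; the IPs are placeholders):

[[local|localrc]]
HOST_IP=COMPUTE_IP
SERVICE_HOST=CONTROLLER_IP
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,neutron,placement-api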

Both the KVM and ESXi computes came up fine:
vmware@cntr11:~$ nova hypervisor-list

  warnings.warn(msg)
+----+----------------------------------------------------+-------+---------+
| ID | Hypervisor hostname                                | State | Status  |
+----+----------------------------------------------------+-------+---------+
| 4  | domain-c82529.2fb3c1d7-fe24-49ea-9096-fcf148576db8 | up    | enabled |
| 7  | kvm-1                                              | up    | enabled |
+----+----------------------------------------------------+-------+---------+
vmware@cntr11:~$

All services seem to run fine. When I try to launch an instance I see the below
errors in the nova-conductor logs and the instance is stuck in the "scheduling" state
forever.
I don't have any config related to n-cell on the controller. Could someone help
me identify why nova-conductor is complaining about cells.

2017-02-20 14:24:06.128 WARNING oslo_config.cfg
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Option
"scheduler_default_filters" from group "DEFAULT" is deprecated. Use option
"enabled_filters" from group "filter_scheduler".
2017-02-20 14:24:06.211 ERROR nova.conductor.manager
[req-e17fda8d-0d53-4735-922e-dd635d2ab7c0 admin admin] Failed to schedule
instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent
call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/conductor/manager.py", line 866, in
schedule_and_build_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
request_specs[0].to_legacy_filter_properties_dict())
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/conductor/manager.py", line 597, in
_schedule_instances
2017-02-20 14:24:06.211 TRACE nova.conductor.manager hosts =
self.scheduler_client.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/utils.py", line 371, in wrapped
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return func(*args,
**kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/client/__init__.py", line 51, in
select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
self.queryclient.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/client/__init__.py", line 37, in
__run_method
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
getattr(self.instance, __name)(*args, **kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/client/query.py", line 32, in
select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
self.scheduler_rpcapi.select_destinations(context, spec_obj)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/rpcapi.py", line 129, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return
cctxt.call(ctxt, 'select_destinations', **msg_args)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line
169, in call
2017-02-20 14:24:06.211 TRACE nova.conductor.manager retry=self.retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line
97, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager timeout=timeout,
retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
line 458, in send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager retry=retry)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
line 449, in _send
2017-02-20 14:24:06.211 TRACE nova.conductor.manager raise result
2017-02-20 14:24:06.211 TRACE nova.conductor.manager NoValidHost_Remote: No
valid host was found. There are not enough hosts available.
2017-02-20 14:24:06.211 TRACE nova.conductor.manager Traceback (most recent
call last):
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line
218, in inner
2017-02-20 14:24:06.211 TRACE nova.conductor.manager return func(*args,
**kwargs)
2017-02-20 14:24:06.211 TRACE nova.conductor.manager
2017-02-20 14:24:06.211 TRACE nova.conductor.manager   File
"/opt/stack/nova/nova/scheduler/manager.py", line 98, in select_destinations
2017-02-20 14:24:06.211 TRACE nova.conductor.manager dests =
self.driver.select_destinations(ctxt, spec_obj)