Hi Fran,

Here is my cinder.conf file; could you please help me analyze it?

Do I need to create a volume group, as described in this guide?
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
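
If I am reading that guide correctly, the step it describes is roughly the
following (just a sketch; /dev/sdb is the guide's example device, not one of
my disks), and my understanding is that it only applies to the default LVM
backend:

# create the LVM physical volume and the cinder-volumes group (per the guide)
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

So with enabled_backends = ceph I am not sure this is needed at all; please
correct me if that is wrong.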


[root@OSKVM1 ~]# grep -v "^#" /etc/cinder/cinder.conf|grep -v ^$

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.24.0.4

notification_driver = messagingv2

backup_ceph_conf = /etc/ceph/ceph.conf

backup_ceph_user = cinder-backup

backup_ceph_chunk_size = 134217728

backup_ceph_pool = backups

backup_ceph_stripe_unit = 0

backup_ceph_stripe_count = 0

restore_discard_excess_bytes = true

backup_driver = cinder.backup.drivers.ceph

glance_api_version = 2

enabled_backends = ceph

rbd_pool = volumes

rbd_user = cinder

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

volume_driver = cinder.volume.drivers.rbd.RBDDriver

[BRCD_FABRIC_EXAMPLE]

[CISCO_FABRIC_EXAMPLE]

[cors]

[cors.subdomain]

[database]

connection = mysql://cinder:cinder@controller/cinder

[fc-zone-manager]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder

[matchmaker_redis]

[matchmaker_ring]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = XXXX

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[profiler]
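
In case it helps, here is how I think the RBD options above would look if I
moved them out of [DEFAULT] into a [ceph] backend section matching
enabled_backends = ceph (just a sketch based on the rbd-openstack guide,
reusing the values already in my config):

[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941

Is that the right direction, or is keeping them under [DEFAULT] acceptable?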

On Thu, Jul 7, 2016 at 11:38 AM, Fran Barrera <franbarre...@gmail.com>
wrote:

> Hello,
>
> Have you configured these two parameters in cinder.conf?
>
> rbd_user
> rbd_secret_uuid
>
> Regards.
>
> 2016-07-07 15:39 GMT+02:00 Gaurav Goyal <er.gauravgo...@gmail.com>:
>
>> Hello Mr. Kees,
>>
>> Thanks for your response!
>>
>> My setup is
>>
>> OpenStack node 1 -> controller + network + compute1 (Liberty version)
>> OpenStack node 2 -> compute2
>>
>> Ceph version Hammer
>>
>> I am using Dell SAN storage with the following status.
>>
>> The Dell SAN storage is attached to both hosts as follows:
>>
>> [root@OSKVM1 ~]# iscsiadm -m node
>>
>> 10.35.0.3:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-47700000018575af-vol1
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-47700000018575af-vol1
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-7290000002157606-vol2
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-7290000002157606-vol2
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a000000245761a-vol3
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a000000245761a-vol3
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-927000000275761a-vol4
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-927000000275761a-vol4
>>
>>
>> Since the same LUNs are mapped to both hosts in my setup, I chose two LUNs
>> on OpenStack node 1 and two on OpenStack node 2.
>>
>>
>> Node 1 has:
>>
>> /dev/sdc1                2.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-0
>>
>> /dev/sdd1                2.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-1
>>
>> Node 2 has:
>>
>> /dev/sdd1                2.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-2
>>
>> /dev/sde1                2.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-3
>>
>> [root@OSKVM1 ~]# ceph status
>>
>>     cluster 9f923089-a6c0-4169-ace8-ad8cc4cca116
>>
>>      health HEALTH_WARN
>>
>>             mon.OSKVM1 low disk space
>>
>>      monmap e1: 1 mons at {OSKVM1=10.24.0.4:6789/0}
>>
>>             election epoch 1, quorum 0 OSKVM1
>>
>>      osdmap e40: 4 osds: 4 up, 4 in
>>
>>       pgmap v1154: 576 pgs, 5 pools, 6849 MB data, 860 objects
>>
>>             13857 MB used, 8154 GB / 8168 GB avail
>>
>>              576 active+clean
>>
>> Can you please confirm whether this is the correct configuration for my
>> setup?
>>
>> After this setup, I am trying to configure Cinder and Glance to use RBD
>> as a backend, following this link:
>> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>>
>> I have managed to store the Glance image in RBD, but I am running into an
>> issue with the Cinder configuration. Can you please help me with this?
>> As per the link, I need to configure these parameters under a [ceph]
>> section, but my cinder.conf does not have a separate [ceph] section; in
>> fact, all of these parameters sit under [DEFAULT]. Is it OK to configure
>> them under [DEFAULT]?
>> CONFIGURING CINDER
>> <http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder>
>>
>> OpenStack requires a driver to interact with Ceph block devices. You must
>> also specify the pool name for the block device. On your OpenStack node,
>> edit /etc/cinder/cinder.conf by adding:
>>
>> [DEFAULT]
>> ...
>> enabled_backends = ceph
>> ...
>> [ceph]
>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>> rbd_pool = volumes
>> rbd_ceph_conf = /etc/ceph/ceph.conf
>> rbd_flatten_volume_from_snapshot = false
>> rbd_max_clone_depth = 5
>> rbd_store_chunk_size = 4
>> rados_connect_timeout = -1
>> glance_api_version = 2
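>>
>> If cephx authentication is in use (as it is in my cluster), I believe the
>> same guide also adds the Ceph client user and the libvirt secret UUID to
>> that section; sketched here with the values from my own setup:
>>
>> [ceph]
>> ...
>> rbd_user = cinder
>> rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941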
>>
>> I see the following errors in the cinder-volume service status:
>>
>> systemctl status openstack-cinder-volume.service
>>
>> Jul 07 09:37:01 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:01.058
>> 136259 ERROR cinder.service [-] Manager for service cinder-volume
>> OSKVM1@ceph is reporting problems, not sending heartbeat. Service will
>> appear "down".
>>
>> Jul 07 09:37:02 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:02.040
>> 136259 WARNING cinder.volume.manager
>> [req-561ddd3c-9560-4374-a958-7a2c103af7ee - - - - -] Update driver status
>> failed: (config name ceph) is uninitialized.
>>
>> Jul 07 09:37:11 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:11.059
>> 136259 ERROR cinder.service [-] Manager for service cinder-volume
>> OSKVM1@ceph is reporting problems, not sending heartbeat. Service will
>> appear "down".
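>>
>> As a sanity check (assuming the client.cinder keyring is installed under
>> /etc/ceph/ on this node, and that volume.log is the default log location
>> for this RDO install), I was going to verify the Ceph side roughly like
>> this:
>>
>> # check that client.cinder can reach the volumes pool
>> rbd --id cinder ls volumes
>> # follow the cinder-volume log for driver initialization errors
>> tail -f /var/log/cinder/volume.log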
>>
>>
>>
>> [root@OSKVM2 ~]# rbd -p images ls
>>
>> a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f
>>
>> [root@OSKVM2 ~]# rados df
>>
>> pool name       KB          objects  clones  degraded  unfound  rd    rd KB  wr    wr KB
>> backups         0           0        0       0         0        0     0      0     0
>> images          7013377     860      0       0         0        9486  7758   2580  7013377
>> rbd             0           0        0       0         0        0     0      0     0
>> vms             0           0        0       0         0        0     0      0     0
>> volumes         0           0        0       0         0        0     0      0     0
>>
>>   total used       14190236          860
>>   total avail    8550637828
>>   total space    8564828064
>>
>>
>>
>>
>> [root@OSKVM2 ~]# ceph auth list
>>
>> installed auth entries:
>>
>>
>> mds.OSKVM1
>>
>> key: AQCK6XtXNBFdDBAAXmX73gBqK3lyakSxxP+XjA==
>>
>> caps: [mds] allow
>>
>> caps: [mon] allow profile mds
>>
>> caps: [osd] allow rwx
>>
>> osd.0
>>
>> key: AQAB4HtX7q27KBAAEqcuJXwXAJyD6a1Qu/MXqA==
>>
>> caps: [mon] allow profile osd
>>
>> caps: [osd] allow *
>>
>> osd.1
>>
>> key: AQC/4ntXFJGdFBAAADYH03iQTF4jWI1LnBZeJg==
>>
>> caps: [mon] allow profile osd
>>
>> caps: [osd] allow *
>>
>> osd.2
>>
>> key: AQCa43tXr12fDhAAzbq6FO2+8m9qg1B12/99Og==
>>
>> caps: [mon] allow profile osd
>>
>> caps: [osd] allow *
>>
>> osd.3
>>
>> key: AQA/5HtXDNfcLxAAJWawgxc1nd8CB+4uH/8fdQ==
>>
>> caps: [mon] allow profile osd
>>
>> caps: [osd] allow *
>>
>> client.admin
>>
>> key: AQBNknJXE/I2FRAA+caW02eje7GZ/uv1O6aUgA==
>>
>> caps: [mds] allow
>>
>> caps: [mon] allow *
>>
>> caps: [osd] allow *
>>
>> client.bootstrap-mds
>>
>> key: AQBOknJXjLloExAAGjMRfjp5okI1honz9Nx4wg==
>>
>> caps: [mon] allow profile bootstrap-mds
>>
>> client.bootstrap-osd
>>
>> key: AQBNknJXDUMFKBAAZ8/TfDkS0N7Q6CbaOG3DyQ==
>>
>> caps: [mon] allow profile bootstrap-osd
>>
>> client.bootstrap-rgw
>>
>> key: AQBOknJXQAUiABAA6IB4p4RyUmrsxXk+pv4u7g==
>>
>> caps: [mon] allow profile bootstrap-rgw
>>
>> client.cinder
>>
>> key: AQCIAHxX9ga8LxAAU+S3Vybdu+Cm2bP3lplGnA==
>>
>> caps: [mon] allow r
>>
>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>> pool=volumes, allow rwx pool=vms, allow rx pool=images
>>
>> client.cinder-backup
>>
>> key: AQCXAHxXAVSNKhAAV1d/ZRMsrriDOt+7pYgJIg==
>>
>> caps: [mon] allow r
>>
>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>> pool=backups
>>
>> client.glance
>>
>> key: AQCVAHxXupPdLBAA7hh1TJZnvSmFSDWbQiaiEQ==
>>
>> caps: [mon] allow r
>>
>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>> pool=images
>>
>>
>> Regards
>>
>> Gaurav Goyal
>>
>> On Thu, Jul 7, 2016 at 2:54 AM, Kees Meijs <k...@nefos.nl> wrote:
>>
>>> Hi Gaurav,
>>>
>>> Unfortunately I'm not completely sure about your setup, but I guess it
>>> makes sense to configure Cinder and Glance to use RBD as a backend. It
>>> seems to me that you're trying to store VM images directly on an OSD
>>> filesystem.
>>>
>>> Please refer to http://docs.ceph.com/docs/master/rbd/rbd-openstack/ for
>>> details.
>>>
>>> Regards,
>>> Kees
>>>
>>> On 06-07-16 23:03, Gaurav Goyal wrote:
>>> >
>>> > I am installing ceph hammer and integrating it with openstack Liberty
>>> > for the first time.
>>> >
>>> > My local disk has only 500 GB but I need to create a 600 GB VM, so I
>>> > have created a soft link to the Ceph filesystem:
>>> >
>>> > lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances -> /var/lib/ceph/osd/ceph-0/instances
>>> > [root@OSKVM1 nova]# pwd
>>> > /var/lib/nova
>>> > [root@OSKVM1 nova]#
>>> >
>>> > Now when I try to create an instance, it fails with the following
>>> > error (as seen in nova-compute.log).
>>> > I need your help to fix this issue.
>>> >
>>>
>>>
>>
>>
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
