Re: [ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-25 Thread Contacto

Hi Jens,

There's a bug in Cinder that causes it, at least, to report the wrong free
size. If you search a little bit you will find it. I think it's still not
solved.


On 21/08/14 at #4, Jens-Christian Fischer wrote:

I am working with Cinder multi-backend support on an Icehouse installation
and have added another backend (Quobyte) to a previously running Cinder/Ceph
installation.

I can now create Quobyte volumes, but no longer any Ceph volumes. The
cinder-scheduler log shows an incorrect number for the free size of the
volumes pool, and the scheduler disregards the RBD backend as a viable
storage system:

[… scheduler log and cinder.conf snipped; see the original message at the
bottom of this thread …]

any ideas?

cheers
Jens-Christian



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-22 Thread Jens-Christian Fischer
Thanks Greg, a good night's sleep and your eyes made the difference. Here's
the relevant part of /etc/cinder/cinder.conf to make that happen:

[DEFAULT]
...
enabled_backends=quobyte,rbd
default_volume_type=rbd



[quobyte]
volume_backend_name=quobyte
quobyte_volume_url=quobyte://host.example.com/openstack-volumes
volume_driver=cinder.volume.drivers.quobyte.QuobyteDriver

[rbd]
volume_backend_name=rbd
rbd_pool=volumes
rbd_flatten_volume_from_snapshot=False
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=111-222-333-444-555
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
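
For the scheduler to pick a backend, each backend also needs a volume type
whose volume_backend_name extra spec matches the name set above. A minimal
sketch of creating them (our illustration, not from the thread — the type
names are arbitrary, only the extra spec values have to match the config):

    # one volume type per backend, tied to it via the extra spec
    cinder type-create rbd
    cinder type-key rbd set volume_backend_name=rbd
    cinder type-create quobyte
    cinder type-key quobyte set volume_backend_name=quobyte

After that, "cinder create --volume-type rbd 20" should land on the Ceph
backend.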

cheers
jc

-- 
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch

http://www.switch.ch/stories

On 21.08.2014, at 17:55, Gregory Farnum wrote:

> I don't know much about Cinder, but given this output:
>
> [… quoted problem report snipped; see Greg's message below …]
>
> I suspect you'll have better luck on the OpenStack mailing list. :)
>
> Although for a random quick guess, I think maybe you need to match the
> "rbd" and "rbd-volumes" (from your conf file) strings?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-21 Thread Gregory Farnum
On Thu, Aug 21, 2014 at 8:29 AM, Jens-Christian Fischer
 wrote:
> I am working with Cinder multi-backend support on an Icehouse installation
> and have added another backend (Quobyte) to a previously running
> Cinder/Ceph installation.
>
> I can now create Quobyte volumes, but no longer any Ceph volumes. The
> cinder-scheduler log shows an incorrect number for the free size of the
> volumes pool, and the scheduler disregards the RBD backend as a viable
> storage system:

I don't know much about Cinder, but given this output:

> 2014-08-21 16:42:49.847 1469 DEBUG 
> cinder.openstack.common.scheduler.filters.capabilities_filter [r...] 
> extra_spec requirement 'rbd' does not match 'quobyte' _satisfies_extra_specs 
> /usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py:55
> 2014-08-21 16:42:49.848 1469 DEBUG 
> cinder.openstack.common.scheduler.filters.capabilities_filter [r...] host 
> 'controller@quobyte': free_capacity_gb: 156395.931061 fails resource_type 
> extra_specs requirements host_passes 
> /usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py:68
> 2014-08-21 16:42:49.848 1469 WARNING cinder.scheduler.filters.capacity_filter 
> [r...-] Insufficient free space for volume creation (requested / avail): 
> 20/8.0
> 2014-08-21 16:42:49.849 1469 ERROR cinder.scheduler.flows.create_volume [r.] 
> Failed to schedule_create_volume: No valid host was found.

I suspect you'll have better luck on the OpenStack mailing list. :)

Although for a random quick guess, I think maybe you need to match the
"rbd" and "rbd-volumes" (from your conf file) strings?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
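
(That guess was indeed the fix — see Jens-Christian's follow-up above. The
entries in enabled_backends name config sections, so the alternative to
renaming the section is to reference it as-is; a sketch of that variant,
ours, not from the thread:

    [DEFAULT]
    enabled_backends=quobyte,rbd-volumes

    [rbd-volumes]
    volume_backend_name=rbd-volumes
    ...

Any volume type pointing at this backend then needs
volume_backend_name=rbd-volumes in its extra specs as well.)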


>
> here's our /etc/cinder/cinder.conf
>
> [… cinder.conf and signature snipped; see the original message below …]
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-21 Thread Jens-Christian Fischer
I am working with Cinder multi-backend support on an Icehouse installation
and have added another backend (Quobyte) to a previously running Cinder/Ceph
installation.

I can now create Quobyte volumes, but no longer any Ceph volumes. The
cinder-scheduler log shows an incorrect number for the free size of the
volumes pool, and the scheduler disregards the RBD backend as a viable
storage system:

2014-08-21 16:42:49.847 1469 DEBUG 
cinder.openstack.common.scheduler.filters.capabilities_filter [r...] extra_spec 
requirement 'rbd' does not match 'quobyte' _satisfies_extra_specs 
/usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py:55
2014-08-21 16:42:49.848 1469 DEBUG 
cinder.openstack.common.scheduler.filters.capabilities_filter [r...] host 
'controller@quobyte': free_capacity_gb: 156395.931061 fails resource_type 
extra_specs requirements host_passes 
/usr/lib/python2.7/dist-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py:68
2014-08-21 16:42:49.848 1469 WARNING cinder.scheduler.filters.capacity_filter 
[r...-] Insufficient free space for volume creation (requested / avail): 20/8.0
2014-08-21 16:42:49.849 1469 ERROR cinder.scheduler.flows.create_volume [r.] 
Failed to schedule_create_volume: No valid host was found.
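
(A quick sanity check here — our suggestion, not part of the original post —
is to list the cinder-volume services; with multi-backend, each entry in
enabled_backends registers a separate service host of the form
<hostname>@<backend-section>:

    # expect one cinder-volume row per configured backend
    cinder service-list

If controller@rbd is absent while controller@quobyte is present, the rbd
backend never came up, which would explain the scheduler finding no valid
host.)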

here’s our /etc/cinder/cinder.conf

— cut —
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
# iscsi_helper = tgtadm
volume_name_template = volume-%s
# volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rabbit_host=10.2.0.10
use_syslog=False
api_paste_config=/etc/cinder/api-paste.ini
glance_num_retries=0
debug=True
storage_availability_zone=nova
glance_api_ssl_compression=False
glance_api_insecure=False
rabbit_userid=openstack
rabbit_use_ssl=False
log_dir=/var/log/cinder
osapi_volume_listen=0.0.0.0
glance_api_servers=1.2.3.4:9292
rabbit_virtual_host=/
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
default_availability_zone=nova
rabbit_hosts=10.2.0.10:5672
control_exchange=openstack
rabbit_ha_queues=False
glance_api_version=2
amqp_durable_queues=False
rabbit_password=secret
rabbit_port=5672
rpc_backend=cinder.openstack.common.rpc.impl_kombu
enabled_backends=quobyte,rbd
default_volume_type=rbd

[database]
idle_timeout=3600
connection=mysql://cinder:secret@10.2.0.10/cinder

[quobyte]
quobyte_volume_url=quobyte://hostname.cloud.example.com/openstack-volumes
volume_driver=cinder.volume.drivers.quobyte.QuobyteDriver

[rbd-volumes]
volume_backend_name=rbd-volumes
rbd_pool=volumes
rbd_flatten_volume_from_snapshot=False
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=1234-5678-ABCD-…-DEF
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver

— cut —

any ideas?

cheers
Jens-Christian

-- 
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch

http://www.switch.ch/stories

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com