Re: [Openstack] Integrating Ceph Jewel as Storage Backend for OpenStack Newton

2017-03-09 Thread Esteban Freire
Hi Evan, all

I can confirm that integrating Ceph (Jewel) as the storage backend for the 
Glance, Cinder and Nova OpenStack (Newton) services can be done by following 
the steps described at http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/

Just one caveat: the Ceph configuration to apply to the Glance, Cinder and 
Nova services is the one the guide labels for the Juno OpenStack release. I 
think that section should be updated to refer to the Newton release; otherwise 
it can be a bit confusing, or at least it was for me.
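For anyone following the same guide, the Cinder side of that (Juno-labelled) section comes down to a backend definition along these lines in cinder.conf; the option names come from the RBD volume driver, and the pool/user names are just the guide's examples:

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_secret_uuid = <UUID of the libvirt secret defined for client.cinder>
```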

Also, in the section about creating a secret in virsh so that KVM can access 
the Ceph pools, the example secret.xml contains:

    <name>client.cinder secret</name>

According to https://libvirt.org/formatsecret.html, I think <name> should hold 
a single name. It seems to work both ways, but in my opinion the best practice 
is to put a single name here.
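For completeness, the sequence from that section looks roughly like this (a sketch assuming the client.cinder key already exists and libvirt is running; the XML is the guide's, with the two-token name kept as-is):

```shell
# Define a libvirt secret for the client.cinder key and load the key into it.
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# secret-define prints "Secret <uuid> created"; capture the UUID for later use
UUID=$(sudo virsh secret-define --file secret.xml | awk '{print $2}')
sudo virsh secret-set-value --secret "$UUID" \
    --base64 "$(sudo ceph auth get-key client.cinder)"
echo "$UUID"   # this value is what rbd_secret_uuid in cinder.conf must match
```
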

I think that is the only relevant thing to comment on :)

Best regards,
Esteban
  
|  
|   |  
libvirt: Secret XML format
 libvirt, virtualization, virtualization API  |  |

  |

 
 


Re: [Openstack] Integrating Ceph Jewel as Storage Backend for OpenStack Newton

2017-03-07 Thread Esteban Freire
Hi Evan,

Thanks for your answer :) 

Yes, I have defined similar permissions and users. 

After performing some tests, I realized that Glance was not able to read the 
ceph.conf file because I did not have the right variables defined (the Newton 
options are the rbd_store_* ones, not the rbd_* names I had used before).

Now, I can confirm that the Glance (Newton) integration with Ceph (Jewel) works 
following the steps defined on 
http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/ and with the following 
configuration (at least for my use case and in a fresh installation on CentOS 
7):

    [glance_store]
    stores = file,http,rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8
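As a side note, a quick way to double-check that a glance-api.conf actually defines these options is to parse it with Python's configparser (a throwaway helper, nothing Glance-specific):

```python
# Throwaway check: does [glance_store] carry all the options quoted above?
from configparser import ConfigParser

REQUIRED = ("stores", "default_store", "rbd_store_pool",
            "rbd_store_user", "rbd_store_ceph_conf")

def missing_glance_store_options(text):
    """Return the required [glance_store] options absent from the config text."""
    cp = ConfigParser()
    cp.read_string(text)
    if not cp.has_section("glance_store"):
        return list(REQUIRED)
    return [opt for opt in REQUIRED if not cp.has_option("glance_store", opt)]

sample = """\
[glance_store]
stores = file,http,rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
"""

print(missing_glance_store_options(sample))  # prints [] when nothing is missing
```
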

Currently, I am working on Cinder and Nova integration with Ceph. I will let 
you know when I manage to get it working. 

Cheers,
Esteban 


Re: [Openstack] Integrating Ceph Jewel as Storage Backend for OpenStack Newton

2017-03-06 Thread Evan Bollig PhD
Hey Esteban,
I got that same error at one point. Check your file permissions on the
/etc/ceph directory and contents. In particular, make sure the glance
user can access its keyring and the ceph.conf is readable for the
group as well. Here's an example:

drwxr-xr-x.  2 root    root    4.0K Jan 17 12:25 .
drwxr-xr-x. 93 root    root    8.0K Mar  3 14:27 ..
-rw-------.  1 root    root      63 Jan 11 14:00 ceph.client.admin.keyring
-rw-------.  1 cinder  cinder    71 Jan 11 13:59 ceph.client.cinder-backup.keyring
-r--------+  1 cinder  cinder    64 Jan 11 13:59 ceph.client.cinder.keyring
-rw-------.  1 glance  glance    64 Jan 11 13:59 ceph.client.glance.keyring
-r--------+  1 gnocchi gnocchi   65 Jan 17 12:25 ceph.client.gnocchi.keyring
-rw-r--r--.  1 root    root     220 Jan 11 13:59 ceph.conf
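To make this concrete, here is a small sketch of setting those modes; it uses a scratch directory so it is safe to run anywhere (on a real controller the paths would be under /etc/ceph, with a chown glance:glance on the glance keyring):

```shell
# Reproduce the target modes from the listing above in a scratch directory:
# keyrings private to their owner, ceph.conf world-readable.
tmp=$(mktemp -d)
touch "$tmp/ceph.conf" "$tmp/ceph.client.glance.keyring"
chmod 644 "$tmp/ceph.conf"                    # -rw-r--r--  like ceph.conf above
chmod 600 "$tmp/ceph.client.glance.keyring"   # -rw-------  like the keyrings
ls -l "$tmp"
```
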

Let me know how things in Newton are working; we're only as far as
Mitaka at the moment.

Cheers,
-E


--
Evan F. Bollig, PhD
Scientific Computing Consultant, Application Developer | Scientific
Computing Solutions (SCS)
Minnesota Supercomputing Institute | msi.umn.edu
University of Minnesota | umn.edu
boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556



[Openstack] Integrating Ceph Jewel as Storage Backend for OpenStack Newton

2017-03-02 Thread Esteban Freire
Hello all,

I am testing OpenStack Newton on CentOS 7 and I already have an OpenStack 
cloud infrastructure working. Now I would like to integrate Ceph with the 
Cinder, Glance and Nova services.

I have found some information about how to perform this for previous releases 
of OpenStack and Ceph:

http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/

And I tried to update the variables according to 
https://docs.openstack.org/newton/config-reference/block-storage/drivers/ceph-rbd-volume-driver.html

This is my current glance-api.conf, which is working:

    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/


This is what I tried (at the moment, only with the Glance service, and without 
success):

    * Install python-rbd and python-rados from centos-ceph-jewel repo on the 
controller node.
    * Create a ceph user and add it to sudoers.
    * On ceph admin node:
        sudo ceph osd pool create images 150
        sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow 
class-read object_prefix rbd_children, allow rwx pool=images'
        sudo ceph auth get-or-create client.glance | ssh ceph@controller-node1 
sudo tee /etc/ceph/ceph.client.glance.keyring
        ssh ceph@controller-node1 sudo chown glance:glance 
/etc/ceph/ceph.client.glance.keyring
    
    * On the controller node, I edited the glance-api.conf file with the 
following variables:

        [glance_store]
        stores = file,http,rbd
        default_store = rbd
        rbd_pool = images
        rbd_user = glance
        rbd_ceph_conf = /etc/ceph/ceph.conf
        rbd_store_chunk_size = 8
    (!) I also have tried with stores = rbd but without success. 
    
    * And restart the service, systemctl restart openstack-glance-api
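As a sanity check independent of Glance, the client.glance credentials can be exercised directly on the controller (assuming the keyring path created above):

```shell
# Talk to the cluster as client.glance, bypassing Glance entirely.
sudo -u glance ceph -n client.glance \
    --keyring /etc/ceph/ceph.client.glance.keyring -s
# List the images pool; an empty result (no error) means access works.
sudo -u glance rbd -n client.glance \
    --keyring /etc/ceph/ceph.client.glance.keyring ls images
```
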

But when I try to create a new image, I get the following error:

    [openstackadmin@controller-node1 ~]$ openstack image create "cirros ceph" 
--file /home/openstackadmin/cirros-0.3.4-x86_64-disk.raw --disk-format raw 
--container-format bare --public
    500 Internal Server Error
    The server has either erred or is incapable of performing the requested 
operation.
    (HTTP 500)

Is there any documentation about how to integrate Ceph Jewel with OpenStack 
Newton (Cinder, Glance and Nova services)? If so, could you please send me the 
link?

On the other hand, is there any way to choose the store when creating an image? 
I mean, to choose, for example, whether to save the image on 
/var/lib/glance/images/ or on Ceph.

I would appreciate if you could help me to set up this integration. 

This is the most relevant info I can see in the logs, and as far as I can tell, 
I have a permissions error, but I am not sure what I need to modify. It is my 
first OpenStack installation, by the way, and I am trying it at home to see 
how it works :)

{{{
/var/log/glance/api.log

2017-03-01 23:43:41.419 4197 INFO eventlet.wsgi.server 
[req-eea55909-c963-4158-b30c-3f2779fd78c6 c41043a1ddc14ffba1b45c0a3287e0bf 
b2d1547f0e734f87a84feea75ccd6453 - default default] 192.168.56.2 - - 
[01/Mar/2017 23:43:41] "GET /v2/schemas/image HTTP/1.1" 200 4352 0.421281
2017-03-01 23:43:41.570 4197 INFO eventlet.wsgi.server 
[req-f5ef0b43-0ddf-4698-8459-2f75fb1822a3 c41043a1ddc14ffba1b45c0a3287e0bf 
b2d1547f0e734f87a84feea75ccd6453 - default default] 192.168.56.2 - - 
[01/Mar/2017 23:43:41] "POST /v2/images HTTP/1.1" 201 859 0.110681
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data 
[req-98c406fb-7bc0-47de-bace-2b0a2097b699 c41043a1ddc14ffba1b45c0a3287e0bf 
b2d1547f0e734f87a84feea75ccd6453 - default default] Failed to upload image data 
due to internal error
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data Traceback (most 
recent call last):
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data   File 
"/usr/lib/python2.7/site-packages/glance/api/v2/image_data.py", line 115, in 
upload
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data 
image.set_data(data, size)
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data   File 
"/usr/lib/python2.7/site-packages/glance/domain/proxy.py", line 195, in set_data
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data 
self.base.set_data(data, size)
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data   File 
"/usr/lib/python2.7/site-packages/glance/notifier.py", line 479, in set_data
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data 
_send_notification(notify_error, 'image.upload', msg)
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data 
self.force_reraise()
2017-03-01 23:43:41.651 4197 ERROR glance.api.v2.image_data   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-03-01 23:43:41.651 4197