Hi Javier

We have modified the Ceph monitor to take into account the datastore's Ceph pool and also the Ceph user. This is a generic solution that could be useful for other datacenters as well. We have created a pull request on GitHub in case you agree with this change and want to include it in the next release.

https://github.com/OpenNebula/one/pull/27

We have only modified these lines in /var/lib/one/remotes/datastore/ceph/monitor:

--- monitor.orig.190614    2014-06-19 14:35:24.022755989 +0200
+++ monitor    2014-06-19 14:49:34.043187892 +0200
@@ -46,10 +46,12 @@
 while IFS= read -r -d '' element; do
     XPATH_ELEMENTS[i++]="$element"
 done < <($XPATH /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
- /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME)
+ /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
+ /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/CEPH_USER)

 BRIDGE_LIST="${XPATH_ELEMENTS[j++]}"
 POOL_NAME="${XPATH_ELEMENTS[j++]:-$POOL_NAME}"
+CEPH_USER="${XPATH_ELEMENTS[j++]}"

 HOST=`get_destination_host`

@@ -61,7 +63,7 @@
 # ------------ Compute datastore usage -------------

 MONITOR_SCRIPT=$(cat <<EOF
-$RADOS df | $AWK '{
+$RADOS df -p ${POOL_NAME} --id ${CEPH_USER} | $AWK '{
     if (\$1 == "total") {

         space = int(\$3/1024)
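With this change, a datastore that defines, for example, POOL_NAME="one" and CEPH_USER="libvirt" (the values from the rados test below) makes the monitor run

$RADOS df -p one --id libvirt | $AWK ...

instead of a plain "$RADOS df", so the usage is computed for the right pool and with the right credentials.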

CEPH_USER and POOL_NAME should be mandatory when creating the Ceph datastore.
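For reference, a minimal sketch of an image datastore template with both attributes (the BRIDGE_LIST host name below is a placeholder):

NAME        = ceph
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
# placeholder host name, use your Ceph-enabled frontend/node here
BRIDGE_LIST = "cephfrontend"
POOL_NAME   = "one"
CEPH_USER   = "libvirt"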

Cheers
Alvaro


Hi Javi

Thanks a lot for your feedback. Yes, we will modify the current
monitoring scripts to take this into account. This feature could also
be useful if you want to monitor several datastores that use different
Ceph pools and user IDs: you only have to include the ID and pool info
in the ONE datastore template, and the monitoring script will use one
or the other depending on the DS configuration, as in the sketch below.
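A rough sketch (the pool and user names for the second datastore are made up):

# image datastore A
POOL_NAME = "one"
CEPH_USER = "libvirt"

# image datastore B (hypothetical second pool/user)
POOL_NAME = "one-ssd"
CEPH_USER = "libvirt-ssd"

and the monitor will query each pool with its own ID.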


Cheers and thanks!
Alvaro
On 2014-06-17 14:55, Javier Fontan wrote:
CEPH_USER is used when generating the libvirt/kvm deployment file but
not for DS monitoring:

* Deployment file generation:
https://github.com/OpenNebula/one/blob/one-4.6/src/vmm/LibVirtDriverKVM.cc#L461
* Monitoring: 
https://github.com/OpenNebula/one/blob/one-4.6/src/datastore_mad/remotes/ceph/monitor#L64

Ceph is not my area of expertise, but you may need to add those
parameters to the monitor script, and maybe to other scripts that use
the "rados" command. It may also be possible to modify the RADOS
command to include those parameters instead of modifying all the
scripts:

https://github.com/OpenNebula/one/blob/one-4.6/src/mad/sh/scripts_common.sh#L40
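Something along these lines might work there (untested sketch; it assumes POOL_NAME and CEPH_USER are already set, possibly empty, when the command string is built):

# add -p/--id only when the corresponding variable is non-empty
RADOS="rados${POOL_NAME:+ -p $POOL_NAME}${CEPH_USER:+ --id $CEPH_USER}"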

As I said, I don't know much about Ceph; it may be that those
credentials can be set in a config file or similar.

On Tue, Jun 17, 2014 at 11:19 AM, Alvaro Simon Garcia
<alvaro.simongar...@ugent.be> wrote:
Hi

We have included the admin keyring instead of the libvirt user and it
works: we can run rbd or qemu-img without the libvirt ID, but that is
not the best solution. We have included the user in the datastore
configuration:

CEPH_USER="libvirt"

but it seems it is not used by OpenNebula in the end.

Cheers
Alvaro


On 2014-06-17 10:09, Alvaro Simon Garcia wrote:
Hi all


We have added our ONE nodes to the Ceph cluster and cephx auth is
working, but OpenNebula is not able to detect the free space:



$ onedatastore show 103
DATASTORE 103 INFORMATION
ID             : 103
NAME           : ceph
USER           : oneadmin
GROUP          : oneadmin
CLUSTER        : -
TYPE           : IMAGE
DS_MAD         : ceph
TM_MAD         : ceph
BASE PATH      : /var/lib/one//datastores/103
DISK_TYPE      : RBD

DATASTORE CAPACITY
TOTAL:         : 0M
FREE:          : 0M
USED:          : 0M
LIMIT:         : -

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

$ onedatastore list
  ID NAME                SIZE AVAIL CLUSTER      IMAGES TYPE DS       TM
   0 system            114.8G 85%   -                 0 sys  -        shared
   1 default           114.9G 84%   -                 2 img  fs       ssh
   2 files             114.9G 84%   -                 0 fil  fs       ssh
 103 ceph                  0M -     -                 0 img  ceph     ceph

but if we run rados as the oneadmin user:

$  rados df -p one --id libvirt
pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
one             -                          0            0            0            0            0            0            0            0            0
  total used         1581852           37
  total avail   140846865180
  total space   140848447032

It works correctly (we are using the "one" pool and the "libvirt" Ceph ID).

The oned.log only shows this info:
Tue Jun 17 10:06:37 2014 [InM][D]: Monitoring datastore default (1)
Tue Jun 17 10:06:37 2014 [InM][D]: Monitoring datastore files (2)
Tue Jun 17 10:06:37 2014 [InM][D]: Monitoring datastore ceph (103)
Tue Jun 17 10:06:37 2014 [ImM][D]: Datastore default (1) successfully monitored.
Tue Jun 17 10:06:37 2014 [ImM][D]: Datastore files (2) successfully monitored.
Tue Jun 17 10:06:37 2014 [ImM][D]: Datastore ceph (103) successfully monitored.

Any clue about how to debug this issue?

Thanks in advance!
Alvaro



_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org