[ceph-users] Environment variable to configure rbd "-c" parameter and "--keyfile" parameter?

2017-08-21 Thread Stéphane Klein
Hi,

I am looking for environment variables to configure the rbd "-c" parameter and
the "--keyfile" parameter.

I found nothing in http://docs.ceph.com/docs/master/man/8/rbd/
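
A possible workaround, sketched here rather than taken from the man page: most Ceph command-line tools read the CEPH_CONF environment variable for the "-c" path, and anything placed in CEPH_ARGS is prepended to the command line, which can carry "--keyfile". The paths below are illustrative.

```
# Illustrative paths; CEPH_CONF and CEPH_ARGS are read by the Ceph CLI tools
export CEPH_CONF=/etc/ceph/other-cluster.conf
export CEPH_ARGS="--keyfile /etc/ceph/other-cluster.key"
rbd ls   # behaves like: rbd -c /etc/ceph/other-cluster.conf --keyfile /etc/ceph/other-cluster.key ls
```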

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Is it possible to get IO usage (read / write bandwidth) by client or RBD image?

2017-07-20 Thread Stéphane Klein
Hi,

Is it possible to get IO stats (read / write bandwidth) per client or per image?

I see this thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-August/042030.html
and this script
https://github.com/cernceph/ceph-scripts/blob/master/tools/rbd-io-stats.pl

Are there better tools or methods now?
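
For the archive, two sketches rather than definitive answers: on a Jewel-era cluster the per-OSD admin socket exposes IO counters, and much later releases (Nautilus and newer, treated here as an assumption) add a per-image view. The OSD id below is illustrative.

```
# Per-OSD counters via the admin socket (run on the OSD host)
ceph daemon osd.0 perf dump

# Per-image IO statistics, only available in much later releases
rbd perf image iostat
```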

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
2017-06-26 11:48 GMT+02:00 Ashley Merrick :

> You're going across hosts, so each replica will be on a different host.
>

Thanks :)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
2017-06-26 11:15 GMT+02:00 Ashley Merrick :

> Will need to see a full export of your crush map rules.
>

These are my crush map rules:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-storage-rbx-1 {
id -2 # do not change unnecessarily
# weight 10.852
alg straw
hash 0 # rjenkins1
item osd.0 weight 3.617
item osd.2 weight 3.617
item osd.4 weight 3.617
}
host ceph-storage-rbx-2 {
id -3 # do not change unnecessarily
# weight 10.852
alg straw
hash 0 # rjenkins1
item osd.1 weight 3.617
item osd.3 weight 3.617
item osd.5 weight 3.617
}
root default {
id -1 # do not change unnecessarily
# weight 21.704
alg straw
hash 0 # rjenkins1
item ceph-storage-rbx-1 weight 10.852
item ceph-storage-rbx-2 weight 10.852
}

# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

# end crush map
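
For reference, a text dump like the one above is usually produced with the following pair of commands; the file names are illustrative.

```
ceph osd getcrushmap -o crushmap.bin        # export the compiled map
crushtool -d crushmap.bin -o crushmap.txt   # decompile it to text
```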
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
Hi,

I have this OSD:

root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT   TYPE NAME   UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.70432 root default
-2 10.85216     host ceph-storage-rbx-1
 0  3.61739         osd.0                    up  1.00000          1.00000
 2  3.61739         osd.2                    up  1.00000          1.00000
 4  3.61739         osd.4                    up  1.00000          1.00000
-3 10.85216     host ceph-storage-rbx-2
 1  3.61739         osd.1                    up  1.00000          1.00000
 3  3.61739         osd.3                    up  1.00000          1.00000
 5  3.61739         osd.5                    up  1.00000          1.00000

with:

  osd_pool_default_size: 2
  osd_pool_default_min_size: 1

Question: does Ceph always write data to one osd on host1 and the replica to an
osd on host2?
I fear that Ceph sometimes writes data on osd.0 and the replica on osd.2.
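
One way to verify this on a live cluster is to ask where a particular object would be placed; the object name below is illustrative. With the "chooseleaf firstn 0 type host" rule above, the two OSDs in the acting set should always belong to different hosts.

```
# Shows the PG and the acting set (e.g. [0,3]) for a given object
ceph osd map rbd some-object-name
```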

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
2017-06-23 20:44 GMT+02:00 David Turner :

> I doubt the ceph version from 10.2.5 to 10.2.7 makes that big of a
> difference.  Read through the release notes since 10.2.5 to see if it
> mentions anything about cephfs quotas.
>

Yes, same error with 10.2.7 :(
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
2017-06-23 17:59 GMT+02:00 David Turner :

> It might be possible that it doesn't want an absolute path and wants a
> relative path for setfattr, although my version doesn't seem to care.  I
> mention that based on the getfattr response.
>
>
I did the test with a relative path and I get the same error:

root@ceph-test-1:/mnt/cephfs# getfattr -n ceph.quota.max_bytes foo
foo: ceph.quota.max_bytes: No such attribute
root@ceph-test-1:/mnt/cephfs# setfattr -n ceph.quota.max_bytes -v 10 foo
setfattr: foo: Invalid argument

Maybe the difference is the ceph version?

you 10.2.7
me 10.2.5

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
2017-06-23 18:06 GMT+02:00 John Spray :

> I can't immediately remember which version we enabled quota by default
> in -- you might also need to set "client quota = true" in the client's
> ceph.conf.
>
>
Do I need to set this option only on the host where I want to mount the
volume, or on all mds hosts?

What I did:

* umount /mnt/cephfs/
* add these lines:

[client]
client quota = true

to /etc/ceph/ceph.conf

* ceph-fuse /mnt/cephfs/

And I still get these errors:

root@ceph-test-1:/mnt/cephfs# getfattr -n ceph.quota.max_bytes /mnt/cephfs/foo
/mnt/cephfs/foo: ceph.quota.max_bytes: No such attribute
root@ceph-test-1:/mnt/cephfs# cd /mnt/cephfs/
root@ceph-test-1:/mnt/cephfs# getfattr -n ceph.quota.max_bytes foo
foo: ceph.quota.max_bytes: No such attribute
root@ceph-test-1:/mnt/cephfs# setfattr -n ceph.quota.max_bytes -v 10 foo
setfattr: foo: Invalid argument

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
Hi,

I have a CephFS cluster based on Ceph version: 10.2.5
(c461ee19ecbc0c5c330aca20f7392c9a00730367)

I use ceph-fuse to mount CephFS volume on Debian with Ceph version 10.2.5

I would like to set a quota on a CephFS folder:

# setfattr -n ceph.quota.max_bytes -v 10 /mnt/cephfs/foo
setfattr: /mnt/cephfs/foo: Invalid argument

I don't understand, where is my mistake?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Does CephFS support SELinux?

2017-06-22 Thread Stéphane Klein
2017-06-22 11:48 GMT+02:00 John Spray <jsp...@redhat.com>:

> On Thu, Jun 22, 2017 at 10:25 AM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> > Hi,
> >
> > Does CephFS support SELinux?
> >
> > I have this issue with OpenShift (with SELinux) + CephFS:
> > http://lists.openshift.redhat.com/openshift-archives/users/
> 2017-June/msg00116.html
>
> We do test running CephFS server and client bits on machines where
> selinux is enabled, but we don't test doing selinux stuff inside the
> filesystem (setting labels etc).  As far as I know, the comments in
> http://tracker.ceph.com/issues/13231 are still relevant.
>
>
# mount -t ceph ceph-test-1:6789:/ /mnt/mycephfs -o
name=admin,secretfile=/etc/ceph/admin.secret
# touch /mnt/mycephfs/foo
# ls /mnt/mycephfs/ -lZ
-rw-r--r-- root root ?foo
# chcon system_u:object_r:admin_home_t:s0 /mnt/mycephfs/foo
chcon: failed to change context of ‘/mnt/mycephfs/foo’ to
‘system_u:object_r:admin_home_t:s0’: Operation not supported

So SELinux isn't supported on a CephFS volume :(
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Does CephFS support SELinux?

2017-06-22 Thread Stéphane Klein
Hi,

Does CephFS support SELinux?

I have this issue with OpenShift (with SELinux) + CephFS:
http://lists.openshift.redhat.com/openshift-archives/users/2017-June/msg00116.html

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] What package I need to install to have CephFS kernel support on CentOS?

2017-06-16 Thread Stéphane Klein
Hi,

I would like to use the CephFS kernel module on CentOS 7.

I use the Atomic version of CentOS.

I don't know where the CephFS kernel module rpm package is.

I have installed the libcephfs1-10.2.7-0.el7.x86_64 package and I have this:

-bash-4.2# rpm -qvl libcephfs1
lrwxrwxrwx1 rootroot   18 Apr 11 03:51
/usr/lib64/libcephfs.so.1 -> libcephfs.so.1.0.0
-rwxr-xr-x1 rootroot  6237376 Apr 11 04:32
/usr/lib64/libcephfs.so.1.0.0

I use this repo:
https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-ceph-jewel/master/CentOS-Ceph-Jewel.repo

Question: what package do I need to install to have CephFS kernel support on
CentOS?
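
A sketch of how to check: the CephFS kernel client normally ships inside the kernel package itself rather than in a separate rpm (an assumption for CentOS Atomic), so the module can be probed directly. The monitor address and secret below are placeholders.

```
modinfo ceph        # shows the in-kernel CephFS client, if the kernel provides it
modprobe ceph
mount -t ceph mon-host:6789:/ /mnt/cephfs -o name=admin,secret=<key>
```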

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] What is "up:standby"? in ceph mds stat => e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby

2017-06-16 Thread Stéphane Klein
2017-06-16 13:07 GMT+02:00 Daniel Carrasco :

> On MDS nodes, by default only the first one you add is active: the others
> join the cluster as standby MDS daemons. When the active one fails, a
> standby MDS becomes active and continues with the work.
>
>
Thanks. Would it be possible to add this information here
http://docs.ceph.com/docs/master/cephfs/createfs/ to improve the
documentation?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] What is "up:standby"? in ceph mds stat => e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby

2017-06-16 Thread Stéphane Klein
Hi,

I have installed the mdss role with Ansible.

Now, I have this:

root@ceph-test-1:/home/vagrant# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
root@ceph-test-1:/home/vagrant# ceph mds stat
e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby
root@ceph-test-1:/home/vagrant# ceph status
cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
 health HEALTH_OK
 monmap e1: 3 mons at {ceph-test-1=
172.28.128.3:6789/0,ceph-test-2=172.28.128.4:6789/0,ceph-test-3=172.28.128.5:6789/0
}
election epoch 10, quorum 0,1,2
ceph-test-1,ceph-test-2,ceph-test-3
  fsmap e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby
 osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
  pgmap v36: 164 pgs, 3 pools, 2068 bytes data, 20 objects
102 MB used, 10652 MB / 10754 MB avail
 164 active+clean

What is "up:standby" in:

# ceph mds stat
e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
And now :

ceph status
cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
 health HEALTH_OK
 monmap e1: 2 mons at {ceph-storage-rbx-1=
172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
election epoch 4, quorum 0,1
ceph-storage-rbx-1,ceph-storage-rbx-2
 osdmap e21: 6 osds: 6 up, 6 in
flags sortbitwise,require_jewel_osds
  pgmap v60: 160 pgs, 1 pools, 0 bytes data, 0 objects
30924 MB used, 22194 GB / 5 GB avail
 160 active+clean

Thanks, all is perfect!

2017-06-14 17:00 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> And now:
>
> 2017-06-14 17:00 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>> Ok, I missed:
>>
>>  ceph osd pool set rbd pgp_num 160
>>
>> Now I have:
>>
>>  ceph status
>> cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>>  health HEALTH_ERR
>> 9 pgs are stuck inactive for more than 300 seconds
>> 9 pgs stuck inactive
>> 9 pgs stuck unclean
>>  monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.
>> 30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
>> election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storag
>> e-rbx-2
>>  osdmap e21: 6 osds: 6 up, 6 in
>> flags sortbitwise,require_jewel_osds
>>   pgmap v50: 160 pgs, 1 pools, 0 bytes data, 0 objects
>> 30925 MB used, 22194 GB / 5 GB avail
>>  143 active+clean
>>   17 activating
>>
>> 2017-06-14 16:56 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:
>>
>>> 2017-06-14 16:40 GMT+02:00 David Turner <drakonst...@gmail.com>:
>>>
>>>> Once those PG's have finished creating and the cluster is back to normal
>>>>
>>>
>>> How can I see Cluster migration progression?
>>>
>>> Now I have:
>>>
>>> # ceph status
>>> cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>>>  health HEALTH_WARN
>>> pool rbd pg_num 160 > pgp_num 64
>>>  monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.
>>> 30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
>>> election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storag
>>> e-rbx-2
>>>  osdmap e19: 6 osds: 6 up, 6 in
>>> flags sortbitwise,require_jewel_osds
>>>   pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
>>> 30923 MB used, 22194 GB / 5 GB avail
>>>  160 active+clean
>>>
>>>
>>
>>
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
And now:

2017-06-14 17:00 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> Ok, I missed:
>
>  ceph osd pool set rbd pgp_num 160
>
> Now I have:
>
>  ceph status
> cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>  health HEALTH_ERR
> 9 pgs are stuck inactive for more than 300 seconds
> 9 pgs stuck inactive
> 9 pgs stuck unclean
>  monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.
> 30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
> election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-
> storage-rbx-2
>  osdmap e21: 6 osds: 6 up, 6 in
> flags sortbitwise,require_jewel_osds
>   pgmap v50: 160 pgs, 1 pools, 0 bytes data, 0 objects
> 30925 MB used, 22194 GB / 5 GB avail
>  143 active+clean
>   17 activating
>
> 2017-06-14 16:56 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>> 2017-06-14 16:40 GMT+02:00 David Turner <drakonst...@gmail.com>:
>>
>>> Once those PG's have finished creating and the cluster is back to normal
>>>
>>
>> How can I see Cluster migration progression?
>>
>> Now I have:
>>
>> # ceph status
>> cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>>  health HEALTH_WARN
>> pool rbd pg_num 160 > pgp_num 64
>>  monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.
>> 30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
>> election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storag
>> e-rbx-2
>>  osdmap e19: 6 osds: 6 up, 6 in
>>     flags sortbitwise,require_jewel_osds
>>   pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
>> 30923 MB used, 22194 GB / 5 GB avail
>>      160 active+clean
>>
>>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
Ok, I missed:

 ceph osd pool set rbd pgp_num 160

Now I have:

 ceph status
cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
 health HEALTH_ERR
9 pgs are stuck inactive for more than 300 seconds
9 pgs stuck inactive
9 pgs stuck unclean
 monmap e1: 2 mons at {ceph-storage-rbx-1=
172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
election epoch 4, quorum 0,1
ceph-storage-rbx-1,ceph-storage-rbx-2
 osdmap e21: 6 osds: 6 up, 6 in
flags sortbitwise,require_jewel_osds
  pgmap v50: 160 pgs, 1 pools, 0 bytes data, 0 objects
30925 MB used, 22194 GB / 5 GB avail
 143 active+clean
  17 activating
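
A sketch for keeping an eye on the 17 activating PGs: "ceph pg dump_stuck" lists PGs stuck in a non-clean state, and an empty result means everything has gone active+clean.

```
ceph pg dump_stuck            # PGs stuck in a non-clean state
ceph pg dump_stuck inactive   # only those that are not yet active
```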

2017-06-14 16:56 GMT+02:00 Stéphane Klein <cont...@stephane-klein.info>:

> 2017-06-14 16:40 GMT+02:00 David Turner <drakonst...@gmail.com>:
>
>> Once those PG's have finished creating and the cluster is back to normal
>>
>
> How can I see Cluster migration progression?
>
> Now I have:
>
> # ceph status
> cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
>  health HEALTH_WARN
> pool rbd pg_num 160 > pgp_num 64
>  monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.
> 30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
> election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-
> storage-rbx-2
>  osdmap e19: 6 osds: 6 up, 6 in
> flags sortbitwise,require_jewel_osds
>   pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
> 30923 MB used, 22194 GB / 5 GB avail
>  160 active+clean
>
>


-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
2017-06-14 16:40 GMT+02:00 David Turner :

> Once those PG's have finished creating and the cluster is back to normal
>

How can I see the cluster migration progress?

Now I have:

# ceph status
cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
 health HEALTH_WARN
pool rbd pg_num 160 > pgp_num 64
 monmap e1: 2 mons at {ceph-storage-rbx-1=
172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
election epoch 4, quorum 0,1
ceph-storage-rbx-1,ceph-storage-rbx-2
 osdmap e19: 6 osds: 6 up, 6 in
flags sortbitwise,require_jewel_osds
  pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
30923 MB used, 22194 GB / 5 GB avail
 160 active+clean
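
Two common ways to watch this, sketched here: "ceph -w" streams cluster state changes, and "watch" simply re-runs "ceph status" periodically.

```
ceph -w                   # follow health and pg state changes as they happen
watch -n 5 ceph status    # or poll the summary every 5 seconds
```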
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
Hi,

I have this parameter in my Ansible configuration:

pool_default_pg_num: 300 # (100 * 6) / 2 = 300

But I have this error:

# ceph status
cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
 health HEALTH_ERR
73 pgs are stuck inactive for more than 300 seconds
22 pgs degraded
9 pgs peering
64 pgs stale
22 pgs stuck degraded
9 pgs stuck inactive
64 pgs stuck stale
31 pgs stuck unclean
22 pgs stuck undersized
22 pgs undersized
too few PGs per OSD (16 < min 30)
 monmap e1: 2 mons at {ceph-storage-rbx-1=
172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
election epoch 4, quorum 0,1
ceph-storage-rbx-1,ceph-storage-rbx-2
 osdmap e41: 12 osds: 6 up, 6 in; 8 remapped pgs
flags sortbitwise,require_jewel_osds
  pgmap v79: 64 pgs, 1 pools, 0 bytes data, 0 objects
30919 MB used, 22194 GB / 5 GB avail
  33 stale+active+clean
  22 stale+active+undersized+degraded
   9 stale+peering

I have 2 hosts with 3 partitions each, so 3 x 2 OSDs?

Why 16 < min 30? I set pg_num to 300.
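
A sketch of how one would check and correct this by hand, as done later in this thread; the target of 160 is illustrative, and pgp_num must be raised along with pg_num.

```
ceph osd pool get rbd pg_num      # what the pool was actually created with
ceph osd pool set rbd pg_num 160
ceph osd pool set rbd pgp_num 160
```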

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
2017-02-27 20:53 GMT+01:00 Roger Brown :

> replace "master" with the release codename, eg. http://docs.ceph.com/docs/
> kraken/
>
>
Thanks

I suggest adding a list of documentation versions to the http://docs.ceph.com page.

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
Hi,

how can I read the documentation for old Ceph versions?

On http://docs.ceph.com I see only the "master" documentation.

I am looking for the 0.94.5 documentation.

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Stéphane Klein
I see my mistake:

```
 osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs
flags sortbitwise,require_jewel_osds
```
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Stéphane Klein
2017-01-16 12:24 GMT+01:00 Loris Cuoghi <l...@stella-telecom.fr>:

> Hello,
>
> On 16/01/2017 at 11:50, Stéphane Klein wrote:
>
>> Hi,
>>
>> I have two OSD and Mon nodes.
>>
>> I'm going to add third osd and mon on this cluster but before I want to
>> fix this error:
>>
> >
> > [SNIP SNAP]
>
> You've just created your cluster.
>
> With the standard CRUSH rules you need one OSD on three different hosts
> for an active+clean cluster.
>
>
With these parameters:

```
# cat /etc/ceph/ceph.conf
[global]
mon initial members = ceph-rbx-1,ceph-rbx-2
cluster network = 172.29.20.0/24
mon host = 172.29.20.10,172.29.20.11
osd_pool_default_size = 2
osd_pool_default_min_size = 1
public network = 172.29.20.0/24
max open files = 131072
fsid = 

[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be
writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and
allowed by SELinux or AppArmor

[osd]
osd mkfs options xfs = -f -i size=2048
osd mkfs type = xfs
osd journal size = 5120
osd mount options xfs = noatime,largeio,inode64,swalloc
```
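
For a two-host test cluster like this one, a workaround sometimes used, at the cost of losing host-level redundancy, is to relax the failure domain of the replicated rule from host to osd. A sketch of the edited rule, assuming the default replicated_ruleset shown elsewhere in this archive:

```
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type osd   # osd instead of host, for a 2-host test setup
	step emit
}
```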
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to update osd pool default size at runtime?

2017-01-16 Thread Stéphane Klein
2017-01-16 12:47 GMT+01:00 Jay Linux :

> Hello Stephane,
>
> Try this .
>
> $ceph osd pool get <pool-name> size   -->> it will prompt the
> "osd_pool_default_size"
> $ceph osd pool get <pool-name> min_size   -->> it will prompt the
> "osd_pool_default_min_size"
>
> if you want to change it at runtime, trigger the commands below:
>
> $ceph osd pool set <pool-name> size <value>
> $ceph osd pool set <pool-name> min_size <value>
>
>
Ok thanks, it works:

# ceph osd pool get rbd size
size: 2
# ceph osd pool get rbd min_size
min_size: 1

Would it be possible to add a link in the doc to redirect users to the right
documentation section?

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-23 13:03 GMT+01:00 Henrik Korkuc <li...@kirneh.eu>:

> On 16-12-23 12:43, Stéphane Klein wrote:
>
>
> 2016-12-23 11:35 GMT+01:00 Wido den Hollander <w...@42on.com>:
>
>>
>> > Op 23 december 2016 om 10:31 schreef Stéphane Klein <
>> cont...@stephane-klein.info>:
>> >
>> >
>> > 2016-12-22 18:09 GMT+01:00 Wido den Hollander <w...@42on.com>:
>> >
>> > >
>> > > > Op 22 december 2016 om 17:55 schreef Stéphane Klein <
>> > > cont...@stephane-klein.info>:
>> > > >
>> > > >
>> > > > I have this status:
>> > > >
>> > > > bash-4.2# ceph status
>> > > > cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
>> > > >  health HEALTH_WARN
>> > > > pauserd,pausewr,sortbitwise,require_jewel_osds flag(s)
>> set
>> > > >  monmap e1: 3 mons at {ceph-mon-1=
>> > > > 172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-
>> > > mon-3=172.28.128.4:6789/0
>> > > > }
>> > > > election epoch 12, quorum 0,1,2
>> ceph-mon-1,ceph-mon-2,ceph-
>> > > mon-3
>> > > >  osdmap e49: 4 osds: 4 up, 4 in
>> > > > flags pauserd,pausewr,sortbitwise,require_jewel_osds
>> > > >   pgmap v263: 64 pgs, 1 pools, 77443 kB data, 35 objects
>> > > > 281 MB used, 1978 GB / 1979 GB avail
>> > > >   64 active+clean
>> > > >
>> > > > where can I found document about:
>> > > >
>> > > > * pauserd ?
>> > > > * pausewr ?
>> > > >
>> > >
>> > > pauserd: Pause reads
>> > > pauserw: Pause writes
>> > >
>> > > When you set the 'pause' flag it sets both pauserd and pauserw.
>> > >
>> > > When these flags are set all I/O (RD and/or RW) is blocked to clients.
>> >
>> > More information here:
>> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-
>> December/015281.html
>> >
>> > But, I found nothing about Pause in documentation. Why my Ceph Cluster
>> have
>> > switched to pause?
>> >
>>
>> Somebody has set the flag. That doesn't happen automatically.
>>
>
>
> It is a test cluster in Vagrant, there are only one admin: me :)
>
>
>>
>> A admin typed the command to set that flag.
>
>
> I didn't do that, I don't understand.
>
>
> did you run "ceph osd pause" by any chance?
>

No
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Why I don't see "mon osd min down reports" in "config show" report result?

2016-12-23 Thread Stéphane Klein
Hi,

when I execute:

```
root@ceph-mon-1:/home/vagrant# ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep "down"
"mon_osd_adjust_down_out_interval": "true",
"mon_osd_down_out_interval": "300",
"mon_osd_down_out_subtree_limit": "rack",
"mon_pg_check_down_all_threshold": "0.5",
"mon_warn_on_osd_down_out_interval_zero": "true",
"mon_osd_min_down_reporters": "2",
"mds_shutdown_check": "0",
"mds_mon_shutdown_timeout": "5",
"osd_max_markdown_period": "600",
"osd_max_markdown_count": "5",
"osd_mon_shutdown_timeout": "5",
```

I don't see:

mon osd min down reports

Why? This field is present here:
http://docs.ceph.com/docs/hammer/rados/configuration/mon-osd-interaction/
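
A sketch for querying a single option by name through the same admin socket instead of grepping the full dump; an unknown option name simply returns an error, which is one way to tell whether it exists in this release.

```
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config get mon_osd_min_down_reporters
```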

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-23 11:35 GMT+01:00 Wido den Hollander <w...@42on.com>:

>
> > Op 23 december 2016 om 10:31 schreef Stéphane Klein <
> cont...@stephane-klein.info>:
> >
> >
> > 2016-12-22 18:09 GMT+01:00 Wido den Hollander <w...@42on.com>:
> >
> > >
> > > > Op 22 december 2016 om 17:55 schreef Stéphane Klein <
> > > cont...@stephane-klein.info>:
> > > >
> > > >
> > > > I have this status:
> > > >
> > > > bash-4.2# ceph status
> > > > cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
> > > >  health HEALTH_WARN
> > > > pauserd,pausewr,sortbitwise,require_jewel_osds flag(s)
> set
> > > >  monmap e1: 3 mons at {ceph-mon-1=
> > > > 172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-
> > > mon-3=172.28.128.4:6789/0
> > > > }
> > > > election epoch 12, quorum 0,1,2
> ceph-mon-1,ceph-mon-2,ceph-
> > > mon-3
> > > >  osdmap e49: 4 osds: 4 up, 4 in
> > > > flags pauserd,pausewr,sortbitwise,require_jewel_osds
> > > >   pgmap v263: 64 pgs, 1 pools, 77443 kB data, 35 objects
> > > > 281 MB used, 1978 GB / 1979 GB avail
> > > >   64 active+clean
> > > >
> > > > where can I found document about:
> > > >
> > > > * pauserd ?
> > > > * pausewr ?
> > > >
> > >
> > > pauserd: Pause reads
> > > pauserw: Pause writes
> > >
> > > When you set the 'pause' flag it sets both pauserd and pauserw.
> > >
> > > When these flags are set all I/O (RD and/or RW) is blocked to clients.
> >
> > More information here:
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/
> 2016-December/015281.html
> >
> > But, I found nothing about Pause in documentation. Why my Ceph Cluster
> have
> > switched to pause?
> >
>
> Somebody has set the flag. That doesn't happen automatically.
>


It is a test cluster in Vagrant; there is only one admin: me :)


>
> A admin typed the command to set that flag.


I didn't do that, I don't understand.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Why mon_osd_min_down_reporters isn't set to 1 like the default value in documentation? It is a bug?

2016-12-23 Thread Stéphane Klein
Hi,

in documentation
http://docs.ceph.com/docs/hammer/rados/configuration/mon-osd-interaction/,
I see:

* mon osd min down reporters
  Description: The minimum number of Ceph OSD Daemons required to report a
  down Ceph OSD Daemon.
  Type: 32-bit Integer
  Default: 1

I have used https://github.com/ceph/ceph-ansible installation and I have:

root@ceph-mon-1:/home/vagrant# ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep "repor"
"mon_osd_report_timeout": "900",
"mon_osd_min_down_reporters": "2",
"mon_osd_reporter_subtree_level": "host",
"osd_mon_report_interval_max": "600",
"osd_mon_report_interval_min": "5",
"osd_mon_report_max_in_flight": "2",
"osd_pg_stat_report_interval_max": "500",

Why isn't mon_osd_min_down_reporters set to 1, the default value from the
documentation?
Is it a bug?
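
If the ceph-ansible default is what sets this, a hedged sketch of overriding it through the same ceph_conf_overrides mechanism used elsewhere in this archive; whether 1 is a sensible value for a given cluster is a separate question.

```
ceph_conf_overrides:
  global:
    mon_osd_min_down_reporters: 1
```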

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-23 Thread Stéphane Klein
2016-12-23 2:17 GMT+01:00 Jie Wang :

> OPTION(mon_osd_min_down_reporters, OPT_INT, 2)   // number of OSDs from
> different subtrees who need to report a down OSD for it to count
>
>
Yes, it is that:

# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config show |
grep "repor"
"mon_osd_report_timeout": "900",
"mon_osd_min_down_reporters": "2",

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-23 Thread Stéphane Klein
Very interesting documentation about this subject is here:
http://docs.ceph.com/docs/hammer/rados/configuration/mon-osd-interaction/

2016-12-22 12:26 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> I have:
>
> * 3 mon
> * 3 osd
>
> When I shutdown one osd, I work great:
>
> cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
>  health HEALTH_WARN
> 43 pgs degraded
> 43 pgs stuck unclean
> 43 pgs undersized
> recovery 24/70 objects degraded (34.286%)
> too few PGs per OSD (28 < min 30)
> 1/3 in osds are down
>  monmap e1: 3 mons at {ceph-mon-1=172.28.128.2:6789/
> 0,ceph-mon-2=172.28.128.3:6789/0,ceph-mon-3=172.28.128.4:6789/0}
> election epoch 10, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-
> mon-3
>  osdmap e22: 3 osds: 2 up, 3 in; 43 remapped pgs
> flags sortbitwise,require_jewel_osds
>   pgmap v169: 64 pgs, 1 pools, 77443 kB data, 35 objects
> 252 MB used, 1484 GB / 1484 GB avail
> 24/70 objects degraded (34.286%)
>   43 active+undersized+degraded
>   21 active+clean
>
> But, when I shutdown 2 osd, Ceph Cluster don't see that second osd node is
> down :(
>
> root@ceph-mon-1:/home/vagrant# ceph status
> cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
>  health HEALTH_WARN
> clock skew detected on mon.ceph-mon-2
> pauserd,pausewr,sortbitwise,require_jewel_osds flag(s) set
> Monitor clock skew detected
>  monmap e1: 3 mons at {ceph-mon-1=172.28.128.2:6789/
> 0,ceph-mon-2=172.28.128.3:6789/0,ceph-mon-3=172.28.128.4:6789/0}
> election epoch 10, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-
> mon-3
>  osdmap e26: 3 osds: 2 up, 2 in
> flags pauserd,pausewr,sortbitwise,require_jewel_osds
>   pgmap v203: 64 pgs, 1 pools, 77443 kB data, 35 objects
> 219 MB used, 989 GB / 989 GB avail
>   64 active+clean
>
> 2 osd up ! why ?
>
> root@ceph-mon-1:/home/vagrant# ping ceph-osd-1 -c1
> --- ceph-osd-1 ping statistics ---
> 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
>
> root@ceph-mon-1:/home/vagrant# ping ceph-osd-2 -c1
> --- ceph-osd-2 ping statistics ---
> 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
>
> root@ceph-mon-1:/home/vagrant# ping ceph-osd-3 -c1
> --- ceph-osd-3 ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
>
> My configuration:
>
> ceph_conf_overrides:
>global:
>   osd_pool_default_size: 2
>   osd_pool_default_min_size: 1
>
> Full Ansible configuration is here: https://github.com/harobed/
> poc-ceph-ansible/blob/master/vagrant-3mons-3osd/hosts/
> group_vars/all.yml#L11
>
> What is my mistake? Is it Ceph bug?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-22 18:09 GMT+01:00 Wido den Hollander <w...@42on.com>:

>
> > Op 22 december 2016 om 17:55 schreef Stéphane Klein <
> cont...@stephane-klein.info>:
> >
> >
> > Hi,
> >
> > I have this status:
> >
> > bash-4.2# ceph status
> > cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
> >  health HEALTH_WARN
> > pauserd,pausewr,sortbitwise,require_jewel_osds flag(s) set
> >  monmap e1: 3 mons at {ceph-mon-1=
> > 172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-
> mon-3=172.28.128.4:6789/0
> > }
> > election epoch 12, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-
> mon-3
> >  osdmap e49: 4 osds: 4 up, 4 in
> > flags pauserd,pausewr,sortbitwise,require_jewel_osds
> >   pgmap v263: 64 pgs, 1 pools, 77443 kB data, 35 objects
> > 281 MB used, 1978 GB / 1979 GB avail
> >   64 active+clean
> >
> > where can I found document about:
> >
> > * pauserd ?
> > * pausewr ?
> >
>
> pauserd: Pause reads
> pauserw: Pause writes
>
> When you set the 'pause' flag it sets both pauserd and pauserw.
>
> When these flags are set all I/O (RD and/or RW) is blocked to clients.


Thanks.

More information here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/015281.html

But I found nothing about pause in the documentation. Why has my Ceph cluster
switched to pause?

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How can I debug "rbd list" hang?

2016-12-22 Thread Stéphane Klein
2016-12-22 18:07 GMT+01:00 Nick Fisk :

> I think you have probably just answered your previous question. I would
> guess pauserd and pausewr, pauses read and write IO, hence your command to
> list is being blocked on reads.
>
>
>

How can I fix that? Where is the documentation about these two flags?
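
For the record, the pause flags are set and cleared with the osd set/unset commands; a short sketch:

```
ceph osd set pause      # sets both pauserd and pausewr
ceph osd unset pause    # clears them and lets client IO resume
```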
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How can I debug "rbd list" hang?

2016-12-22 Thread Stéphane Klein
Hi,

I have this status:

root@ceph-mon-1:/home/vagrant# ceph status
cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
 health HEALTH_WARN
pauserd,pausewr,sortbitwise,require_jewel_osds flag(s) set
 monmap e1: 3 mons at {ceph-mon-1=
172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-mon-3=172.28.128.4:6789/0
}
election epoch 12, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-mon-3
 osdmap e49: 4 osds: 4 up, 4 in
flags pauserd,pausewr,sortbitwise,require_jewel_osds
  pgmap v266: 64 pgs, 1 pools, 77443 kB data, 35 objects
281 MB used, 1978 GB / 1979 GB avail
  64 active+clean

Why "rbd list" command hang?

How can I debug that?
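
A sketch of two usual debugging angles: raise the client-side debug levels on the command itself, and check whether cluster flags such as pauserd/pausewr are blocking IO. The debug levels below are illustrative.

```
rbd ls --debug-ms 1 --debug-rbd 20   # verbose client-side messenger/rbd logging
ceph status | grep flags             # pauserd/pausewr here would explain the hang
```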

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] What is pauserd and pausewr status?

2016-12-22 Thread Stéphane Klein
Hi,

I have this status:

bash-4.2# ceph status
cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
 health HEALTH_WARN
pauserd,pausewr,sortbitwise,require_jewel_osds flag(s) set
 monmap e1: 3 mons at {ceph-mon-1=
172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-mon-3=172.28.128.4:6789/0
}
election epoch 12, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-mon-3
 osdmap e49: 4 osds: 4 up, 4 in
flags pauserd,pausewr,sortbitwise,require_jewel_osds
  pgmap v263: 64 pgs, 1 pools, 77443 kB data, 35 objects
281 MB used, 1978 GB / 1979 GB avail
  64 active+clean

where can I find documentation about:

* pauserd?
* pausewr?

Nothing comes up in the documentation search engine.

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-22 Thread Stéphane Klein
2016-12-22 12:30 GMT+01:00 Henrik Korkuc :

> try waiting a little longer. Mon needs multiple down reports to take OSD
> down. And as your cluster is very small there is small amount (1 in this
> case) of OSDs to report that others are down.
>
>
Why this limitation? Because my rbd mount on the ceph-client-1 host has been
hanging for 10 minutes already :(
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-22 Thread Stéphane Klein
Hi,

I have:

* 3 mon
* 3 osd

When I shut down one osd, it works great:

cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
 health HEALTH_WARN
43 pgs degraded
43 pgs stuck unclean
43 pgs undersized
recovery 24/70 objects degraded (34.286%)
too few PGs per OSD (28 < min 30)
1/3 in osds are down
 monmap e1: 3 mons at {ceph-mon-1=
172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-mon-3=172.28.128.4:6789/0
}
election epoch 10, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-mon-3
 osdmap e22: 3 osds: 2 up, 3 in; 43 remapped pgs
flags sortbitwise,require_jewel_osds
  pgmap v169: 64 pgs, 1 pools, 77443 kB data, 35 objects
252 MB used, 1484 GB / 1484 GB avail
24/70 objects degraded (34.286%)
  43 active+undersized+degraded
  21 active+clean

But when I shut down 2 osds, the Ceph cluster doesn't see that the second osd
node is down :(

root@ceph-mon-1:/home/vagrant# ceph status
cluster 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
 health HEALTH_WARN
clock skew detected on mon.ceph-mon-2
pauserd,pausewr,sortbitwise,require_jewel_osds flag(s) set
Monitor clock skew detected
 monmap e1: 3 mons at {ceph-mon-1=
172.28.128.2:6789/0,ceph-mon-2=172.28.128.3:6789/0,ceph-mon-3=172.28.128.4:6789/0
}
election epoch 10, quorum 0,1,2 ceph-mon-1,ceph-mon-2,ceph-mon-3
 osdmap e26: 3 osds: 2 up, 2 in
flags pauserd,pausewr,sortbitwise,require_jewel_osds
  pgmap v203: 64 pgs, 1 pools, 77443 kB data, 35 objects
219 MB used, 989 GB / 989 GB avail
  64 active+clean

2 osd up! Why?

root@ceph-mon-1:/home/vagrant# ping ceph-osd-1 -c1
--- ceph-osd-1 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

root@ceph-mon-1:/home/vagrant# ping ceph-osd-2 -c1
--- ceph-osd-2 ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

root@ceph-mon-1:/home/vagrant# ping ceph-osd-3 -c1
--- ceph-osd-3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms

My configuration:

ceph_conf_overrides:
  global:
    osd_pool_default_size: 2
    osd_pool_default_min_size: 1

Full Ansible configuration is here:
https://github.com/harobed/poc-ceph-ansible/blob/master/vagrant-3mons-3osd/hosts/group_vars/all.yml#L11

What is my mistake? Is it a Ceph bug?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread Stéphane Klein
HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized;
recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min
30); 1/3 in osds are down;

Here Ceph says there are 24 objects to move?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How can I ask to Ceph Cluster to move blocks now when osd is down?

2016-12-22 Thread Stéphane Klein
Hi,

How can I ask the Ceph cluster to move blocks now, when an osd is down?
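
The usual answer, sketched here: marking the down OSD "out" makes CRUSH re-place its data immediately instead of waiting for the mon_osd_down_out_interval timeout. The OSD id below is illustrative.

```
ceph osd out 1    # start re-replicating osd.1's data now
ceph -w           # watch the recovery progress
```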

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread Stéphane Klein
Hi,

When I shut down one osd node, where can I see the block movement?
Where can I see the percentage progress?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-22 Thread Stéphane Klein
2016-12-21 23:33 GMT+01:00 Ilya Dryomov :

>
> What if you boot ceph-client-3 with >512M memory, say 2G?
>
>
With:

* 512 M memory => failed
* 1000 M memory => failed
* 1500 M memory => success
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:39 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

>
>
> 2016-12-21 23:33 GMT+01:00 Ilya Dryomov <idryo...@gmail.com>:
>
>> What if you boot ceph-client-3 with >512M memory, say 2G?
>>
>
> Success !
>


Is it possible to add a warning message in rbd to say if memory is too low?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:33 GMT+01:00 Ilya Dryomov :

> What if you boot ceph-client-3 with >512M memory, say 2G?
>

Success !

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:33 GMT+01:00 Ilya Dryomov <idryo...@gmail.com>:

> On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> >
> > 2016-12-21 23:06 GMT+01:00 Ilya Dryomov <idryo...@gmail.com>:
> >>
> >> What's the output of "cat /proc/$(pidof rm)/stack?
> >
> >
> > root@ceph-client-3:/home/vagrant# cat /proc/2315/stack
> > [] sleep_on_page+0xe/0x20
> > [] wait_on_page_bit+0x7f/0x90
> > [] truncate_inode_pages_range+0x2fe/0x5a0
> > [] truncate_inode_pages+0x15/0x20
> > [] ext4_evict_inode+0x12e/0x510
> > [] evict+0xb0/0x1b0
> > [] iput+0xf5/0x180
> > [] do_unlinkat+0x18e/0x2b0
> > [] SyS_unlinkat+0x1b/0x40
> > [] system_call_fastpath+0x1a/0x1f
> > [] 0x
> >
> >>
> >>
> >> Can you do "echo w >/proc/sysrq-trigger", "echo t >/proc/sysrq-trigger"
> >> on the ceph-client VM and attach dmesg output?
> >
> >
> > https://gist.github.com/harobed/37e23ced839f17d91a0e43435348205a
>
> [   73.086952] libceph: loaded (mon/osd proto 15/24)
> [   73.089990] rbd: loaded rbd (rados block device)
> [   73.091532] libceph: mon1 172.28.128.3:6789 feature set mismatch,
> my 4a042a42 < server's 2004a042a42, missing 200
> [   73.092582] libceph: mon1 172.28.128.3:6789 socket error on read
>
> Where is this coming from - I thought you said you set tunables to
> legacy?


This is old message in dmesg.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:06 GMT+01:00 Ilya Dryomov :

> What's the output of "cat /proc/$(pidof rm)/stack?
>

root@ceph-client-3:/home/vagrant# cat /proc/2315/stack
[] sleep_on_page+0xe/0x20
[] wait_on_page_bit+0x7f/0x90
[] truncate_inode_pages_range+0x2fe/0x5a0
[] truncate_inode_pages+0x15/0x20
[] ext4_evict_inode+0x12e/0x510
[] evict+0xb0/0x1b0
[] iput+0xf5/0x180
[] do_unlinkat+0x18e/0x2b0
[] SyS_unlinkat+0x1b/0x40
[] system_call_fastpath+0x1a/0x1f
[] 0x


>
> Can you do "echo w >/proc/sysrq-trigger", "echo t >/proc/sysrq-trigger"
> on the ceph-client VM and attach dmesg output?
>

https://gist.github.com/harobed/37e23ced839f17d91a0e43435348205a

Thanks :)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
> Not sure what's going on here.  Using firefly version of the rbd CLI
> tool isn't recommended of course, but doesn't seem to be _the_ problem.
> Can you try some other distro with an equally old ceph - ubuntu trusty
> perhaps?


Same error with:

* Ubuntu trusty

root@ceph-client-3:/home/vagrant# uname --all
Linux ceph-client-3 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@ceph-client-3:/home/vagrant# rbd --version
ceph version 0.80.11 (8424145d49264624a3b0a204aedb127835161070)

Note, with this OS, I need to configure Ceph Cluster in legacy mode with:

# ceph osd crush tunables legacy

Can I explore another solution?

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 19:51 GMT+01:00 Ilya Dryomov <idryo...@gmail.com>:

> On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> >>
> > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov <idryo...@gmail.com>:
> >>
> >> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
> >> <cont...@stephane-klein.info> wrote:
> >> > I have configured:
> >> >
> >> > ```
> >> > ceph osd crush tunables firefly
> >> > ```
> >>
> >> If it gets to rm, then it's probably not tunables.  Are you running
> >> these commands by hand?
> >
> >
> > Yes, I have executed this command on my mon-1 host
> >
> >>
> >>
> >> Anything in dmesg?
> >
> >
> >
> > This:
> >
> > ```
> > [  614.278589] SELinux: initialized (dev tmpfs, type tmpfs), uses
> transition
> > SIDs
> > [  910.797793] SELinux: initialized (dev tmpfs, type tmpfs), uses
> transition
> > SIDs
> > [ 1126.251705] SELinux: initialized (dev tmpfs, type tmpfs), uses
> transition
> > SIDs
> > [ 1214.030659] Key type dns_resolver registered
> > [ 1214.042308] Key type ceph registered
> > [ 1214.043852] libceph: loaded (mon/osd proto 15/24)
> > [ 1214.045944] rbd: loaded (major 252)
> > [ 1214.053449] libceph: client4200 fsid 7ecb6ebd-2e7a-44c3-bf0d-
> ff8d193e03ac
> > [ 1214.056406] libceph: mon0 172.28.128.2:6789 session established
> > [ 1214.066596]  rbd0: unknown partition table
> > [ 1214.066875] rbd: rbd0: added with size 0x1900
> > [ 1219.120342] EXT4-fs (rbd0): mounted filesystem with ordered data mode.
> > Opts: (null)
> > [ 1219.120754] SELinux: initialized (dev rbd0, type ext4), uses xattr
> > ```
> >
> > If I reboot my client host, and remount this disk then I can delete the
> > folder with "rm -rf" with success.
>
> I'm not following - you are running 7 VMs - 3 mon VMs, 3 osd VMs and
> a "ceph-client" VM.  If ceph-client is debian the test case works


Yes


> if it's something RHEL-based it doesn't, correct?
>

Yes


>
> Are they all on the same bare metal host?  Which of these are you
> rebooting?
>

this is this host in Vagrantfile
https://github.com/harobed/poc-ceph-ansible/blob/master/vagrant-3mons-3osd/Vagrantfile#L61



>
> Try dumping /sys/kernel/debug/ceph//osdc on the
> ceph-client VM when rm hangs.



osdc file is empty:

# cat
/sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/osdc
# cat
/sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/client_options
name=admin,secret=
# cat
/sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/monc
have osdmap 19
# cat
/sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/monmap
epoch 1
mon0172.28.128.2:6789
mon1172.28.128.3:6789
mon2172.28.128.4:6789
# cat
/sys/kernel/debug/ceph/7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac.client4200/osdmap
epoch 19
flags
pool 0 pg_num 64 (63) read_tier -1 write_tier -1
osd0172.28.128.6:6800100%(exists, up)100%
osd1172.28.128.5:6800100%(exists, up)100%
osd2172.28.128.7:6800100%(exists, up)100%

Version informations:

# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
# cat /etc/centos-release-upstream
Derived from Red Hat Enterprise Linux 7.2 (Source)
# uname --all
Linux ceph-client-1 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# ceph --version
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)

Thanks for your help.

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 18:47 GMT+01:00 Ilya Dryomov <idryo...@gmail.com>:

> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein
> <cont...@stephane-klein.info> wrote:
> > I have configured:
> >
> > ```
> > ceph osd crush tunables firefly
> > ```
>
> If it gets to rm, then it's probably not tunables.  Are you running
> these commands by hand?
>

Yes, I have executed this command on my mon-1 host


>
> Anything in dmesg?



This:

```
[  614.278589] SELinux: initialized (dev tmpfs, type tmpfs), uses
transition SIDs
[  910.797793] SELinux: initialized (dev tmpfs, type tmpfs), uses
transition SIDs
[ 1126.251705] SELinux: initialized (dev tmpfs, type tmpfs), uses
transition SIDs
[ 1214.030659] Key type dns_resolver registered
[ 1214.042308] Key type ceph registered
[ 1214.043852] libceph: loaded (mon/osd proto 15/24)
[ 1214.045944] rbd: loaded (major 252)
[ 1214.053449] libceph: client4200 fsid 7ecb6ebd-2e7a-44c3-bf0d-ff8d193e03ac
[ 1214.056406] libceph: mon0 172.28.128.2:6789 session established
[ 1214.066596]  rbd0: unknown partition table
[ 1214.066875] rbd: rbd0: added with size 0x1900
[ 1219.120342] EXT4-fs (rbd0): mounted filesystem with ordered data mode.
Opts: (null)
[ 1219.120754] SELinux: initialized (dev rbd0, type ext4), uses xattr
```

If I reboot my client host, and remount this disk then I can delete the
folder with "rm -rf" with success.

Best regards,
Stéphane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
Same error with an rbd image created with --image-format 1

2016-12-21 14:51 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> I use this Ansible installation: https://github.com/harobed/
> poc-ceph-ansible/tree/master/vagrant-3mons-3osd
>
> I have:
>
> * 3 osd
> * 3 mons
>
> ```
> root@ceph-test-1:/home/vagrant# ceph version
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> ```
>
> ```
> bash-4.2# rbd --version
> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
> ```
>
> ```
> # uname --all
> Linux ceph-client-1 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29
> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> ```
>
> What I do to freeze rbd mounted folder:
>
> ```
> $ vagrant ssh ceph-client-1
> $ sudo su
> # rbd map rbd/image2 --id admin
> # mkdir -p /mnt/image2
> # mount /dev/rbd0 /mnt/image2
> # cd /mnt/image2/
> # curl https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tar.xz -o
> Python-2.7.13.tar.xz
> # tar xf Python-2.7.13.tar.xz
> # rm Python-2.7.13 -rf
> rm ^C^C^C^C^C^C^C^C^C^C
> ```
>
> here, nothing, I can't kill "rm" process.
>
> What can I do? How can I debug that?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
I have configured:

```
ceph osd crush tunables firefly
```

on cluster. After that, same error :(

2016-12-21 15:23 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> No problem with Debian:
>
> ```
> root@ceph-client-2:/mnt/image2# rbd --version
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> root@ceph-client-2:/mnt/image2# uname --all
> Linux ceph-client-2 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-2
> (2015-04-13) x86_64 GNU/Linux
> ```
>
> I need to upgrade rbd on AtomicProject.
>
> 2016-12-21 14:51 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:
>
>> Hi,
>>
>> I use this Ansible installation: https://github.com/harobed/poc
>> -ceph-ansible/tree/master/vagrant-3mons-3osd
>>
>> I have:
>>
>> * 3 osd
>> * 3 mons
>>
>> ```
>> root@ceph-test-1:/home/vagrant# ceph version
>> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
>> ```
>>
>> ```
>> bash-4.2# rbd --version
>> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
>> ```
>>
>> ```
>> # uname --all
>> Linux ceph-client-1 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29
>> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>> ```
>>
>> What I do to freeze rbd mounted folder:
>>
>> ```
>> $ vagrant ssh ceph-client-1
>> $ sudo su
>> # rbd map rbd/image2 --id admin
>> # mkdir -p /mnt/image2
>> # mount /dev/rbd0 /mnt/image2
>> # cd /mnt/image2/
>> # curl https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tar.xz -o
>> Python-2.7.13.tar.xz
>> # tar xf Python-2.7.13.tar.xz
>> # rm Python-2.7.13 -rf
>> rm ^C^C^C^C^C^C^C^C^C^C
>> ```
>>
>> here, nothing, I can't kill "rm" process.
>>
>> What can I do? How can I debug that?
>>
>> Best regards,
>> Stéphane
>> --
>> Stéphane Klein <cont...@stephane-klein.info>
>> blog: http://stephane-klein.info
>> cv : http://cv.stephane-klein.info
>> Twitter: http://twitter.com/klein_stephane
>>
>
>
>
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Question: can I use rbd 0.80.7 with ceph cluster version 10.2.5?

2016-12-21 Thread Stéphane Klein
Hi,

I have this issue:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/015216.html

Question: can I use rbd 0.80.7 with ceph cluster version 10.2.5?

Why do I use this old version? Because I use Atomic Project
http://www.projectatomic.io/
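
A sketch of the compatibility check that usually matters for such an old client: compare the cluster's CRUSH tunables profile with what an 0.80.x client understands, and relax it if needed, as was done elsewhere in this archive.

```
ceph osd crush show-tunables      # what the cluster currently requires
ceph osd crush tunables firefly   # relax to a profile an 0.80.x client can speak
```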

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
No problem with Debian:

```
root@ceph-client-2:/mnt/image2# rbd --version
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
root@ceph-client-2:/mnt/image2# uname --all
Linux ceph-client-2 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-2 (2015-04-13)
x86_64 GNU/Linux
```

I need to upgrade rbd on AtomicProject.

2016-12-21 14:51 GMT+01:00 Stéphane Klein <cont...@stephane-klein.info>:

> Hi,
>
> I use this Ansible installation: https://github.com/harobed/
> poc-ceph-ansible/tree/master/vagrant-3mons-3osd
>
> I have:
>
> * 3 osd
> * 3 mons
>
> ```
> root@ceph-test-1:/home/vagrant# ceph version
> ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
> ```
>
> ```
> bash-4.2# rbd --version
> ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
> ```
>
> ```
> # uname --all
> Linux ceph-client-1 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29
> UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> ```
>
> What I do to freeze rbd mounted folder:
>
> ```
> $ vagrant ssh ceph-client-1
> $ sudo su
> # rbd map rbd/image2 --id admin
> # mkdir -p /mnt/image2
> # mount /dev/rbd0 /mnt/image2
> # cd /mnt/image2/
> # curl https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tar.xz -o
> Python-2.7.13.tar.xz
> # tar xf Python-2.7.13.tar.xz
> # rm Python-2.7.13 -rf
> rm ^C^C^C^C^C^C^C^C^C^C
> ```
>
> here, nothing, I can't kill "rm" process.
>
> What can I do? How can I debug that?
>
> Best regards,
> Stéphane
> --
> Stéphane Klein <cont...@stephane-klein.info>
> blog: http://stephane-klein.info
> cv : http://cv.stephane-klein.info
> Twitter: http://twitter.com/klein_stephane
>



-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
Hi,

I use this Ansible installation:
https://github.com/harobed/poc-ceph-ansible/tree/master/vagrant-3mons-3osd

I have:

* 3 osd
* 3 mons

```
root@ceph-test-1:/home/vagrant# ceph version
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
```

```
bash-4.2# rbd --version
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
```

```
# uname --all
Linux ceph-client-1 3.10.0-327.36.1.el7.x86_64 #1 SMP Sun Sep 18 13:04:29
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
```

What I do to freeze rbd mounted folder:

```
$ vagrant ssh ceph-client-1
$ sudo su
# rbd map rbd/image2 --id admin
# mkdir -p /mnt/image2
# mount /dev/rbd0 /mnt/image2
# cd /mnt/image2/
# curl https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tar.xz -o
Python-2.7.13.tar.xz
# tar xf Python-2.7.13.tar.xz
# rm Python-2.7.13 -rf
rm ^C^C^C^C^C^C^C^C^C^C
```

Here, nothing happens; I can't kill the "rm" process.

What can I do? How can I debug that?

Best regards,
Stéphane
-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd showmapped -p and --image options missing in rbd version 10.2.4, why?

2016-12-09 Thread Stéphane Klein
Hi,

with rbd version 0.80.7, `rbd showmapped` has these options:

*  -p, --pool   source pool name
*  --image      image name

These options are missing in rbd version 10.2.4.

Why? Is it a regression? Is there another command to list mappings by pool
name and image name?
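
A hedged workaround for newer clients: filter the plain "rbd showmapped" output, whose columns are assumed here to be id, pool, image, snap, device.

```
rbd showmapped | awk '$2 == "rbd" && $3 == "image2" {print $5}'
```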

Best regards,
Stéphane

-- 
Stéphane Klein <cont...@stephane-klein.info>
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com