Re: [ceph-users] How to change setting for tunables "require_feature_tunables5"

2016-05-12 Thread Andrey Shevel
Great!

You are a marvelous ceph expert!

I did

[ceph@ceph-client ~]$ rbd feature disable mycephrbd
deep-flatten,fast-diff,object-map,exclusive-lock
[ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile
/etc/ceph/admin.key
/dev/rbd0

I also added the recommended line to ceph.conf on the client side.
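
For anyone hitting the same thing: a quick way to verify the mapping and
put the device to use (a sketch; the filesystem and mount point are
examples, not from this thread):

$ rbd showmapped                   # confirm mycephrbd -> /dev/rbd0
$ sudo mkfs.xfs /dev/rbd0          # example only: create a filesystem
$ sudo mount /dev/rbd0 /mnt/myrbd  # example only: hypothetical mount point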





On Thu, May 12, 2016 at 5:47 PM, Ilya Dryomov  wrote:
> On Thu, May 12, 2016 at 4:37 PM, Andrey Shevel  
> wrote:
>> Thanks a lot.
>>
>> I tried
>>
>> [ceph@ceph-client ~]$ ceph osd crush tunables hammer
>>
>> Now I have
>>
>>
>> [ceph@ceph-client ~]$ ceph osd crush show-tunables
>> {
>> "choose_local_tries": 0,
>> "choose_local_fallback_tries": 0,
>> "choose_total_tries": 50,
>> "chooseleaf_descend_once": 1,
>> "chooseleaf_vary_r": 1,
>> "chooseleaf_stable": 0,
>> "straw_calc_version": 1,
>> "allowed_bucket_algs": 54,
>> "profile": "hammer",
>> "optimal_tunables": 0,
>> "legacy_tunables": 0,
>> "minimum_required_version": "firefly",
>> "require_feature_tunables": 1,
>> "require_feature_tunables2": 1,
>> "has_v2_rules": 0,
>> "require_feature_tunables3": 1,
>> "has_v3_rules": 0,
>> "has_v4_buckets": 0,
>> "require_feature_tunables5": 0,
>> "has_v5_rules": 0
>> }
>>
>>
>>
>> and the mount command for cephfs now works fine
>>
>> sudo mount -t ceph 10.10.1.11:6789:/ /mnt/mycephfs -o
>> name=admin,secretfile=/etc/ceph/admin.key
>>
>> [ceph@ceph-client ~]$ mount | grep ceph
>> 10.10.1.11:6789:/ on /mnt/mycephfs type ceph
>> (rw,relatime,name=admin,secret=<hidden>)
>>
>> It was NOT working before!
>>
>>
>> However,  still :-(
>>
>> [ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile
>> /etc/ceph/admin.key
>> rbd: sysfs write failed
>> rbd: map failed: (6) No such device or address
>
> Do
>
> $ rbd feature disable mycephrbd
> deep-flatten,fast-diff,object-map,exclusive-lock
>
> to disable features unsupported by the kernel client.  If you are using the
> kernel client, you should create your images with
>
> $ rbd create --size <size> --image-feature layering <image-name>
>
> or add
>
> rbd default features = 3
>
> to ceph.conf on the client side.  (Setting rbd default features on the
> OSDs will have no effect.)
>
> Thanks,
>
> Ilya



-- 
Andrey Y Shevel


Re: [ceph-users] How to change setting for tunables "require_feature_tunables5"

2016-05-12 Thread Ilya Dryomov
On Thu, May 12, 2016 at 4:37 PM, Andrey Shevel  wrote:
> Thanks a lot.
>
> I tried
>
> [ceph@ceph-client ~]$ ceph osd crush tunables hammer
>
> Now I have
>
>
> [ceph@ceph-client ~]$ ceph osd crush show-tunables
> {
> "choose_local_tries": 0,
> "choose_local_fallback_tries": 0,
> "choose_total_tries": 50,
> "chooseleaf_descend_once": 1,
> "chooseleaf_vary_r": 1,
> "chooseleaf_stable": 0,
> "straw_calc_version": 1,
> "allowed_bucket_algs": 54,
> "profile": "hammer",
> "optimal_tunables": 0,
> "legacy_tunables": 0,
> "minimum_required_version": "firefly",
> "require_feature_tunables": 1,
> "require_feature_tunables2": 1,
> "has_v2_rules": 0,
> "require_feature_tunables3": 1,
> "has_v3_rules": 0,
> "has_v4_buckets": 0,
> "require_feature_tunables5": 0,
> "has_v5_rules": 0
> }
>
>
>
> and the mount command for cephfs now works fine
>
> sudo mount -t ceph 10.10.1.11:6789:/ /mnt/mycephfs -o
> name=admin,secretfile=/etc/ceph/admin.key
>
> [ceph@ceph-client ~]$ mount | grep ceph
> 10.10.1.11:6789:/ on /mnt/mycephfs type ceph
> (rw,relatime,name=admin,secret=<hidden>)
>
> It was NOT working before!
>
>
> However,  still :-(
>
> [ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile
> /etc/ceph/admin.key
> rbd: sysfs write failed
> rbd: map failed: (6) No such device or address

Do

$ rbd feature disable mycephrbd
deep-flatten,fast-diff,object-map,exclusive-lock

to disable features unsupported by the kernel client.  If you are using the
kernel client, you should create your images with

$ rbd create --size <size> --image-feature layering <image-name>

or add

rbd default features = 3

to ceph.conf on the client side.  (Setting rbd default features on the
OSDs will have no effect.)
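
For reference, "rbd default features" is a bitmask built from the
standard librbd feature bits (layering=1, striping=2, exclusive-lock=4,
object-map=8, fast-diff=16, deep-flatten=32), so 3 enables layering plus
striping.  A minimal sketch of the kernel-friendly workflow, assuming a
hypothetical image named "test-img":

$ rbd create --size 1024 --image-feature layering test-img  # layering only
$ rbd info test-img | grep features     # should list only "layering"
$ sudo rbd map rbd/test-img --id admin --keyfile /etc/ceph/admin.key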

Thanks,

Ilya


Re: [ceph-users] How to change setting for tunables "require_feature_tunables5"

2016-05-12 Thread Andrey Shevel
Thanks a lot.

I tried

[ceph@ceph-client ~]$ ceph osd crush tunables hammer

Now I have


[ceph@ceph-client ~]$ ceph osd crush show-tunables
{
"choose_local_tries": 0,
"choose_local_fallback_tries": 0,
"choose_total_tries": 50,
"chooseleaf_descend_once": 1,
"chooseleaf_vary_r": 1,
"chooseleaf_stable": 0,
"straw_calc_version": 1,
"allowed_bucket_algs": 54,
"profile": "hammer",
"optimal_tunables": 0,
"legacy_tunables": 0,
"minimum_required_version": "firefly",
"require_feature_tunables": 1,
"require_feature_tunables2": 1,
"has_v2_rules": 0,
"require_feature_tunables3": 1,
"has_v3_rules": 0,
"has_v4_buckets": 0,
"require_feature_tunables5": 0,
"has_v5_rules": 0
}



and the mount command for cephfs now works fine

sudo mount -t ceph 10.10.1.11:6789:/ /mnt/mycephfs -o
name=admin,secretfile=/etc/ceph/admin.key

[ceph@ceph-client ~]$ mount | grep ceph
10.10.1.11:6789:/ on /mnt/mycephfs type ceph
(rw,relatime,name=admin,secret=<hidden>)
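
For completeness, the secretfile used above holds just the bare key; a
sketch of one way to produce it, assuming the client.admin user:

$ sudo sh -c 'ceph auth get-key client.admin > /etc/ceph/admin.key'
$ sudo chmod 600 /etc/ceph/admin.key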

It was NOT working before!


However,  still :-(

[ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile
/etc/ceph/admin.key
rbd: sysfs write failed
rbd: map failed: (6) No such device or address



even though /var/log/messages looks fine now

==
[ceph@ceph-client ~]$ sudo tail /var/log/messages
May 12 17:28:34 ceph-client kernel: libceph: client1454730 fsid
65b8080e-d813-45ca-9cc1-ecb242967694
May 12 17:28:34 ceph-client kernel: libceph: mon0 10.10.1.11:6789
session established
May 12 17:30:01 ceph-client systemd: Created slice user-0.slice.
May 12 17:30:01 ceph-client systemd: Starting user-0.slice.
May 12 17:30:01 ceph-client systemd: Started Session 202 of user root.
May 12 17:30:01 ceph-client systemd: Starting Session 202 of user root.
May 12 17:30:01 ceph-client systemd: Removed slice user-0.slice.
May 12 17:30:01 ceph-client systemd: Stopping user-0.slice.
May 12 17:32:58 ceph-client kernel: libceph: client1464821 fsid
65b8080e-d813-45ca-9cc1-ecb242967694
May 12 17:32:58 ceph-client kernel: libceph: mon1 10.10.1.12:6789
session established


and

[ceph@ceph-client ~]$ ceph -s
cluster 65b8080e-d813-45ca-9cc1-ecb242967694
 health HEALTH_OK
 monmap e21: 5 mons at
{osd1=10.10.1.11:6789/0,osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
election epoch 7050, quorum 0,1,2,3,4 osd1,osd2,osd3,osd4,stor
  fsmap e1251: 1/1/1 up {1:0=osd3=up:active}, 3 up:standby
 osdmap e20535: 22 osds: 22 up, 22 in
  pgmap v258523: 1056 pgs, 5 pools, 259 kB data, 33 objects
1875 MB used, 81900 GB / 81902 GB avail
1056 active+clean
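
Note the error code changed from (5) Input/output error in the earlier
attempt to (6) No such device or address: with the tunables fixed, the
remaining ENXIO typically means the image was created with features the
kernel rbd driver does not support.  A quick check (a sketch):

$ rbd info mycephrbd | grep features   # lists the enabled image features
$ dmesg | tail     # the kernel usually names the unsupported features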

On Thu, May 12, 2016 at 2:01 PM, Xusangdi <xu.san...@h3c.com> wrote:
> Hi Andrey,
>
> You may change your cluster to a previous version of crush profile (e.g. 
> hammer) by command:
> `ceph osd crush tunables hammer`
>
> Or, if you want to only switch off the tunables5, do as the following steps 
> (not sure if there is a
> simpler way :<)
> 1. `ceph osd getcrushmap -o crushmap`
> 2. `crushtool -d crushmap -o decrushmap`
> 3. edit `decrushmap`, delete the `tunable chooseleaf_stable 1` line
> 4. `crushtool -c decrushmap -o crushmap`
> 5. `ceph osd setcrushmap -i crushmap`
>
> Please note either way would cause heavy pg migrations, so choose a proper 
> time to do it :O
>
> Regards,
> ---Sandy
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
>> Andrey Shevel
>> Sent: Thursday, May 12, 2016 3:55 PM
>> To: ceph-us...@ceph.com
>> Subject: Re: [ceph-users] How to change setting for tunables 
>> "require_feature_tunables5"
>>
>> Hello,
>>
>> I am still working on the issue, but no success yet.
>>
>>
>> Any ideas would be helpful.
>>
>> The problem is:
>>
>> [ceph@ceph-client ~]$ ceph -s
>> cluster 65b8080e-d813-45ca-9cc1-ecb242967694
>>  health HEALTH_OK
>>  monmap e21: 5 mons at
>> {osd1=10.10.1.11:6789/0,osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
>> election epoch 6844, quorum 0,1,2,3,4 osd1,osd2,osd3,osd4,stor
>>  osdmap e20510: 22 osds: 22 up, 22 in
>>   pgmap v251215: 400 pgs, 2 pools, 128 kB data, 6 objects
>> 2349 MB used, 81900 GB / 81902 GB avail
>>  400 active+clean
>>   client io 657 B/s rd, 1 op/s rd, 0 op/s wr
>>
>>
>>
>> [ceph@ceph-client ~]$ ceph -

Re: [ceph-users] How to change setting for tunables "require_feature_tunables5"

2016-05-12 Thread Xusangdi
Hi Andrey,

You may change your cluster to a previous version of crush profile (e.g. 
hammer) by command:
`ceph osd crush tunables hammer`

Or, if you want to only switch off the tunables5, do as the following steps 
(not sure if there is a
simpler way :<)
1. `ceph osd getcrushmap -o crushmap`
2. `crushtool -d crushmap -o decrushmap`
3. edit `decrushmap`, delete the `tunable chooseleaf_stable 1` line
4. `crushtool -c decrushmap -o crushmap`
5. `ceph osd setcrushmap -i crushmap`

Please note either way would cause heavy pg migrations, so choose a proper time 
to do it :O
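
Taken together, the per-tunable route might be scripted roughly as
follows (a sketch; the sed line is just one way to do the manual edit in
step 3):

$ ceph osd getcrushmap -o crushmap
$ crushtool -d crushmap -o decrushmap
$ sed -i '/tunable chooseleaf_stable 1/d' decrushmap  # step 3: drop the tunables5 line
$ crushtool -c decrushmap -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new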

Regards,
---Sandy

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> Andrey Shevel
> Sent: Thursday, May 12, 2016 3:55 PM
> To: ceph-us...@ceph.com
> Subject: Re: [ceph-users] How to change setting for tunables 
> "require_feature_tunables5"
>
> Hello,
>
> I am still working on the issue, but no success yet.
>
>
> Any ideas would be helpful.
>
> The problem is:
>
> [ceph@ceph-client ~]$ ceph -s
> cluster 65b8080e-d813-45ca-9cc1-ecb242967694
>  health HEALTH_OK
>  monmap e21: 5 mons at
> {osd1=10.10.1.11:6789/0,osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
> election epoch 6844, quorum 0,1,2,3,4 osd1,osd2,osd3,osd4,stor
>  osdmap e20510: 22 osds: 22 up, 22 in
>   pgmap v251215: 400 pgs, 2 pools, 128 kB data, 6 objects
> 2349 MB used, 81900 GB / 81902 GB avail
>  400 active+clean
>   client io 657 B/s rd, 1 op/s rd, 0 op/s wr
>
>
>
> [ceph@ceph-client ~]$ ceph -v
> ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>
>
> [ceph@ceph-client ~]$ rbd ls --long --pool rbd
> NAME   SIZE PARENT FMT PROT LOCK
> mycephrbd 2048G  2
> newTest   4096M  2
>
>
> [ceph@ceph-client ~]$ lsmod | grep rbd
> rbd                    73208  0
> libceph               244999  1 rbd
>
>
> [ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile 
> /etc/ceph/admin.key; sudo tail
> /var/log/messages
> rbd: sysfs write failed
> rbd: map failed: (5) Input/output error
> May 12 10:01:51 ceph-client kernel: libceph: mon2 10.10.1.13:6789 missing 
> required protocol features
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [tcpext_tcploss_percentage] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [tcp_retrans_percentage] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [tcp_outsegs] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [tcp_insegs] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [udp_indatagrams] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [udp_outdatagrams] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [udp_inerrors] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [tcpext_listendrops] in the python module [netstats].
> May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call the 
> metric handler function for
> [tcp_attemptfails] in the python module [netstats].
>
> [ceph@ceph-client ~]$ ls -l /dev/rbd*
> ls: cannot access /dev/rbd*: No such file or directory
>
>
> and in addition
>
> [ceph@ceph-client ~]$ cat /etc/*release
> NAME="Scientific Linux"
> VERSION="7.2 (Nitrogen)"
> ID="rhel"
> ID_LIKE="fedora"
> VERSION_ID="7.2"
> PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
> ANSI_COLOR="0;31"
> CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
> HOME_URL="http://www.scientificlinux.org//"
> BUG_REPORT_URL="mailto:scientific-linux-de...@listserv.fnal.gov"
>
> REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
> REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
> REDHAT_SUPPORT_PRODUCT="Scientific Linux"
> REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
> Scientific Linux release 7.2 (Nitrogen)
> Scientific Linux release 7.2 (Nitrogen)
> Scientific Linux release 7.2 (Nitrogen)
>
>

Re: [ceph-users] How to change setting for tunables "require_feature_tunables5"

2016-05-12 Thread Andrey Shevel
Hello,

I am still working on the issue, but no success yet.


Any ideas would be helpful.

The problem is:

[ceph@ceph-client ~]$ ceph -s
cluster 65b8080e-d813-45ca-9cc1-ecb242967694
 health HEALTH_OK
 monmap e21: 5 mons at
{osd1=10.10.1.11:6789/0,osd2=10.10.1.12:6789/0,osd3=10.10.1.13:6789/0,osd4=10.10.1.14:6789/0,stor=10.10.1.41:6789/0}
election epoch 6844, quorum 0,1,2,3,4 osd1,osd2,osd3,osd4,stor
 osdmap e20510: 22 osds: 22 up, 22 in
  pgmap v251215: 400 pgs, 2 pools, 128 kB data, 6 objects
2349 MB used, 81900 GB / 81902 GB avail
 400 active+clean
  client io 657 B/s rd, 1 op/s rd, 0 op/s wr



[ceph@ceph-client ~]$ ceph -v
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)


[ceph@ceph-client ~]$ rbd ls --long --pool rbd
NAME   SIZE PARENT FMT PROT LOCK
mycephrbd 2048G  2
newTest   4096M  2


[ceph@ceph-client ~]$ lsmod | grep rbd
rbd                    73208  0
libceph               244999  1 rbd


[ceph@ceph-client ~]$ sudo rbd map rbd/mycephrbd --id admin --keyfile
/etc/ceph/admin.key; sudo tail /var/log/messages
rbd: sysfs write failed
rbd: map failed: (5) Input/output error
May 12 10:01:51 ceph-client kernel: libceph: mon2 10.10.1.13:6789
missing required protocol features
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [tcpext_tcploss_percentage] in the
python module [netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [tcp_retrans_percentage] in the python
module [netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [tcp_outsegs] in the python module
[netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [tcp_insegs] in the python module
[netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [udp_indatagrams] in the python module
[netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [udp_outdatagrams] in the python
module [netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [udp_inerrors] in the python module
[netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [tcpext_listendrops] in the python
module [netstats].
May 12 10:01:55 ceph-client /usr/sbin/gmond[1890]: [PYTHON] Can't call
the metric handler function for [tcp_attemptfails] in the python
module [netstats].

[ceph@ceph-client ~]$ ls -l /dev/rbd*
ls: cannot access /dev/rbd*: No such file or directory
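
The "missing required protocol features" line above is the key symptom:
the cluster is on the jewel CRUSH profile (see the show-tunables output
quoted below), which sets chooseleaf_stable and thus requires tunables5
support that this 3.10 kernel client lacks.  One way to confirm what the
cluster demands (a sketch):

$ ceph osd crush show-tunables | grep -E '"profile"|tunables5'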


and in addition

[ceph@ceph-client ~]$ cat /etc/*release
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-de...@listserv.fnal.gov"

REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)


[ceph@ceph-client ~]$ cat /proc/version
Linux version 3.10.0-327.13.1.el7.x86_64
(mockbu...@sl7-uefisign.fnal.gov) (gcc version 4.8.5 20150623 (Red Hat
4.8.5-4) (GCC) ) #1 SMP Thu Mar 31 11:10:31 CDT 2016

[ceph@ceph-client ~]$ uname -a
Linux ceph-client.pnpi.spb.ru 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu
Mar 31 11:10:31 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux

and

[ceph@ceph-client Sys-Detect-Virtualization-0.107]$ script/virtdetect
Multiple possible virtualization systems detected:
Linux KVM
Linux lguest



Many thanks in advance for any info.




On Fri, May 6, 2016 at 10:36 PM, Andrey Shevel  wrote:
> Hello,
>
> I met the message with ceph 10.2.0 in following situation
>
>
> My details
> 
> [ceph@osd1 ~]$ date;ceph -v; ceph osd crush show-tunables
> Fri May  6 22:29:56 MSK 2016
> ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
> {
> "choose_local_tries": 0,
> "choose_local_fallback_tries": 0,
> "choose_total_tries": 50,
> "chooseleaf_descend_once": 1,
> "chooseleaf_vary_r": 1,
> "chooseleaf_stable": 1,
> "straw_calc_version": 1,
> "allowed_bucket_algs": 54,
> "profile": "jewel",
> "optimal_tunables": 1,
> "legacy_tunables": 0,
> "minimum_required_version": "jewel",
> "require_feature_tunables": 1,
> "require_feature_tunables2": 1,
> "has_v2_rules": 0,
> "require_feature_tunables3": 1,
> "has_v3_rules": 0,
> "has_v4_buckets": 1,
> "require_feature_tunables5": 1,
>