[ceph-users] kernel:rbd:rbd0: encountered watch error: -10

2018-11-09 Thread xiang . dai
Hi! I hit a confusing case: when writing to cephfs and rbd at the same time, after a while the rbd process hangs and I see: kernel:rbd:rbd0: encountered watch error: -10 I can reproduce it with the steps below: - run 2 dd processes writing to cephfs - do file writes on the rbd I

[ceph-users] can not start osd service by systemd

2018-11-09 Thread xiang . dai
Hi! I have a confusing question about starting/stopping a ceph cluster via systemd: - when the cluster is up, restarting ceph.target restarts all osd services - when the cluster is down, starting ceph.target or ceph-osd.target does not start the osd services. I have googled this issue; the workaround seems to be
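A common workaround (a sketch only; the OSD ids below are hypothetical, and it assumes the per-OSD instance units were never enabled) is to enable and start the instantiated ceph-osd@ units directly rather than relying on the target:

```shell
# systemd targets only pull in units that are enabled and wanted by them.
# If ceph-osd.target does not bring the OSDs up, enable the per-OSD
# instance units explicitly (replace 0 and 1 with your OSD ids):
systemctl enable ceph-osd@0 ceph-osd@1
systemctl start ceph-osd@0 ceph-osd@1

# Once the instances are enabled, the aggregate target works on restart:
systemctl start ceph-osd.target
```

With the instance units enabled, `ceph.target` can again restart everything as one group.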

Re: [ceph-users] [bug] mount.ceph man description is wrong

2018-11-08 Thread xiang . dai
Sure. It seems there is a bug in the test itself: https://jenkins.ceph.com/job/ceph-pull-requests-arm64/25498/console Best Wishes - Original Message - From: "Ilya Dryomov" To: "xiang.dai" Cc: "ceph-users" Sent: Wednesday, November 7, 2018 10:40:13 PM Subject: Re: [ceph-users] [bug]

[ceph-users] [bug] mount.ceph man description is wrong

2018-11-07 Thread xiang . dai
Hi! I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable), and I want to run `ls -ld` to read a whole dir's size in cephfs. `man mount.ceph` says: rbytes Report the recursive size of the directory contents for st_size on directories. Default: on But without
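Passing the option explicitly at mount time shows the behaviour the man page describes (a sketch; the monitor address, mount point, and credential paths here are hypothetical):

```shell
# Mount cephfs with rbytes explicitly enabled, so st_size on a
# directory reports the recursive byte count of its contents:
mount -t ceph 192.168.1.10:6789:/ /cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,rbytes

# The size column of `ls -ld` on a directory is now its recursive size:
ls -ld /cephfs/some/dir
```

If the size still shows as a small fixed number, the kernel client in use may default rbytes to off regardless of what the man page states.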

[ceph-users] why set pg_num do not update pgp_num

2018-10-18 Thread xiang . dai
Hi! I use ceph 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable), and find that when expanding the whole cluster, I update pg_num and everything succeeds, but the status is as below: cluster: id: 41ef913c-2351-4794-b9ac-dd340e3fbc75 health: HEALTH_WARN 3 pools have pg_num > pgp_num Then
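The warning clears once pgp_num is raised to match: pg_num creates the placement groups, while pgp_num controls how many of them are considered for placement, and it is not bumped automatically when pg_num changes. A sketch of the fix (the pool name and value are hypothetical):

```shell
# For each pool named in the HEALTH_WARN, raise pgp_num to match pg_num:
ceph osd pool set mypool pgp_num 256

# Verify the two values now agree:
ceph osd pool get mypool pg_num
ceph osd pool get mypool pgp_num
```

Raising pgp_num triggers the actual data rebalancing, so expect backfill traffic after this step.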

[ceph-users] how can i config pg_num

2018-10-16 Thread xiang . dai
I installed a ceph cluster with 8 osds, 3 pools and 1 replica (as osd_pool_default_size) on 2 machines. I followed the formula in http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/#choosing-the-number-of-placement-groups to count pg_num, got a total pg_num of 192, and set
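The documented rule of thumb can be sketched in shell arithmetic: total PGs ≈ (OSDs × 100) / replica count, divided across pools and rounded up to the next power of two. The numbers below mirror the poster's setup (8 OSDs, 3 pools, size 1); note that rounding up gives a different per-pool value than the 64 the poster apparently used:

```shell
# Rule-of-thumb PG count: (osds * 100 / size) split across pools,
# rounded up to the next power of two.
osds=8 size=1 pools=3
per_pool=$(( osds * 100 / size / pools ))   # integer division: 266
pg_num=1
while [ "$pg_num" -lt "$per_pool" ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "$per_pool $pg_num"   # 266 512
```

The formula is a starting point, not a hard rule; smaller powers of two are common for small clusters to keep the per-OSD PG count in a sane range.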

[ceph-users] ceph prometheus monitor

2018-09-17 Thread xiang . dai
Hi! I want to use Prometheus+Grafana to monitor ceph, and I found the page below: http://docs.ceph.com/docs/master/mgr/prometheus/ Then I downloaded the ceph dashboard from grafana: https://grafana.com/dashboards/7056 It is so cool. But some metrics do not work for ceph 13 (Mimic), like

[ceph-users] issues about module prometheus

2018-09-13 Thread xiang . dai
Hi! I want to use Prometheus+Grafana to monitor ceph, and I found the page below: http://docs.ceph.com/docs/master/mgr/prometheus/ Then I downloaded the ceph dashboard from grafana: https://grafana.com/dashboards/7056 It is so cool. But some metrics do not work for ceph 13 (Mimic), like

[ceph-users] stat file size is 0

2018-08-13 Thread xiang . dai
I mounted cephfs at /cephfs and created a dir in it: [root@test-0 guhailin]# ll -h drwxr-xr-x 1 guhailin ghlclass 0 8月 13 15:01 a Then scp'd a file into it: [root@test-0 guhailin]# ls a/ hadoop.tar.gz [root@test-0 guhailin]# pwd /cephfs/user/guhailin [root@test-0 guhailin]# stat a/

[ceph-users] different size of rbd

2018-08-02 Thread xiang . dai
I created an rbd named dx-app with 500G and mapped it as rbd0, but the size differs between commands: [root@dx-app docker]# rbd info dx-app rbd image 'dx-app': size 32000 GB in 8192000 objects < order 22 (4096 kB objects) block_name_prefix: rbd_data.1206643c9869 format: 2

[ceph-users] questions about rbdmap service

2018-08-02 Thread xiang . dai
Hi! I found an rbdmap service issue: [root@dx-test ~]# systemctl status rbdmap ● rbdmap.service - Map RBD devices Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset: disabled) Active: active (exited) (Result: exit-code) since 六 2018-07-28 13:55:01 CST; 11min ago

[ceph-users] questions about rbd used percentage

2018-08-02 Thread xiang . dai
Hi! I want to monitor rbd image size so I can enlarge the image when usage goes above 80%. I found a way with `rbd du`: total=$(rbd du $rbd_name | grep $rbd_name | awk '{print $2}') used=$(rbd du $rbd_name | grep $rbd_name | awk '{print $3}') percentage=((used/total)) But in this way,
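The snippet above has two problems: `percentage=((used/total))` is not valid assignment syntax (it would need `$(( ))`), and shell arithmetic is integer-only, so used/total is 0 for any usage below 100%. A sketch of a fix, using awk for the floating-point step (the byte counts are sample values standing in for the `rbd du` parsing; this example does not call `rbd` itself):

```shell
# Hypothetical values in place of the `rbd du` output parsed above:
used=409600
total=512000

# awk does floating-point math; compute the percentage and truncate
# to an integer so it can be compared with -ge below.
percentage=$(awk -v u="$used" -v t="$total" 'BEGIN { printf "%d", u / t * 100 }')
echo "$percentage"   # 80 for these sample numbers

if [ "$percentage" -ge 80 ]; then
    echo "image needs enlarging"
fi
```

Note that `rbd du` reports provisioned vs. actually-used space, so this percentage tracks thin-provisioned usage, not filesystem fullness inside the image.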

[ceph-users] rbdmap service issue

2018-08-01 Thread xiang . dai
Hi! I found an rbdmap service issue: [root@dx-test ~]# systemctl status rbdmap ● rbdmap.service - Map RBD devices Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset: disabled) Active: active (exited) (Result: exit-code) since 六 2018-07-28 13:55:01 CST; 11min ago

[ceph-users] rbdmap service failed but exit 1

2018-07-28 Thread xiang . dai
Hi! I found an rbdmap service issue: [root@dx-test ~]# systemctl status rbdmap ● rbdmap.service - Map RBD devices Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset: disabled) Active: active (exited) (Result: exit-code) since 六 2018-07-28 13:55:01 CST; 11min ago

[ceph-users] init mon fail since use service rather than systemctl

2018-06-21 Thread xiang . dai
I hit the issue below: INFO: initialize ceph mon ... [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.25): /usr/bin/ceph-deploy --overwrite-conf mon create-initial [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts

[ceph-users] how can i remove rbd0

2018-06-18 Thread xiang . dai
Hi, all! I found a confusing situation: [root@test]# rbd ls [root@test]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 1G 0 part /boot ├─sda2 8:2 0 200G 0 part │ ├─root 253:0 0 50G 0 lvm / │ └─swap 253:1 0 8G 0 lvm [SWAP] └─sda3 8:3 0 186.3G 0 part
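When `rbd ls` shows no images but the device still appears in lsblk, the kernel mapping is stale. A sketch of the cleanup (assuming the device from the poster's output is /dev/rbd0):

```shell
# List current kernel mappings (image, pool, device) to confirm:
rbd showmapped

# Unmap the stale device; rbd0 should then disappear from lsblk.
# Unmount anything mounted from it first.
rbd unmap /dev/rbd0
```

If the unmap fails with "device busy", something still holds the device open (a mount, a process, or LVM); that must be released before unmapping.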

[ceph-users] which kernel support object-map, fast-diff

2018-05-15 Thread xiang . dai
Hi, all! I use CentOS 7.4 and want to use ceph rbd. I found that object-map and fast-diff do not work. rbd image 'app': size 500 GB in 128000 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.10a2643c9869 format: 2 features: layering, exclusive-lock, object-map, fast-diff

[ceph-users] rbd feature map fail

2018-05-15 Thread xiang . dai
Hi, all! I am using rbd and hit the issue below: when I create an rbd image with the features layering,exclusive-lock,object-map,fast-diff, mapping fails: rbd: sysfs write failed RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature
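The CentOS 7 kernel rbd client (3.10 with backports) does not support object-map or fast-diff, so the usual fix is exactly what the error suggests: disable the unsupported features before mapping. A sketch, using the image name from the earlier thread:

```shell
# Disable the features the kernel client cannot handle:
rbd feature disable app object-map fast-diff

# layering (and, on recent-enough RHEL/CentOS kernels, exclusive-lock)
# remain enabled; mapping should now succeed:
rbd map app
```

Depending on the exact kernel, exclusive-lock may also need to be disabled; alternatively, set `rbd default features` in ceph.conf so new images are created with only kernel-supported features.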