Re: [ceph-users] init mon fail since use service rather than systemctl

2018-06-21 Thread xiang....@sky-data.cn
Thanks very much

- Original Message -
From: "Alfredo Deza" 
To: "xiang dai" 
Cc: "ceph-users" 
Sent: Thursday, June 21, 2018 8:42:34 PM
Subject: Re: [ceph-users] init mon fail since use service rather than systemctl

On Thu, Jun 21, 2018 at 8:41 AM,   wrote:
> I met below issue:
>
> INFO: initialize ceph mon ...
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.25): /usr/bin/ceph-deploy
> --overwrite-conf mon create-initial
> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts dx-storage
> [ceph_deploy.mon][DEBUG ] detecting platform for host dx-storage ...
> [dx-storage][DEBUG ] connected to host: dx-storage
> [dx-storage][DEBUG ] detect platform information from remote host
> [dx-storage][DEBUG ] detect machine type
> [ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.4.1708 Core
> [dx-storage][DEBUG ] determining if provided host has same hostname in
> remote
> [dx-storage][DEBUG ] get remote short hostname
> [dx-storage][DEBUG ] deploying mon to dx-storage
> [dx-storage][DEBUG ] get remote short hostname
> [dx-storage][DEBUG ] remote hostname: dx-storage
> [dx-storage][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [dx-storage][DEBUG ] create the mon path if it does not exist
> [dx-storage][DEBUG ] checking for done path:
> /var/lib/ceph/mon/ceph-dx-storage/done
> [dx-storage][DEBUG ] done path does not exist:
> /var/lib/ceph/mon/ceph-dx-storage/done
> [dx-storage][INFO  ] creating keyring file:
> /var/lib/ceph/tmp/ceph-dx-storage.mon.keyring
> [dx-storage][DEBUG ] create the monitor keyring file
> [dx-storage][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i
> dx-storage --keyring /var/lib/ceph/tmp/ceph-dx-storage.mon.keyring
> [dx-storage][INFO  ] unlinking keyring file
> /var/lib/ceph/tmp/ceph-dx-storage.mon.keyring
> [dx-storage][DEBUG ] create a done file to avoid re-doing the mon deployment
> [dx-storage][DEBUG ] create the init path if it does not exist
> [dx-storage][DEBUG ] locating the `service` executable...
> [dx-storage][INFO  ] Running command: /usr/sbin/service ceph -c
> /etc/ceph/ceph.conf start mon.dx-storage
> [dx-storage][WARNING] The service command supports only basic LSB actions
> (start, stop, restart, try-restart, reload, force-reload, status). For other
> actions, please try to use systemctl.
> [dx-storage][ERROR ] RuntimeError: command returned non-zero exit status: 2
> [ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph
> -c /etc/ceph/ceph.conf start mon.dx-storage
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
>
> My test host is CentOS 7.4, and I think it should call systemctl rather than
> service, but it still calls service and fails.
>
> My systemctl status is running, so why does ceph-deploy choose service
> rather than systemctl?

You are using ceph-deploy version 1.5.25, which has a bug in systemd
detection on CentOS that was fixed in 1.5.38:

http://docs.ceph.com/ceph-deploy/docs/changelog.html#id6
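A quick way to check whether an installed ceph-deploy predates the fix is a version-string comparison; a minimal sketch (the hard-coded `installed` value and the commented-out pip upgrade line are illustrative, not part of the original thread):

```shell
#!/bin/sh
# Version of ceph-deploy currently installed, e.g. parsed from:
#   ceph-deploy --version
installed="1.5.25"
fixed="1.5.38"   # first release with the systemd-detection fix

# sort -V orders version strings numerically; if the installed version
# sorts first (and differs from the fixed one), it predates the fix.
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "ceph-deploy $installed predates the fix; please upgrade"
    # pip install --upgrade 'ceph-deploy>=1.5.38'   # one possible upgrade path
fi
```

How you upgrade depends on how ceph-deploy was installed (pip, yum repo, etc.).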


>
> Could anyone tell me details?
>
> Thanks in advance.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
-- 
Dai Xiang
Nanjing Sky-Data Information Technology Co., Ltd.
Tel: +86 1 3382776490
Website: www.sky-data.cn
Try the SkyDiscovery intelligent computing platform for free


Re: [ceph-users] Re: how can i remove rbd0

2018-06-18 Thread xiang....@sky-data.cn
Stopping the rbdmap service unmapped all RBD devices; rbd showmapped now shows none.


From: "许雪寒"  
To: "xiang dai" , "ceph-users" 
 
Sent: Tuesday, June 19, 2018 11:01:03 AM 
Subject: Re: how can i remove rbd0



rbd unmap [dev-path] 
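For reference, a small sketch of finding and unmapping stale devices (the guard and the awk pipeline are illustrative additions; the column layout assumes the classic `rbd showmapped` output with the device path in the fifth column):

```shell
#!/bin/sh
# Skip gracefully on machines without the rbd CLI installed.
command -v rbd >/dev/null 2>&1 || { echo "rbd CLI not available"; exit 0; }

# List currently mapped RBD devices; output looks like:
#   id pool image snap device
#   0  rbd  vol1  -    /dev/rbd0
rbd showmapped

# Unmap a single device by its /dev path:
rbd unmap /dev/rbd0

# Or unmap everything still mapped: skip the header row, take the
# device column, and feed each path to rbd unmap.
rbd showmapped | awk 'NR > 1 {print $5}' | xargs -r -n1 rbd unmap
```

An unmap fails with EBUSY if the device is still mounted or otherwise in use, so unmount any filesystems on it first.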




From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of xiang@sky-data.cn
Sent: June 19, 2018 10:52
To: ceph-users
Subject: [ceph-users] how can i remove rbd0

Hi, all!

I found a confusing issue:

[root@test]# rbd ls 
[root@test]# lsblk 
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 
sda 8:0 0 931.5G 0 disk 
├─sda1 8:1 0 1G 0 part /boot 
├─sda2 8:2 0 200G 0 part 
│ ├─root 253:0 0 50G 0 lvm / 
│ └─swap 253:1 0 8G 0 lvm [SWAP] 
└─sda3 8:3 0 186.3G 0 part 
sr0 11:0 1 1024M 0 rom 
rbd0 252:0 0 500G 0 disk <=== 

I have stopped the rbdmap service.

I do not want to reboot; how can I remove rbd0?


Re: [ceph-users] which kernel support object-map, fast-diff

2018-05-15 Thread xiang....@sky-data.cn
Could you give a list of which features are supported and which are not?

- Original Message -
From: "Konstantin Shalygin" <k0...@k0ste.ru>
To: "ceph-users" <ceph-users@lists.ceph.com>
Cc: "xiang dai" <xiang@sky-data.cn>
Sent: Tuesday, May 15, 2018 4:57:00 PM
Subject: Re: [ceph-users] which kernel support object-map, fast-diff

> So which kernel version supports those features?


No kernel supports these features yet.
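Because the kernel client lacks these features, the usual workaround is to disable them on the image before mapping it with krbd; a sketch (the pool/image name `rbd/myimage` is a placeholder, and the guard is an illustrative addition):

```shell
#!/bin/sh
# Skip gracefully on machines without the rbd CLI installed.
command -v rbd >/dev/null 2>&1 || { echo "rbd CLI not available"; exit 0; }

# Show which features the image currently has enabled.
rbd info rbd/myimage | grep features

# Disable the features the kernel client does not implement, so that
# 'rbd map' no longer refuses the image.
rbd feature disable rbd/myimage object-map fast-diff deep-flatten

# Mapping with the kernel client should now succeed.
rbd map rbd/myimage
```

Note that object-map and fast-diff, once disabled, can be re-enabled later, but doing so requires rebuilding the object map.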



k


[ceph-users] mount failed since failed to load ceph kernel module

2017-11-13 Thread xiang....@sky-data.cn
Hi! 

I ran into a confusing issue in Docker:

After installing Ceph successfully, I tried to mount CephFS but it failed: 

[root@dbffa72704e4 ~]$ /bin/mount 172.17.0.4:/ /cephfs -t ceph -o 
name=admin,secretfile=/etc/ceph/admin.secret -v 
failed to load ceph kernel module (1) 
parsing options: rw,name=admin,secretfile=/etc/ceph/admin.secret 
mount error 5 = Input/output error 

But the Ceph kernel modules are loaded: 

[root@dbffa72704e4 ~]$ lsmod | grep ceph 
ceph 327687 0 
libceph 287066 1 ceph 
dns_resolver 13140 2 nfsv4,libceph 
libcrc32c 12644 3 xfs,libceph,dm_persistent_data 

Checking the Ceph state (I only set a data disk for the OSD): 

[root@dbffa72704e4 ~]$ ceph -s 
cluster: 
id: 20f51975-303e-446f-903f-04e1feaff7d0 
health: HEALTH_WARN 
Reduced data availability: 128 pgs inactive 
Degraded data redundancy: 128 pgs unclean 

services: 
mon: 2 daemons, quorum dbffa72704e4,5807d12f920e 
mgr: dbffa72704e4(active), standbys: 5807d12f920e 
mds: cephfs-1/1/1 up {0=5807d12f920e=up:creating}, 1 up:standby 
osd: 0 osds: 0 up, 0 in 

data: 
pools: 2 pools, 128 pgs 
objects: 0 objects, 0 bytes 
usage: 0 kB used, 0 kB / 0 kB avail 
pgs: 100.000% pgs unknown 
128 unknown 

[root@dbffa72704e4 ~]$ ceph version 
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
(stable) 

My container is based on centos:centos7.2.1511; the kernel is 
3.10.0-514.el7.x86_64. 

I have seen some Ceph-related images on Docker Hub, so I assume the above 
should work. Did I miss something important? 

-- 
Best Regards 
Dai Xiang 