Hi!
I have run into a confusing case:
When writing to CephFS and RBD at the same time, after a while the RBD process hangs,
and I find in the kernel log:
kernel: rbd: rbd0: encountered watch error: -10
I tried to reproduce it with the steps below and succeeded:
- run 2 dd processes writing to CephFS
- perform file writes on the rbd device
Hi!
I have found a confusing question about starting/stopping the Ceph cluster via systemd:
- when the cluster is up, restarting ceph.target restarts all OSD services
- when the cluster is down, starting ceph.target or ceph-osd.target does not
start the OSD services
I have googled this issue; it seems the workaround is
Sure.
It seems there is a bug in the test itself:
https://jenkins.ceph.com/job/ceph-pull-requests-arm64/25498/console
Best Wishes
- Original Message -
From: "Ilya Dryomov"
To: "xiang.dai"
Cc: "ceph-users"
Sent: Wednesday, November 7, 2018 10:40:13 PM
Subject: Re: [ceph-users] [bug]
Hi!
I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
(stable) and I want `ls -ld` to report the whole directory size in CephFS.
The mount.ceph man page says:
rbytes Report the recursive size of the directory contents for st_size on
directories. Default: on
But without
Hi!
I use ceph 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable),
and I find that:
When expanding the whole cluster, I updated pg_num and everything succeeded, but
the status is as below:
cluster:
id: 41ef913c-2351-4794-b9ac-dd340e3fbc75
health: HEALTH_WARN
3 pools have pg_num > pgp_num
Then
I installed a ceph cluster with 8 OSDs, 3 pools and replica size 1 (as
osd_pool_default_size) on 2 machines.
I followed the formula in
http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/#choosing-the-number-of-placement-groups
to work out pg_num.
Then I got a total pg_num of 192, and I set
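The rule of thumb from that page (roughly 100 PGs per OSD, divided by the replica count, split across pools, rounded up to a power of two) can be sketched as below; rounding conventions vary, so this need not reproduce the 192 above:

```python
def pgs_per_pool(osds, replicas, pools, target_per_osd=100):
    """Rule-of-thumb PG count: (OSDs * target) / replicas,
    split evenly across pools, rounded up to a power of two."""
    raw = osds * target_per_osd / replicas / pools
    power = 1
    while power < raw:
        power *= 2
    return power

# 8 OSDs, replica size 1, 3 pools, as in the cluster above:
print(pgs_per_pool(8, 1, 3))  # 800 / 3 ≈ 266.7 -> 512
```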
Hi!
I want to use Prometheus + Grafana to monitor Ceph, and I found this URL:
http://docs.ceph.com/docs/master/mgr/prometheus/
Then I downloaded the Ceph dashboard for Grafana:
https://grafana.com/dashboards/7056
It is so cool.
But some metrics do not work for ceph 13 (Mimic), like
I mounted CephFS at /cephfs and created a dir in it:
[root@test-0 guhailin]# ll -h
drwxr-xr-x 1 guhailin ghlclass 0 Aug 13 15:01 a
Then I scp a file into it:
[root@test-0 guhailin]# ls a/
hadoop.tar.gz
[root@test-0 guhailin]# pwd
/cephfs/user/guhailin
[root@test-0 guhailin]# stat a/
I created an rbd named dx-app with 500G and mapped it as rbd0.
But I find the size differs between commands:
[root@dx-app docker]# rbd info dx-app
rbd image 'dx-app':
size 32000 GB in 8192000 objects <
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1206643c9869
format: 2
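For what it's worth, the `rbd info` output above is internally consistent: order 22 means 2^22-byte (4 MiB) objects, and 8192000 such objects are exactly 32000 GiB, which suggests the image was resized well past the original 500G at some point. A quick check:

```python
# order 22 -> object size of 2**22 bytes (4 MiB)
object_size = 2 ** 22
objects = 8192000
size_gib = objects * object_size // 2 ** 30
print(size_gib)  # 32000, matching "size 32000 GB in 8192000 objects"

# a 500 GiB image at the same object size would have 128000 objects
print(500 * 2 ** 30 // object_size)  # 128000
```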
Hi!
I found an rbd map service issue:
[root@dx-test ~]# systemctl status rbdmap
● rbdmap.service - Map RBD devices
Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor preset:
disabled)
Active: active (exited) (Result: exit-code) since Sat 2018-07-28 13:55:01 CST;
11min ago
Hi!
I want to monitor rbd image size so that I can enlarge an image when its usage
goes above 80%.
I found a way with `rbd du`:
total=$(rbd du $rbd_name | grep $rbd_name | awk '{print $2}')
used=$(rbd du $rbd_name | grep $rbd_name | awk '{print $3}')
percentage=$(( used * 100 / total ))
But in this way,
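`rbd du` also accepts `--format json`, which avoids the fragile grep/awk parsing of human-readable sizes. A sketch assuming the `images` / `provisioned_size` / `used_size` keys of recent releases, run here against a canned sample instead of a live cluster:

```python
import json

def usage_percent(rbd_du_json, name):
    """Compute used/provisioned for one image from `rbd du --format json` output."""
    data = json.loads(rbd_du_json)
    for img in data["images"]:
        if img["name"] == name:
            return 100 * img["used_size"] // img["provisioned_size"]
    raise KeyError(name)

# Canned sample standing in for: rbd du dx-app --format json
sample = ('{"images": [{"name": "dx-app", '
          '"provisioned_size": 536870912000, "used_size": 429496729600}]}')
print(usage_percent(sample, "dx-app"))  # 80
```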
I met the issue below:
INFO: initialize ceph mon ...
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.25): /usr/bin/ceph-deploy
--overwrite-conf mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts
Hi, all!
I found a confusing case:
[root@test]# rbd ls
[root@test]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 200G 0 part
│ ├─root 253:0 0 50G 0 lvm /
│ └─swap 253:1 0 8G 0 lvm [SWAP]
└─sda3 8:3 0 186.3G 0 part
Hi, all!
I use CentOS 7.4 and want to use ceph rbd.
I found that object-map and fast-diff do not work.
rbd image 'app':
size 500 GB in 128000 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.10a2643c9869
format: 2
features: layering, exclusive-lock, object-map, fast-diff
Hi, all!
I use rbd and ran into the issue below:
when I create an rbd image with the features:
layering,exclusive-lock,object-map,fast-diff
it fails to map:
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the
kernel with "rbd feature