[ceph-users] slow 4k writes, Luminous with bluestore backend

2017-12-26 Thread kevin parrikar
Hi All, I upgraded my cluster from Hammer to Jewel and then to Luminous, and changed from the filestore to the bluestore backend. On a KVM VM with 4 CPUs / 2 GB RAM I attached a 20 GB rbd volume as vdc and performed the following test: dd if=/dev/zero of=/dev/vdc bs=4k count=1000 oflag=direct 1000+0 records
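Worth noting: dd with oflag=direct issues one 4k write at a time, so it measures single-operation commit latency rather than throughput. A minimal fio sketch against the same device for comparison, assuming fio with the libaio engine is installed in the guest (device name as in the test above):

    fio --name=4k-direct --filename=/dev/vdc --rw=write --bs=4k \
        --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based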

[ceph-users] pass through commands via ceph-mgr restful plugin's request endpoint

2017-12-26 Thread zhenhua.zhang
Hi, all According to the comments on the ceph-mgr restful plugin's /request POST method, it should accept ceph commands and fetch the result back. However, I am having trouble writing a curl example. I am running Ceph 12.2.2 and haven't found any documentation on posting to this endpoint. Can someone shed some light
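A sketch of what such a call might look like, assuming a key was created with "ceph restful create-key admin" and the module listens on its default HTTPS port 8003 (hostname and key are placeholders; -k because of the self-signed certificate):

    curl -k -u admin:$API_KEY -X POST https://mgr-host:8003/request \
        -H 'Content-Type: application/json' \
        -d '{"prefix": "osd df", "format": "json"}'

A GET on the same /request endpoint should list submitted requests and their results.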

[ceph-users] rbd map failed when ms_public_type=async+rdma

2017-12-26 Thread Yang, Liang
Hi all, rbd map fails when ms_public_type=async+rdma, and the network of the ceph cluster is blocked. Is this caused by the kernel rbd client not supporting RDMA?
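For what it's worth, the kernel rbd client uses the in-kernel messenger, which only speaks TCP, so it cannot reach an RDMA-only public network; librbd-based clients (QEMU, rbd-nbd) go through the userspace async messenger. A hedged ceph.conf sketch that keeps the public side on TCP (ms_cluster_type is an assumption here, per the Luminous RDMA work):

    [global]
    ms_cluster_type = async+rdma     # RDMA between OSDs only
    ms_public_type = async+posix     # TCP on the public network for kernel clients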

Re: [ceph-users] How to evict a client in rbd

2017-12-26 Thread Karun Josy
Any help is really appreciated. Karun Josy On Sun, Dec 24, 2017 at 2:18 AM, Karun Josy wrote: > Hello, > > The image is not mapped. > > # ceph --version > ceph version 12.2.1 luminous (stable) > # uname -r > 4.14.0-1.el7.elrepo.x86_64 > > > Karun Josy > > On Sat, Dec 23,

Re: [ceph-users] How to evict a client in rbd

2017-12-26 Thread Hamid EDDIMA
Hello, try: ceph osd blacklist add 10.255.0.17:0/3495340192 Hamid. On 26/12/2017 at 12:16, Karun Josy wrote: Any help is really appreciated. Karun Josy On Sun, Dec 24, 2017 at 2:18 AM, Karun Josy >
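Related commands for inspecting and undoing an entry:

    ceph osd blacklist ls
    ceph osd blacklist rm 10.255.0.17:0/3495340192

Entries also expire on their own after the default blacklist interval (one hour) unless a duration is passed to the add command.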

Re: [ceph-users] Ceph as an Alternative to HDFS for Hadoop

2017-12-26 Thread Aristeu Gil Alves Jr
In a recent thread on the list, I received various important answers to my questions about the hadoop plugin. Maybe this thread will help you. https://www.spinics.net/lists/ceph-users/msg40790.html One of the most important answers is about data locality. The last message led me to this article.

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2017-12-26 Thread Sage Weil
On Tue, 26 Dec 2017, Zhi Zhang wrote: > Hi, > > We recently started to test bluestore with a huge number of small files > (only dozens of bytes per file). We have 22 OSDs in a test cluster > using ceph-12.2.1 with 2 replicas and each OSD disk is 2TB in size. After > we wrote about 150 million files
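To compare reported usage against the expected payload while testing, the standard views are (a sketch; osd.0 is a placeholder):

    ceph df detail
    ceph osd df tree
    ceph daemon osd.0 perf dump bluestore    # bluestore allocation counters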

Re: [ceph-users] ceph status doesnt show available and used disk space after upgrade

2017-12-26 Thread kevin parrikar
It was a firewall issue on the controller nodes. After allowing the ceph-mgr port in iptables, everything is displaying correctly. Thanks to the people on IRC. Thanks a lot, Kevin On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar wrote: > accidentally removed mailing list email > >
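For reference, ceph-mgr binds to a port in the same 6800-7300 range used by OSDs and MDSs, so a rule along these lines is the kind that was needed (source network is a placeholder):

    iptables -I INPUT -p tcp -s <ceph-public-net> --dport 6800:7300 -j ACCEPT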

[ceph-users] Cache tiering on Erasure coded pools

2017-12-26 Thread Karun Josy
Hi, We are using erasure coded pools in a ceph cluster for RBD images. The Ceph version is 12.2.2 Luminous. - http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/ - Here it says we can use cache tiering in front of EC pools. To use erasure code with RBD we have a replicated

Re: [ceph-users] Cache tiering on Erasure coded pools

2017-12-26 Thread David Turner
Please use the version of the docs for your installed version of ceph. Note the Jewel in your URL and the Luminous in mine. In Luminous you no longer need a cache tier to use EC with RBDs. http://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/ On Tue, Dec 26, 2017, 4:21 PM Karun
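A minimal sketch of the Luminous approach the linked docs describe, with hypothetical pool names (EC overwrites require BlueStore OSDs):

    ceph osd pool create ec_data 128 128 erasure
    ceph osd pool set ec_data allow_ec_overwrites true
    ceph osd pool application enable ec_data rbd
    rbd create rbd/myimage --size 20G --data-pool ec_data

The image's metadata stays in the replicated pool (rbd here); only the data objects land in the EC pool.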

Re: [ceph-users] Re: Re: Can't delete file in cephfs with "No space left on device"

2017-12-26 Thread Yan, Zheng
On Tue, Dec 26, 2017 at 2:28 PM, 周 威 wrote: > We don't use hardlinks. > I reduced the mds_cache_size from 1000 to 200. > After that, num_strays reduced to about 100k > The cluster is normal now. I think there is some bug here. > Anyway, thanks for your reply! > This
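For watching this, the stray count is exposed in the MDS perf counters, and the cache limit can be changed at runtime; a sketch, assuming an MDS daemon named a:

    ceph daemon mds.a perf dump mds_cache        # includes num_strays
    ceph daemon mds.a config set mds_cache_size <inodes>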

[ceph-users] How to monitor slow request?

2017-12-26 Thread shadow_lin
I am building a ceph monitoring dashboard and I want to monitor how many slow requests there are on each node. But I find that ceph.log sometimes only logs lines like the one below: 2017-12-27 14:59:47.852396 mon.mnc000 mon.0 192.168.99.80:6789/0 2147 : cluster [WRN] Health check failed: 4 slow requests are blocked >
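When the log only carries the aggregate health message, the per-daemon breakdown can still be pulled from the cluster and from each OSD's admin socket (the OSD id below is a placeholder):

    ceph health detail                       # names the OSDs with blocked requests
    ceph daemon osd.12 dump_blocked_ops
    ceph daemon osd.12 dump_historic_ops     # recently completed slow ops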

[ceph-users] Re: Re: Re: Can't delete file in cephfs with "No space left on device"

2017-12-26 Thread 周 威
The clients they are using are mainly fuse (10.2.9 and 0.94.9) -Original Message- From: Yan, Zheng [mailto:uker...@gmail.com] Sent: 2017-12-27 10:32 To: 周 威 Cc: Cary ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Re: Re: Can't delete file in cephfs with "No

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2017-12-26 Thread Zhi Zhang
Hi Sage, Thanks for the quick reply. I read the code, and our test also proved that disk space was wasted due to min_alloc_size. We very much look forward to the "inline" data feature for small objects. We will also look into this feature and hopefully work with the community on it. Regards, Zhi Zhang
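For anyone reproducing this: the amplification comes from bluestore_min_alloc_size, which is fixed at OSD creation time, so lowering it only helps for newly created OSDs. A hedged sketch of checking and overriding it (defaults in 12.2.x are 64K for HDD and 16K for SSD):

    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

    # in ceph.conf before the OSD is created
    [osd]
    bluestore_min_alloc_size_hdd = 4096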

Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem (possibly tcmalloc related)

2017-12-26 Thread shadow_lin
I had disabled scrub before the test. 2017-12-27 shadow_lin From: Webert de Souza Lima Sent: 2017-12-22 20:37 Subject: Re: [ceph-users] [luminous 12.2.2] Cluster write performance degradation problem (possibly tcmalloc related) To: "ceph-users" Cc:
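For completeness, the usual flags for ruling scrub out during a benchmark:

    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # ... run the test ...
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub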