[ceph-users] Ceph-fuse single read limitation?

2015-11-20 Thread Z Zhang
Hi Guys, we have a very small cluster with 3 OSDs but a 40Gb NIC. We use ceph-fuse as the cephfs client and enable readahead, but a single-stream read of a large file from cephfs via fio, dd or cp can only achieve ~70+ MB/s, even if fio or dd's block size is set to 1MB or 4MB. From the
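A minimal sketch of the kind of single-stream read test described above, assuming a ceph-fuse mount at /mnt/cephfs and a pre-existing large file there (paths and sizes are placeholders, not details from the post):

    # drop the page cache first so the read actually hits the cluster
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/cephfs/bigfile of=/dev/null bs=4M
    fio --name=single-read --rw=read --bs=4M --numjobs=1 --ioengine=sync \
        --filename=/mnt/cephfs/bigfile --size=10G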

[ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang
Hi Guys, I am trying the latest ceph-9.1.0 with rocksdb 4.1 and ceph-9.0.3 with rocksdb 3.11 as the OSD backend. I use rbd to test performance and the following is my cluster info.
    [ceph@xxx ~]$ ceph -s
        cluster b74f3944-d77f-4401-a531-fa5282995808
         health HEALTH_OK
         monmap e1: 1 mons at
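For context, a hedged sketch of how the experimental key/value OSD backend was commonly selected in a ceph.conf of that era, followed by a simple rbd write benchmark; the option names, pool and image names are assumptions rather than details from this thread:

    [osd]
    # assumed option names for the key/value object store backend
    osd objectstore = keyvaluestore
    keyvaluestore backend = rocksdb

    # hypothetical pool/image used to exercise the backend through librbd
    rbd create testpool/testimg --size 10240
    rbd bench-write testpool/testimg --io-size 4096 --io-threads 16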

Re: [ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang
don't provide this option now > > On Tue, Oct 20, 2015 at 9:22 PM, Z Zhang <zhangz.da...@outlook.com> wrote: > > Thanks, Sage, for pointing out the PR and ceph branch. I will take a closer > > look. Yes, I am trying the KVStore backend. The reason we are trying it is that >

Re: [ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang
.com > CC: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org > Subject: Re: [ceph-users] Write performance issue under rocksdb kvstore > > On Tue, 20 Oct 2015, Z Zhang wrote: > > Thanks, Sage, for pointing out the PR and ceph branch. I will take a > > closer look.

Re: [ceph-users] Write performance issue under rocksdb kvstore

2015-10-20 Thread Z Zhang
...@outlook.com CC: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org Subject: Re: [ceph-users] Write performance issue under rocksdb kvstore On Tue, 20 Oct 2015, Z Zhang wrote: > Hi Guys, > > I am trying latest ceph-9.1.0 with rocksdb 4.1 and ceph-9.0.3 with > rocksdb 3.11 as OSD backen

[ceph-users] FW: Long tail latency due to journal aio io_submit takes long time to return

2015-08-25 Thread Z Zhang
FW to ceph-user Thanks. Zhi Zhang (David) From: zhangz.da...@outlook.com To: ceph-de...@vger.kernel.org Subject: Long tail latency due to journal aio io_submit takes long time to return Date: Tue, 25 Aug 2015 18:46:34 +0800 Hi Ceph-devel,
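As background for the journal discussion, a hedged sketch of FileStore journal settings that influence the aio submission path; the values are illustrative only and not taken from the original mail:

    [osd]
    # aio/dio decide whether journal writes are submitted via io_submit at all
    journal aio = true
    journal dio = true
    # assumed tuning knobs; larger limits mean bigger batches per io_submit call
    journal max write bytes = 10485760
    journal max write entries = 1000
    journal queue max ops = 300
    journal queue max bytes = 33554432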

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-08-06 Thread Z Zhang
On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang zhangz.da...@outlook.com wrote: Date: Thu, 30 Jul 2015 11:37:37 +0300 Subject: Re: [ceph-users] which kernel version can help avoid kernel client deadlock From: idryo...@gmail.com To: zhangz.da...@outlook.com CC: chaofa...@owtware.com; ceph

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang
Date: Thu, 30 Jul 2015 13:11:11 +0300 Subject: Re: [ceph-users] which kernel version can help avoid kernel client deadlock From: idryo...@gmail.com To: zhangz.da...@outlook.com CC: chaofa...@owtware.com; ceph-users@lists.ceph.com On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang zhangz.da

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang
Date: Thu, 30 Jul 2015 11:37:37 +0300 Subject: Re: [ceph-users] which kernel version can help avoid kernel client deadlock From: idryo...@gmail.com To: zhangz.da...@outlook.com CC: chaofa...@owtware.com; ceph-users@lists.ceph.com On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang zhangz.da

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-29 Thread Z Zhang
We also hit a similar issue from time to time on CentOS with the 3.10.x kernel. In iostat we can see the kernel rbd client's util is 100% but no r/w IO, and we can't umount/unmap this rbd client. After restarting the OSDs it becomes normal. @Ilya, could you please point us to the possible fixes on
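A hedged sketch of the usual first checks when a krbd device hangs like this; device names and paths are placeholders, and the debugfs entry requires debugfs to be mounted:

    # which images are mapped to which /dev/rbdX devices
    rbd showmapped
    # 100% util with no completions suggests requests stuck in flight
    iostat -x 1 /dev/rbd0
    # in-flight requests per OSD as seen by the kernel client
    cat /sys/kernel/debug/ceph/*/osdc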

[ceph-users] Timeout mechanism in ceph client tick

2015-07-02 Thread Z Zhang
Hi Guys, reading through the ceph client code, there is a timeout mechanism in tick when doing mount. Recently we have seen some client requests to the MDS take a long time to get a reply when doing massive tests against cephfs. If we want the cephfs user to see a timeout instead of waiting for the reply, can we
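For the mount path specifically, the client already has a configurable timeout; a hedged example below (the option is believed to be client_mount_timeout with a 300-second default, but verify against your release):

    [client]
    # fail the mount after 60 seconds instead of waiting indefinitely
    client mount timeout = 60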

Re: [ceph-users] krbd splitting large IO's into smaller IO's

2015-06-29 Thread Z Zhang
into smaller IO's From: idryo...@gmail.com To: zhangz.da...@outlook.com CC: ceph-users@lists.ceph.com On Fri, Jun 26, 2015 at 3:17 PM, Z Zhang zhangz.da...@outlook.com wrote: Hi Ilya, I am seeing your recent email talking about krbd splitting large IO's into smaller IO's, see below link

[ceph-users] krbd splitting large IO's into smaller IO's

2015-06-26 Thread Z Zhang
Hi Ilya, I saw your recent email talking about krbd splitting large IO's into smaller IO's; see the link below. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg20587.html I just tried it on my ceph cluster using kernel 3.10.0-1. I adjusted both max_sectors_kb and max_hw_sectors_kb of
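A hedged sketch of the sysfs adjustment referred to above, using rbd0 as a placeholder device; whether a larger value sticks depends on the kernel's krbd limits:

    # current limits, in KB (max_hw_sectors_kb is read-only)
    cat /sys/block/rbd0/queue/max_hw_sectors_kb
    cat /sys/block/rbd0/queue/max_sectors_kb
    # try to allow 4MB requests
    echo 4096 > /sys/block/rbd0/queue/max_sectors_kb
    # avgrq-sz (in 512-byte sectors) shows whether large IOs survive the split
    iostat -x 1 rbd0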