Hi Guys,
We currently have a very small cluster with only 3 OSDs, but it uses a 40Gb
NIC. We use ceph-fuse as the CephFS client with readahead enabled, but a
single-stream sequential read of a large file from CephFS via fio, dd or cp
only achieves ~70+ MB/s, even with fio or dd's block size set to 1 MB or 4 MB.
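For reference, the single-stream read test described above can be sketched like this (the CephFS file path is an assumption; the sketch generates a small demo file so the commands run anywhere):

```shell
# FILE is a placeholder; on the real cluster it would be a large file on
# the ceph-fuse mount, e.g. /mnt/cephfs/largefile (path is an assumption).
FILE=${FILE:-/tmp/seqread_demo}
[ -f "$FILE" ] || dd if=/dev/zero of="$FILE" bs=1M count=64 2>/dev/null

# Single sequential reader, 4 MB block size; dd reports throughput on stderr.
dd if="$FILE" of=/dev/null bs=4M
```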
Hi Guys,
I am trying the latest ceph-9.1.0 with rocksdb 4.1 and ceph-9.0.3 with rocksdb
3.11 as the OSD backend. I use rbd to test performance, and my cluster info
follows.
[ceph@xxx ~]$ ceph -s
cluster b74f3944-d77f-4401-a531-fa5282995808
health HEALTH_OK
monmap e1: 1 mons at
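For reference, pointing the OSDs at the experimental key/value backend in this release line is done in ceph.conf roughly as below (a sketch; option names are from the KeyValueStore of that era and should be verified against your build):

```ini
[osd]
osd objectstore = keyvaluestore
keyvaluestore backend = rocksdb
```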
don't provide this option now
>
> On Tue, Oct 20, 2015 at 9:22 PM, Z Zhang <zhangz.da...@outlook.com> wrote:
> > Thanks, Sage, for pointing out the PR and ceph branch. I will take a closer
> > look. Yes, I am trying KVStore backend. The reason we are trying it is that
>
> CC: ceph-users@lists.ceph.com; ceph-de...@vger.kernel.org
> Subject: Re: [ceph-users] Write performance issue under rocksdb kvstore
>
> On Tue, 20 Oct 2015, Z Zhang wrote:
> > Thanks, Sage, for pointing out the PR and ceph branch. I will take a
> > closer look.
On Tue, 20 Oct 2015, Z Zhang wrote:
> Hi Guys,
>
> I am trying the latest ceph-9.1.0 with rocksdb 4.1 and ceph-9.0.3 with
> rocksdb 3.11 as the OSD backend
FW to ceph-users
Thanks.
Zhi Zhang (David)
From: zhangz.da...@outlook.com
To: ceph-de...@vger.kernel.org
Subject: Long tail latency due to journal aio io_submit taking a long time to
return
Date: Tue, 25 Aug 2015 18:46:34 +0800
Hi Ceph-devel,
On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang <zhangz.da...@outlook.com> wrote:
Date: Thu, 30 Jul 2015 11:37:37 +0300
Subject: Re: [ceph-users] which kernel version can help avoid kernel
client deadlock
From: idryo...@gmail.com
To: zhangz.da...@outlook.com
CC: chaofa...@owtware.com; ceph-users@lists.ceph.com
Date: Thu, 30 Jul 2015 13:11:11 +0300
Subject: Re: [ceph-users] which kernel version can help avoid kernel client
deadlock
From: idryo...@gmail.com
To: zhangz.da...@outlook.com
CC: chaofa...@owtware.com; ceph-users@lists.ceph.com
On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang <zhangz.da...@outlook.com> wrote:
We also hit a similar issue from time to time on CentOS with the 3.10.x
kernel. With iostat, we can see the kernel rbd client's util is 100% but there
is no r/w IO, and we can't umount/unmap this rbd device. After restarting the
OSDs, it becomes normal again.
@Ilya, could you please point us to the possible fixes on
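For anyone trying to reproduce the observation, a minimal sketch of checking a device's in-flight requests from sysfs (the device name is an example; DEV falls back to the first block device so the command runs anywhere, while on the affected host it would be the hung rbd device, e.g. rbd0):

```shell
# A device that iostat shows as 100% utilised with no completing r/w IO
# also reports requests stuck in flight in sysfs.
DEV=${DEV:-$(ls /sys/block | head -n 1)}
cat /sys/block/"$DEV"/inflight   # prints "<reads in flight> <writes in flight>"
```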
Hi Guys,
Reading through the Ceph client code, there is a timeout mechanism in tick()
when doing a mount. Recently we have seen some client requests to the MDS take
a long time to get a reply when running massive tests against CephFS. If we
want the CephFS user to see a timeout instead of waiting for the reply, can we
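For context, the existing knob in this area only bounds the initial mount, not individual MDS requests, which is exactly the gap being discussed. A sketch of the relevant ceph.conf client setting (value in seconds, default shown; verify the option name against your release):

```ini
[client]
client mount timeout = 300
```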
into smaller IO's
From: idryo...@gmail.com
To: zhangz.da...@outlook.com
CC: ceph-users@lists.ceph.com
On Fri, Jun 26, 2015 at 3:17 PM, Z Zhang <zhangz.da...@outlook.com> wrote:
Hi Ilya,
I am seeing your recent email talking about krbd splitting large IO's into
smaller IO's, see the link below.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg20587.html
I just tried it on my ceph cluster using kernel 3.10.0-1. I adjusted both
max_sectors_kb and max_hw_sectors_kb of
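The sysfs knobs being adjusted look like this (a sketch; the device name and the 4096 KB value are examples, and DEV falls back to the first block device so the read-only part runs anywhere):

```shell
# On the real host DEV would be the mapped krbd device, e.g. rbd0.
DEV=${DEV:-$(ls /sys/block | head -n 1)}

# Current hardware cap and current per-request cap, both in KB.
cat /sys/block/"$DEV"/queue/max_hw_sectors_kb
cat /sys/block/"$DEV"/queue/max_sectors_kb

# As root, raise the per-request cap (must stay <= max_hw_sectors_kb):
# echo 4096 > /sys/block/"$DEV"/queue/max_sectors_kb
```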