olc, I think you haven't posted this to the ceph-users list.
On 31/12/2015 15:39, olc wrote:
> Same model _and_ same firmware (`smartctl -i /dev/sdX | grep Firmware`)? As
> far as I've been told, this can make huge differences.
Good idea indeed. I have checked: the versions are the same. Finally,
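For the record, a quick way to compare model and firmware across all OSD data
disks at once is something like this (the /dev/sd[b-e] glob is only an example,
adapt it to your drives; SAS disks label these fields differently):

---
for dev in /dev/sd[b-e]; do
    echo "== $dev =="
    smartctl -i "$dev" | grep -E 'Device Model|Firmware Version'
done
---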
Hi,
On 31/12/2015 15:30, Robert LeBlanc wrote:
> Because Ceph is not perfectly distributed, there will be more PGs/objects on
> one drive than on others. That drive will become a bottleneck for the entire
> cluster. The current IO scheduler poses some challenges in this regard.
> I've implemented a
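A quick way to see how (un)evenly PGs and data actually land on the OSDs is the
per-OSD summary below (available on Hammer and later; a large spread in the %USE
and PGS columns means some drives will be hit much harder than others):

---
ceph osd df tree
---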
On Tue, Dec 29, 2015 at 5:20 PM, Francois Lafont wrote:
> Hi,
>
> On 28/12/2015 09:04, Yan, Zheng wrote:
>
>>> Ok, so on a client node, I have mounted cephfs (via ceph-fuse) and a rados
>>> block device formatted with XFS. If I have understood correctly, cephfs uses sync
>>> IO (not
Hi,
On 28/12/2015 09:04, Yan, Zheng wrote:
>> Ok, so on a client node, I have mounted cephfs (via ceph-fuse) and a rados
>> block device formatted with XFS. If I have understood correctly, cephfs uses sync
>> IO (not async IO) and, with ceph-fuse, cephfs can't do O_DIRECT IO. So, I
>> have tested
On Mon, Dec 28, 2015 at 1:24 PM, Francois Lafont wrote:
> Hi,
>
> Sorry for my late answer.
>
> On 23/12/2015 03:49, Yan, Zheng wrote:
>
fio tests AIO performance in this case. cephfs does not handle AIO
properly, AIO is actually SYNC IO. that's why cephfs is so slow
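To see this effect, one can run the same fio job on a file inside the cephfs
mount with the libaio engine and then with the sync engine; on ceph-fuse the
two results should be close, since the AIO requests are effectively serialised.
The path and sizes below are only an illustration, and O_DIRECT is left out
because ceph-fuse does not support it, as noted above:

---
# hypothetical test file on the cephfs mount
fio --name=aio-test  --filename=/mnt/cephfs/fio.bin --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio

# same workload, explicit sync engine
fio --name=sync-test --filename=/mnt/cephfs/fio.bin --size=1G \
    --rw=randwrite --bs=4k --ioengine=sync
---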
Hi,
Sorry for my late answer.
On 23/12/2015 03:49, Yan, Zheng wrote:
>>> fio tests AIO performance in this case. cephfs does not handle AIO
>>> properly, AIO is actually SYNC IO. that's why cephfs is so slow in
>>> this case.
>>
>> Ah ok, thanks for this very interesting information.
>>
>> So,
On Tue, Dec 22, 2015 at 9:29 PM, Francois Lafont wrote:
> Hello,
>
> On 21/12/2015 04:47, Yan, Zheng wrote:
>
>> fio tests AIO performance in this case. cephfs does not handle AIO
>> properly, AIO is actually SYNC IO. that's why cephfs is so slow in
>> this case.
>
> Ah ok,
On 21 December 2015 at 22:07, Yan, Zheng wrote:
>
> > OK, so I changed the fio engine to 'sync' for the comparison of a single
> > underlying osd vs the cephfs.
> >
> > The cephfs w/ sync is ~115 iops / ~500 KB/s.
>
> This is normal because you were doing single thread sync IO. If
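For what it's worth, with the sync engine the usual way to get more IO in
flight is to run several jobs in parallel; a sketch (directory and job count
are arbitrary):

---
# 16 parallel sync writers against the cephfs mount, aggregated into one report
fio --name=sync-parallel --directory=/mnt/cephfs --size=256M \
    --rw=randwrite --bs=4k --ioengine=sync --numjobs=16 --group_reporting
---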
Hello,
On 21/12/2015 04:47, Yan, Zheng wrote:
> fio tests AIO performance in this case. cephfs does not handle AIO
> properly, AIO is actually SYNC IO. that's why cephfs is so slow in
> this case.
Ah ok, thanks for this very interesting information.
So, in fact, the question I ask myself is:
On Tue, Dec 22, 2015 at 7:18 PM, Don Waterloo wrote:
> On 21 December 2015 at 22:07, Yan, Zheng wrote:
>>
>>
>> > OK, so I changed the fio engine to 'sync' for the comparison of a single
>> > underlying osd vs the cephfs.
>> >
>> > The cephfs w/ sync is ~
On Mon, Dec 21, 2015 at 11:46 PM, Don Waterloo wrote:
> On 20 December 2015 at 22:47, Yan, Zheng wrote:
>>
>> >> ---
>> >>
>>
>>
>> fio tests AIO performance in this case. cephfs does not
On 20 December 2015 at 22:47, Yan, Zheng wrote:
> >> ---
> >>
>
>
> fio tests AIO performance in this case. cephfs does not handle AIO
> properly, AIO is actually SYNC IO. that's why cephfs is so slow in
> this case.
On 20/12/2015 21:06, Francois Lafont wrote:
> Ok. Please, can you give us your configuration?
> How many nodes, osds, ceph version, disks (SSD or not, HBA/controller), RAM,
> CPU, network (1Gb/10Gb) etc.?
And let me add this: with ceph-fuse, did you have any specific configuration on
the client side?
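In case it helps: if the ceph-fuse process has an admin socket (it may need
"admin socket = /var/run/ceph/$cluster-$name.$pid.asok" in the [client] section),
you can dump its running configuration and look for non-default client settings.
The socket name below is just an example:

---
ls /var/run/ceph/
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show \
    | grep -Ei 'client|fuse'
---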
On 20 December 2015 at 19:23, Francois Lafont wrote:
> On 20/12/2015 22:51, Don Waterloo wrote:
>
> > All nodes have 10Gbps to each other
>
> Even the link client node <---> cluster nodes?
>
> > OSD:
> > $ ceph osd tree
> > ID WEIGHT TYPE NAME UP/DOWN REWEIGHT
On Fri, Dec 18, 2015 at 11:16 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 18 Dec 2015 03:36:12 +0100 Francois Lafont wrote:
>
>> Hi,
>>
>> I have a ceph cluster that is currently unused and I have (to my mind) very low
>> performance. I'm not an expert in benchmarks; here is an example of
On 20/12/2015 22:51, Don Waterloo wrote:
> All nodes have 10Gbps to each other
Even the link client node <---> cluster nodes?
> OSD:
> $ ceph osd tree
> ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 5.48996 root default
> -2 0.8 host nubo-1
> 0 0.8
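Regarding the 10Gbps question above: an easy way to check the client <--->
cluster link specifically is an iperf3 run between the client and one of the
OSD hosts (the hostname below is taken from the osd tree, adjust as needed):

---
# on an OSD node
iperf3 -s
# on the client node
iperf3 -c nubo-1
---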
On 20 December 2015 at 08:35, Francois Lafont wrote:
> Hello,
>
> On 18/12/2015 23:26, Don Waterloo wrote:
>
> > rbd -p mypool create speed-test-image --size 1000
> > rbd -p mypool bench-write speed-test-image
> >
> > I get
> >
> > bench-write io_size 4096 io_threads 16
Hello,
On 18/12/2015 23:26, Don Waterloo wrote:
> rbd -p mypool create speed-test-image --size 1000
> rbd -p mypool bench-write speed-test-image
>
> I get
>
> bench-write io_size 4096 io_threads 16 bytes 1073741824 pattern seq
> SEC OPS OPS/SEC BYTES/SEC
> 1 79053
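For reference, bench-write also accepts the IO size, thread count, total bytes
and pattern explicitly, which makes runs easier to compare; a sketch using the
pool and image names from above (the values match the defaults shown in the
output header):

---
rbd -p mypool bench-write speed-test-image \
    --io-size 4096 --io-threads 16 --io-total 1073741824 --io-pattern seq
---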
Hi,
On 20/12/2015 19:47, Don Waterloo wrote:
> I did a bit more work on this.
>
> On cephfs via ceph-fuse, I get ~700 iops.
> On the cephfs kernel client, I get ~120 iops.
> These were both on the 4.3 kernel.
>
> So I backed up to the 3.16 kernel on the client, and observed the same results.
>
> So ~20K iops w/ rbd,
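For anyone trying to reproduce the fuse vs kernel comparison, the two mounts
look roughly like this (monitor address and secret file are placeholders):

---
# FUSE client
ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs-fuse

# kernel client
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs-kernel \
    -o name=admin,secretfile=/etc/ceph/admin.secret
---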
Hi Christian,
On 18/12/2015 04:16, Christian Balzer wrote:
>> That seems very bad to me.
> Indeed.
> Firstly, let me state that I don't use CephFS and have no clue how this
> influences things or how it can/should be tuned.
Ok, no problem. Anyway, thanks for your answer. ;)
> That being said, the
On 17 December 2015 at 21:36, Francois Lafont wrote:
> Hi,
>
> I have a ceph cluster that is currently unused and I have (to my mind) very low
> performance.
> I'm not an expert in benchmarks; here is an example of a quick bench:
>
>
On 18 December 2015 at 15:48, Don Waterloo wrote:
>
>
> On 17 December 2015 at 21:36, Francois Lafont wrote:
>
>> Hi,
>>
>> I have a ceph cluster that is currently unused and I have (to my mind) very low
>> performance.
>> I'm not an expert in benchmarks; here is an
Hello,
On Fri, 18 Dec 2015 03:36:12 +0100 Francois Lafont wrote:
> Hi,
>
> I have a ceph cluster that is currently unused and I have (to my mind) very low
> performance. I'm not an expert in benchmarks; here is an example of a quick
> bench:
>
> ---
>
Hi,
I have a ceph cluster that is currently unused and I have (to my mind) very low
performance.
I'm not an expert in benchmarks; here is an example of a quick bench:
---
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
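For completeness, a full quick random-IO bench of this kind typically looks
like the following (file name, size and read/write mix are illustrative, not
necessarily the exact options used above):

---
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=quickbench --filename=/mnt/cephfs/test.bin --bs=4k \
    --iodepth=64 --size=512M --readwrite=randrw --rwmixread=75
---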