Hello,
In a nutshell, I can confirm the write amplification; see inline.
On Mon, 20 Oct 2014 10:43:51 -0500 Mark Nelson wrote:
On 10/20/2014 09:28 AM, Mark Wu wrote:
2014-10-20 21:04 GMT+08:00 Mark Nelson mark.nel...@inktank.com:
On 10/20/2014 06:27 AM, Mark Wu wrote:
Test result Update:

Number of Hosts | Maximum single volume IOPS | Maximum aggregated IOPS | SSD Disk IOPS | SSD Disk Utilization
7               | 14k                        | 45k                     | 9800+         | 90%
8               | 21k                        | 50k                     |               |
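
As a rough sanity check of the write amplification mentioned above, the figures in the table can be plugged into a back-of-the-envelope calculation. This is only a sketch: it reads the 9800+ figure as per-SSD device IOPS, assumes all 30 OSDs from the original post are busy, and assumes a replicated pool with a collocated filestore journal; none of those details are stated in the quoted snippets.

# Back-of-the-envelope check of the write amplification mentioned above.
# Assumptions (not stated in the quoted snippets): 9800+ is per-SSD IOPS,
# all OSDs are busy, writes are replicated, and the filestore journal sits
# on the same SSD as the data.
num_osds = 30            # from the original post; use 14 if "7 hosts" means 7 OSD hosts with 2 OSDs each
ssd_iops_per_osd = 9800  # per-device IOPS at ~90% utilisation (from the table)
client_iops = 45_000     # aggregated client IOPS at 7 hosts (from the table)

backend_iops = num_osds * ssd_iops_per_osd            # ~294k device IOPS
observed_amplification = backend_iops / client_iops   # ~6.5x

replicas = 3             # assumed pool size
journal_factor = 2       # filestore: journal write + data file write
expected_amplification = replicas * journal_factor    # 6x, before metadata overhead

print(f"observed ~{observed_amplification:.1f}x, expected at least {expected_amplification}x")

The point is only that several backend writes are issued per client write, so the SSDs reach ~90% utilisation well before the client-side numbers stop growing.
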
I assume you added more clients and checked that it didn't scale past
that?
Yes, correct.
You might look through the list archives; there are a number of
discussions about how and how far you can scale SSD-backed cluster
performance.
I have looked at those discussions before, particularly the one ...
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Performance doesn't scale well on a full ssd cluster.
Hi list,
During my test, I found Ceph doesn't scale as I expected on a 30-OSD cluster.
The following is the information of my setup:
HW configuration:
15 Dell R720 servers, and each server has:
Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 20 cores and hyper-threading
To: daniel.schwa...@dtnet.de
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 16 October 2014 19:19:17
Subject: Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.
Thanks for the detailed information, but I am already using fio with the rbd engine. With about 4 volumes I can already reach the peak.
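
For reference, a minimal sketch of driving several rbd volumes in parallel with fio's rbd engine, so the aggregate load is not capped by a single volume's queue depth, along the lines of what Mark describes. The pool, volume names and cephx client name are placeholders, not values taken from this thread.

import subprocess

# Minimal sketch: run one fio process per rbd volume so the benchmark is
# not limited by a single client/volume queue. Pool, volume names and
# client name are placeholders.
volumes = ["vol1", "vol2", "vol3", "vol4"]
jobs = []
for vol in volumes:
    cmd = [
        "fio",
        f"--name=randwrite-{vol}",
        "--ioengine=rbd",
        "--clientname=admin",   # cephx user, without the "client." prefix
        "--pool=rbd",
        f"--rbdname={vol}",
        "--rw=randwrite",
        "--bs=4k",
        "--iodepth=32",
        "--direct=1",
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ]
    jobs.append(subprocess.Popen(cmd))

for job in jobs:
    job.wait()

Each volume gets its own fio process, which roughly mirrors running one benchmark client per volume as described in the thread.
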
Forgot to cc the list.
-- Forwarded message --
From: Mark Wu wud...@gmail.com
Date: 17 Oct 2014, 12:51 AM
Subject: Re: [ceph-users] Performance doesn't scale well on a full ssd cluster.
To: Gregory Farnum g...@inktank.com
Cc:
Thanks for the reply. I am not using a single client. Writing to 5 rbd volumes
[Re-added the list.]
I assume you added more clients and checked that it didn't scale past
that? You might look through the list archives; there are a number of
discussions about how and how far you can scale SSD-backed cluster
performance.
Just scanning through the config options you set, you
Mark, please read this:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html
On 16 Oct 2014, at 19:19, Mark Wu wud...@gmail.com wrote:
Thanks for the detailed information, but I am already using fio with the rbd engine. With about 4 volumes I can already reach the peak.
Hello (Greg in particular),
On Thu, 16 Oct 2014 10:06:58 -0700 Gregory Farnum wrote:
[Re-added the list.]
I assume you added more clients and checked that it didn't scale past
that? You might look through the list archives; there are a number of
discussions about how and how far you can scale SSD-backed cluster
performance.