than 20K IOPS with more nodes.
But clearly, the CPU is the limit.
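One quick way to back up a "CPU is the limit" claim is to sample aggregate CPU busy time from /proc/stat while the benchmark runs. A rough sketch in plain POSIX shell (per-host rather than per-OSD-process, which is an assumption about how you'd want to measure):

```shell
#!/bin/sh
# Sample /proc/stat twice and print the percentage of non-idle CPU time
# over the interval. Values near 100 during a fio run suggest the OSD
# hosts are CPU-bound rather than disk-bound.
read_cpu() {
    # busy = user+nice+system+irq+softirq+steal; total = busy+idle+iowait
    awk '/^cpu /{print $2+$3+$4+$7+$8+$9, $2+$3+$4+$5+$6+$7+$8+$9}' /proc/stat
}

set -- $(read_cpu); busy1=$1; total1=$2
sleep 1
set -- $(read_cpu); busy2=$1; total2=$2

echo "CPU busy: $(( (busy2 - busy1) * 100 / (total2 - total1) ))%"
```

Running this alongside fio on each OSD host shows whether adding more nodes (rather than faster disks) is the right lever.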
- Original message -
From: Christian Balzer ch...@gol.com
To: ceph-users@lists.ceph.com
Sent: Thursday, 25 September 2014 06:50:31
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS
What about writes with Giant?
On 18 Sep 2014, at 08:12, Zhang, Jian jian.zh...@intel.com wrote:
Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
We are able to get ~18K IOPS for 4K random read
- Original message -
From: Alexandre DERUMIER aderum...@odiso.com
To: Jian Zhang jian.zh...@intel.com
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 18 September 2014 15:36:48
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Has anyone ever tested multi-volume performance on a *FULL* SSD setup?

I know that Stefan Priebe runs full SSD clusters.
- Original message -
From: Jian Zhang jian.zh...@intel.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com
Sent: Friday, 19 September 2014 10:21:38
Subject: RE: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Thanks for this great information.
We are using
Hi,
Thanks for keeping us updated on this subject.
dsync is definitely killing the SSD.
I don't have much to add; I'm just surprised that you're only getting 5299 with 0.85, since I've been able to get
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson
Sent: Thursday, September 18, 2014 11:06 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Couple of questions: Are those client IOPS
On 17/09/14 08:39, Alexandre DERUMIER wrote:
Hi,
I'm just surprised that you're only getting 5299 with 0.85, since I've been able to get 6,4K; well, I was using the 200GB model.
Your model is the DC S3700; mine is the DC S3500, with lower rated writes, so that could explain the difference.
Interesting -
From: Alexandre DERUMIER aderum...@odiso.com
To: Cedric Lemarchand ced...@yipikai.org
Cc: ceph-users@lists.ceph.com
Sent: Friday, 12 September 2014 07:58:05
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

For the Crucial, I'll try to apply the patch from Stefan Priebe to ignore flushes.
Sent: Friday, 12 September 2014 08:15:08
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Results of fio on RBD with the kernel patch.
fio, RBD, Crucial M550, 1 OSD, 0.85 (osd_enable_op_tracker true or false, same result):
---
bw=12327KB/s, iops=3081
So
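As a sanity check on those numbers: at a 4K block size, IOPS is just bandwidth divided by the block size, so the two figures above are mutually consistent:

```shell
# iops = bw (KB/s) / block size (KB): 12327 KB/s at 4 KB per IO
echo $((12327 / 4))   # prints 3081, matching the fio-reported iops
```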
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sebastien Han
Sent: Thursday, August 28, 2014 12:12 PM
To: ceph-users
Cc: Mark Nelson
Subject: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Hey all,
It has been a while since the last performance-related thread on the ML :p
I've been running some experiments to see how much I can get from an SSD on a Ceph cluster.
-November/035707.html
- Original message -
From: Cedric Lemarchand ced...@yipikai.org
To: ceph-users@lists.ceph.com
Sent: Thursday, 11 September 2014 21:23:23
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

On 11/09/2014 19:33, Cedric Lemarchand wrote:
Hi,
It seems that the Intel S3500 performs a lot better with O_DSYNC.
Crucial M550:
#fio --filename=/dev/sdb --direct=1 --rw=write --bs=4k --numjobs=2 --group_reporting --invalidate=0 --name=ab --sync=1
bw
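For a quick check of synchronous 4K write behaviour without fio, dd with oflag=dsync exercises the same O_DSYNC path the Ceph journal uses. This is a sketch: it writes to a temp file rather than the raw device, so for a real drive comparison you would point of= at the SSD (destructive!) or at a file on a filesystem on it:

```shell
# Each 4K write waits for the device to acknowledge it (O_DSYNC),
# which is roughly what the OSD journal does. A drive that is slow
# here (like the Crucial M550 vs the Intel S3500) will journal poorly.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=4k count=1000 oflag=dsync 2>&1 | tail -n 1
rm -f "$tmpfile"
```

The last line of dd's output gives the effective throughput; divide by 4 KB to compare against the fio IOPS figures in this thread.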
--
Warren Wang
Comcast Cloud (OpenStack)

From: Cedric Lemarchand ced...@yipikai.org
Date: Wednesday, September 3, 2014 at 5:14 PM
To: ceph-users@lists.ceph.com ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

On 03/09
- Original message -
From: Sebastien Han sebastien@enovance.com
To: Somnath Roy somnath@sandisk.com
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 2 September 2014 02:19:16
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Mark and all, Ceph IOPS performance has definitely improved with Giant.
With this version: ceph version 0.84-940-g3215c52
On 02/09/14 19:38, Alexandre DERUMIER wrote:
Hi Sebastien,
I got 6340 IOPS on a single OSD SSD (journal and data on the same partition).
Wouldn't it be better to have 2 partitions, 1 for the journal and 1 for the data?
(I'm thinking about filesystem write syncs)
Oddly enough, it does not seem
Sent: Tuesday, 2 September 2014 13:59:13
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

@Dan, oops, my bad, I forgot to use these settings; I'll try again and see how much I can get on the read performance side.
@Mark, thanks again, and yes, I believe that due to some
From: Sebastien Han sebastien@enovance.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-users@lists.ceph.com, Cédric Lemarchand c.lemarch...@yipikai.org
Sent: Tuesday, 2 September 2014 15:25:05
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Well, the last time I ran two processes in parallel, I got half of the total amount available, so 1,7K per client.
On 02 Sep 2014, at 15:19
As I said, 107K with IOs served from memory, not hitting the disk.

From: Jian Zhang [mailto:amberzhan...@gmail.com]
Sent: Sunday, August 31, 2014 8:54 PM
To: Somnath Roy
Cc: Haomai Wang; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS
Somnath,
On the small-workload performance: 107K is higher than the theoretical IOPS of the 520, any idea why?
A single client is ~14K.
[mailto:and...@xdel.ru]
Sent: Thursday, August 28, 2014 12:57 PM
To: Somnath Roy
Cc: David Moreau Simard; Mark Nelson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

On Thu, Aug 28, 2014 at 10:48 PM, Somnath
On 01/09/14 12:36, Mark Kirkwood wrote:
Allegedly this model of SSD (128G M550) can do 75K 4K random write IOPS (running fio on the filesystem I've seen 70K IOPS, so that is reasonably believable). So anyway, we are not getting anywhere near the max IOPS from our devices.
We use the Intel S3700 for
- Original message -
From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
To: Sebastien Han sebastien@enovance.com, ceph-users ceph-users@lists.ceph.com
Sent: Monday, 1 September 2014 02:36:45
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

On 31/08/14 17:55, Mark Kirkwood wrote:
On 29/08/14 22:17, Sebastien Han wrote:
On 01/09/14 17:10, Alexandre DERUMIER wrote (quoting the M550 figures above):
Hi,
Just check this:
On 29/08/14 22:17, Sebastien Han wrote:
@Mark, thanks for trying this :)
Unfortunately, using nobarrier and another dedicated SSD for the journal (plus your Ceph settings) didn't bring much; now I can reach 3,5K IOPS.
By any chance, would it be possible for you to test with a single OSD SSD?
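When testing nobarrier it is worth confirming the option actually took effect; a small sketch that lists the mount options for any xfs/ext4 filesystems (which filesystems carry OSD data is an assumption here):

```shell
# Print mountpoint, fs type, and options for xfs/ext4 mounts;
# "nobarrier" should appear in the options column if the remount worked.
awk '$3 == "xfs" || $3 == "ext4" {print $2, $3, $4}' /proc/mounts
```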
To: Somnath Roy
Cc: Andrey Korolyov; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Hi Roy,
I already scanned your merged code for the fdcache and lfn_find/lfn_open optimizations; could you give some performance improvement data about it?
I
On Fri, Aug 29, 2014 at 4:03 PM, Andrey Korolyov and...@xdel.ru wrote:
On Fri, Aug 29, 2014 at 10:37 AM, Somnath Roy somnath@sandisk.com wrote:
Thanks Haomai!
Here is some of the data from my setup.
Thanks a lot for the answers, even if we drifted from the main subject a little bit.
Thanks Somnath for sharing this; when can we expect any code that might improve _write_ performance?
Hi Sébastien,
On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote:
Hey all,
(...)
We have been able to reproduce this on 3 distinct platforms, with some deviations (because of the hardware), but the behaviour is the same.
Any thoughts will be highly appreciated; only getting 3,2K out
On 08/29/2014 06:10 AM, Dan Van Der Ster wrote:
Hi Sebastien,
Here's my recipe for max IOPS on a _testing_ instance with SSDs:
osd op threads = 2
With SSDs, in the past I've seen that increasing the osd op thread count can help random reads.
osd disk threads = 2
journal max write
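For reference, Dan's settings go in the [osd] section of ceph.conf. The message is cut off after "journal max write", so the journal lines below show the usual pair of knobs with illustrative placeholder values, not Dan's actual numbers:

```ini
[osd]
osd op threads = 2
osd disk threads = 2
# The original message truncates here; these two options are the usual
# "journal max write" knobs, shown with placeholder values only.
journal max write bytes = 10485760
journal max write entries = 1000
```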
@Dan: thanks for sharing your config; with all your flags I don't seem to get more than 3,4K IOPS, and they even seem to slow me down :( This is really weird.
Yes, I already tried to run two simultaneous processes, and each of them only got half of the 3,4K.
@Kasper: thanks for these results, I believe
Excellent, I've been meaning to check into how the TCP transport is
going. Are you using a hybrid threadpool/epoll approach? That I
suspect would be very effective at reducing context switching,
especially compared to what we do now.
Mark
On 08/28/2014 10:40 PM, Matt W. Benjamin wrote:
Hi Mark,
Yeah. The application defines portals which are active threaded, then the
transport layer is servicing the portals with EPOLL.
Matt
Hi Somnath,
We're in the process of evaluating SanDisk SSDs for Ceph (FS and journal on each):
8 OSDs / SSDs per host, Xeon E3 1650.
Which one can you recommend?
Greets,
Stefan
Excuse my typos, sent from my mobile phone.
On 29.08.2014 at 18:33, Somnath Roy somnath@sandisk.com wrote:
Subject: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

Hey all,
It has been a while since the last performance-related thread on the ML :p I've been running some experiments to see how much I can get from an SSD on a Ceph cluster.
To achieve that I did something pretty simple:
* Debian wheezy 7.6
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3,2K IOPS

On 08/28/2014 12:39 PM, Somnath Roy wrote:
Hi Sebastian,
If you are trying with the latest Ceph master, there are some changes we made that will be increasing your read performance from SSD by a factor of ~5X if the ios
On Thu, Aug 28, 2014 at 10:48 PM, Somnath Roy somnath@sandisk.com wrote:
Nope, this will not be backported to Firefly, I guess.
Thanks & Regards,
Somnath

Thanks for sharing this; the first thing that came to mind when I looked at this thread was your patches :)
If Giant will incorporate them,
On 29/08/14 04:11, Sebastien Han wrote:
Hey all,
See my fio template:

[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
time_based
runtime=60
ioengine=rbd
clientname=admin
pool=test
rbdname=fio
invalidate=0    # mandatory
#rw=randwrite
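To run that template, save it as a job file and invoke fio with it. The sketch below writes the file out; the [randwrite-4k] job section (rw/bs/iodepth values) is my addition for illustration, since the template leaves the workload commented out, and actually running it requires fio built with RBD support plus a pool 'test' containing an image 'fio':

```shell
# Write the fio job file; the job section at the bottom is illustrative.
cat > ssd-test.fio <<'EOF'
[global]
time_based
runtime=60
ioengine=rbd
clientname=admin
pool=test
rbdname=fio
invalidate=0

[randwrite-4k]
rw=randwrite
bs=4k
iodepth=32
EOF

# Invocation (needs fio with RBD support and a reachable cluster):
# fio ssd-test.fio
grep -c '=' ssd-test.fio   # quick sanity check: prints 9 key=value settings
```

The rbd ioengine talks to the cluster through librbd directly, so no kernel mapping of the image is needed.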
On 29/08/14 14:06, Mark Kirkwood wrote:
... mounting (xfs) with nobarrier seems to get much better results. The run below is for a single OSD on an xfs partition from an Intel 520. I'm using another 520 as a journal:
...and adding
filestore_queue_max_ops = 2
improved IOPS a bit more:
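That tunable also lives in the [osd] section; note the value quoted above is almost certainly truncated in the archive (a cap of 2 ops would throttle the OSD to near-uselessness), so the snippet below shows the syntax with a placeholder value, not a recommendation:

```ini
[osd]
# Cap on the number of ops queued ahead of the filestore; the value in
# the quoted message is truncated, so this number is a placeholder.
filestore queue max ops = 500
```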
Hi,
There's also an early-stage TCP transport implementation for Accelio, also
EPOLL-based. (We haven't attempted to run Ceph protocols over it yet, to my
knowledge, but it should be straightforward.)
Regards,
Matt
- Haomai Wang haomaiw...@gmail.com wrote:
Hi Roy,
As for