Cc: ceph-users@lists.ceph.com
Sent: Friday, 19 June, 2015 5:08:31 PM
Subject: Re: [ceph-users] rbd performance issue - can't find bottleneck
On 06/19/2015 10:29 AM, Andrei Mikhailovsky wrote:
Mark,
Thanks, I do understand that there is a risk of data loss by doing this.
Having said this, ceph is d
All - I have been following this thread for a bit, and am happy to see how
involved, capable, and collaborative this ceph-users community is. There
appears to be a fairly strong amount of domain knowledge around the
hardware used by many Ceph deployments, with a lot of "thumbs
up" a
no one
will give you support for this kind of use case if you have problems.
Mark
Cheers
Andrei
- Original Message -
From: "Alexandre DERUMIER"
To: "Jacek Jarosiewicz"
Cc: "ceph-users"
Sent: Thursday, 18 June, 2015 11:54:42 AM
Subject: Re: [ceph-users] rbd performance issue - can't find bottleneck
Sent: Friday, 19 June, 2015 3:59:55 PM
> Subject: Re: [ceph-users] rbd performance issue - can't find bottleneck
>
>
>
> On 06/19/2015 09:54 AM, Andrei Mikhailovsky wrote:
> > Hi guys,
> >
> > I also use a combination of intel 520 and 530 for my journals and have
> >
Andrei
- Original Message -
From: "Alexandre DERUMIER"
To: "Jacek Jarosiewicz"
Cc: "ceph-users"
Sent: Thursday, 18 June, 2015 11:54:42 AM
Subject: Re: [ceph-users] rbd performance issue - can't find bottleneck
Hi,
for read benchmark
with fio, what is the iodepth ?
Hi,
On 06/18/2015 12:54 PM, Alexandre DERUMIER wrote:
Hi,
for read benchmark
with fio, what is the iodepth ?
my fio 4k randread results with
iodepth=1 : bw=6795.1KB/s, iops=1698
iodepth=2 : bw=14608KB/s, iops=3652
iodepth=4 : bw=32686KB/s, iops=8171
iodepth=8 : bw=76175KB/s, iops=19043
iodepth=
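As a quick sanity check on the numbers above: at a 4k block size, bandwidth and IOPS are tied together (bw in KB/s is roughly iops × 4), and the scaling per iodepth step is close to linear. A minimal sketch using only the figures quoted above:

```python
# fio 4k randread figures quoted above: iodepth -> (bw KB/s, iops)
results = {
    1: (6795.1, 1698),
    2: (14608.0, 3652),
    4: (32686.0, 8171),
    8: (76175.0, 19043),
}

for depth, (bw_kb, iops) in sorted(results.items()):
    # at bs=4k, bandwidth should be roughly iops * 4 KB/s
    assert abs(bw_kb - iops * 4) / (iops * 4) < 0.01
    print(f"iodepth={depth}: iops={iops}, ~{iops / 1698:.1f}x the iodepth=1 rate")
```

Each doubling of iodepth roughly doubles the IOPS here, which is what you expect when per-operation latency, not device throughput, is the limit.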
On 06/18/2015 12:23 PM, Mark Nelson wrote:
so.. in order to increase performance, do I need to change the ssd
drives?
I'm just guessing, but because your read performance is slow as well,
you may have multiple issues going on. The Intel 530 being slow at O_DSYNC
writes is one of them, but it's poss
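The usual way to check whether an SSD copes with journal-style O_DSYNC writes is a single-threaded direct sync write with fio. A sketch of such a job file; /dev/sdX is a placeholder for the device under test, and note the job writes to it destructively:

```ini
# fio job: approximate a Ceph journal workload
# (sequential, direct, O_SYNC 4k writes at queue depth 1)
[global]
ioengine=sync
direct=1
sync=1
bs=4k
numjobs=1
iodepth=1
runtime=60
time_based

[journal-write-test]
filename=/dev/sdX
rw=write
```

This is the pattern on which consumer drives like the 530 reportedly slow to a crawl, even though their plain sequential write numbers look fine.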
Cc: "ceph-users"
Sent: Thursday, 18 June 2015 11:49:11
Subject: Re: [ceph-users] rbd performance issue - can't find bottleneck
On 06/18/2015 04:49 AM, Jacek Jarosiewicz wrote:
On 06/17/2015 04:19 PM, Mark Nelson wrote:
SSD's are INTEL SSDSC2BW240A4
Ah, if I'm not mistaken that's the Intel 530 right? You'll want to see
this thread by Stefan Priebe:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg05667.html
In fact it was the difference in Intel 520 and In
On Wed, 17 Jun 2015 16:03:17 +0200 Jacek Jarosiewicz wrote:
> On 06/17/2015 03:34 PM, Mark Nelson wrote:
> > On 06/17/2015 04:10 AM, Jacek Jarosiewicz wrote:
> >> Hi,
> >>
>
> [ cut ]
>
> >>
> >> ~60MB/s seq writes
> >> ~100MB/s seq reads
> >> ~2-3k iops random reads
> >
> > Is this per SSD or aggregate?
On 06/17/2015 03:38 PM, Alexandre DERUMIER wrote:
Hi,
can you post your ceph.conf ?
sure:
[global]
fsid = e96fdc70-4f9c-4c12-aae8-63dd7c64c876
mon initial members = cf01,cf02
mon host = 10.4.10.211,10.4.10.212
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
On 06/17/2015 03:34 PM, Mark Nelson wrote:
On 06/17/2015 04:10 AM, Jacek Jarosiewicz wrote:
Hi,
[ cut ]
~60MB/s seq writes
~100MB/s seq reads
~2-3k iops random reads
Is this per SSD or aggregate?
aggregate (if I understand you correctly). This is what I see when I run
tests on the client
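For rough expectations (a back-of-the-envelope model of my own, not something measured in this thread): with filestore, every client byte is journaled once per replica before being acknowledged, so if the SSD journals are the bottleneck, aggregate client write bandwidth is bounded by about n_osds × per-journal sync-write bandwidth / replicas. A sketch, with illustrative numbers only:

```python
def est_client_write_bw(n_osds, journal_bw_mb, replicas=2):
    """Crude filestore upper bound on aggregate client write MB/s.

    Assumes the SSD journals are the bottleneck: every client byte
    is journaled once per replica, so the journals together must
    absorb replicas * client_bw of O_DSYNC writes.
    """
    return n_osds * journal_bw_mb / replicas

# Illustrative: 10 OSDs, journals sustaining ~50 MB/s of sync
# writes each, 2x replication.
print(est_client_write_bw(10, 50))  # -> 250.0
```

If a consumer SSD only sustains a few MB/s of O_DSYNC writes, this bound drops to tens of MB/s for the whole cluster, which is consistent with the ~60 MB/s aggregate figure being discussed.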
Sent: Wednesday, 17 June 2015 11:10:26
Subject: [ceph-users] rbd performance issue - can't find bottleneck
Hi,
We've been doing some testing of ceph hammer (0.94.2), but the
performance is very slow and we can't find what's causing the problem.
Initially we've started with four nodes with 10 osd's total.
The drives we've used were SATA enterprise drives and on top of that
we've used SSD drives as