Having your journals on the same disk causes all data to be written twice,
i.e. once to the journal and once to the osd store. Notice that your tested
throughput is slightly more than half your expected maximum...
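
As a back-of-the-envelope illustration of that point (the raw per-disk rate
below is an assumed example figure, not a number from this thread):

# Journal and OSD store share one spindle, so every client write lands on the
# disk twice and usable data bandwidth is roughly half the raw sequential rate.
RAW_MB_S=110                # assumption: raw sequential write rate of one disk
echo $(( RAW_MB_S / 2 ))    # prints 55 -> ~55 MB/s left for object data per OSD
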
On Wed, Jan 1, 2014 at 11:32 PM, Dietmar Maurer diet...@proxmox.com wrote:
Hi
But AFAIK OSD bench already considers journal writes. The disk can write

-----Original Message-----
From: Stefan Priebe [mailto:s.pri...@profihost.ag]
Sent: Thursday, 02 January 2014 18:36
To: Dietmar Maurer; Dino Yancey
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] rados benchmark question
Hi,

On 02.01.2014 19:06, Dietmar Maurer wrote:
# iostat -x 5 (after about 30 seconds)
Device:  rrqm/s  wrqm/s   r/s     w/s  rkB/s     wkB/s  avgrq-sz  avgqu-sz   await  r_await  w_await  svctm  %util
sdb        0.00    3.80  0.00  187.40   0.00  84663.60    903.56    157.62  796.93     0.00   796.93   5.34  100.00

So your disks are completely utilized and can't keep up; see %util and await.

On 02.01.2014 19:16, Dietmar Maurer wrote:
But it says it writes at 80MB/s, so that would be about 40MB/s for data?
And 40*6=240 (not 190).

Did you miss the replication factor? I think it should be:
40MB/s*6/3 = 80MB/s

On 02.01.2014 19:38, Dietmar Maurer wrote:
My test pool uses size=1 (no replication)
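
As a quick sanity check of that arithmetic with the numbers quoted in this
thread (the halving of the per-OSD rate assumes the co-located journal):

PER_OSD_DATA_MB_S=40   # ~80 MB/s at the disk, halved by the journal
NUM_OSDS=6
echo $(( PER_OSD_DATA_MB_S * NUM_OSDS / 3 ))   # size=3 pool: 80 MB/s expected
echo $(( PER_OSD_DATA_MB_S * NUM_OSDS / 1 ))   # size=1 pool: 240 MB/s expected (vs. ~190 measured)
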
ok, out of ideas... ;-( sorry
What values do you get? (osd bench vs. rados benchmark with pool size=1)
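
For anyone wanting to reproduce that comparison, something along these lines
should work (pool name and pg count are just example values):

ceph tell osd.0 bench                  # per-OSD write test: 1 GB in 4 MB blocks (journal write included, as discussed above)
ceph osd pool create bench-test 128    # throwaway test pool with 128 pgs
ceph osd pool set bench-test size 1    # no replication, matching the test pool above
rados bench -p bench-test 60 write     # 60-second write benchmark with 4 MB objects
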
I have no idea ...

Hi all,
I run 3 nodes connected with a 10Gbit network, each running 2 OSDs.
Disks are 4TB Seagate Constellation ST4000NM0033-9ZM (xfs, journal on same
disk).
# ceph tell osd.0 bench
{ bytes_written: 1073741824,
blocksize: 4194304,
bytes_per_sec: 56494242.00}
So a single OSD can write ...
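
Doing the unit conversion on that output (plain arithmetic on the
bytes_per_sec value shown above):

echo $(( 56494242 / 1000000 ))   # prints 56 -> ~56 MB/s (about 54 MiB/s) per OSD
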