Hi,
Correct, but that's the theoretical maximum I was referring to. If I
calculate that I should be able to get 50MB/second then 30MB/second is
acceptable but 500KB/second is not :)
I have written a small benchmark for RBD:
https://gist.github.com/smunaut/5433222
It uses the librbd API
On 04/22/2013 12:32 AM, James Harper wrote:
On 04/19/2013 08:30 PM, James Harper wrote:
rados -p pool -b 4096 bench 300 seq -t 64
sec Cur ops started finished avg MB/s cur MB/s last lat avg lat
0 0 0 0 0 0 - 0
read got -2
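For what it's worth, "read got -2" is ENOENT: the seq bench found no benchmark objects to read, which usually means no write bench was run first (or its objects were already cleaned up). A rough sequence that should work, assuming the same pool name as above and that your rados version leaves the write objects in place (newer versions need --no-cleanup for that):

rados -p pool bench 60 write -b 4096 -t 64 --no-cleanup
rados -p pool bench 60 seq -t 64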
On 04/22/2013 06:34 AM, James Harper wrote:
My read speed is consistently around 40MB/second, and my write speed is
consistently around 22MB/second. I had expected better of read...
You may want to try increasing your read_ahead_kb on the OSD data disks
and see if that helps read speeds.
Default appears to be 128 and I was
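In case it helps anyone following along, read_ahead_kb is a per-device sysfs setting. A quick sketch of checking and bumping it, with sdb standing in for whichever disk holds the OSD data (the change is not persistent across reboots):

cat /sys/block/sdb/queue/read_ahead_kb          # default is usually 128
echo 4096 > /sys/block/sdb/queue/read_ahead_kb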
On 04/22/2013 06:48 AM, James Harper wrote:
On 04/22/2013 07:01 AM, Mark Nelson wrote:
Jumping into this thread late, so I'm not sure if this was covered, but:
Remember that readahead on the OSDs will only help up to the size of the
object (4MB). To get good read
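For reference, that 4MB figure is the default RBD object size. You can confirm it for a given image (image name below is just a placeholder) and look for the order / object size line in the output, where order 22 corresponds to 4MB objects:

rbd info yourimage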
Hi,
Unless Sylvain implemented this in his tool
explicitly, it won't happen there either.
The small bench tool submits requests using the asynchronous API as
fast as possible, using a 1M chunk.
Then it just waits for all the completions to be done.
Sylvain
On 04/19/2013 08:30 PM, James Harper wrote:
rados -p pool -b 4096 bench 300 seq -t 64
sec Cur ops started finished avg MB/s cur MB/s last lat avg lat
0 0 0 0 0 0 - 0
read got -2
error during benchmark: -5
error 5: (5) Input/output error
Hi,
My goal is 4 OSDs, each on a separate machine, with 1 drive in each for a
start, but I want to see performance of at least the same order of magnitude
as the theoretical maximum on my hardware before I think about replacing my
existing setup.
My current understanding is that it's not
Hi James,
do you have VLAN interfaces configured on your bonding interfaces? Because
I saw a similar situation in my setup.
Kind Regards
Harald Roessler
On Fri, 2013-04-19 at 01:11 +0200, James Harper wrote:
No VLANs on my bonding interface, although extensively used elsewhere.
Thanks
James
James Harper wrote:
No VLANs on my bonding interface, although extensively used elsewhere.
What the OP described is *exactly* like a problem I've been struggling
with.
Where should I start looking for performance problems? I've tried running
some of the benchmark stuff in the documentation but I haven't gotten very
far...
Hi James! Sorry to hear about the performance trouble! Is it just
sequential 4KB direct IO writes that are giving you troubles?
I did an strace -c to gather some performance info, if that helps:
Oops. Forgot to say that that's an strace -c of the osd process!
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.13   39.589549
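If anyone wants to gather the same per-syscall summary from a running osd, a minimal way (pid placeholder; get it from pidof ceph-osd) is:

strace -c -f -p <osd-pid>     # leave it attached during the benchmark, then Ctrl-C to print the summary table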
I just tried a 3.8 series kernel and can now get 25mbytes/second using dd with
a 4mb block size, instead of the 700kbytes/second I was getting with the debian
3.2 kernel.
I'm still getting 120kbytes/second with a dd 4kb block size though... is that
expected?
James
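For comparison, the kind of dd invocations being discussed would look roughly like the following. The target path is a placeholder (a mapped kernel rbd device here; adjust of= if you are writing to a file inside a guest instead), oflag=direct matches the direct IO case mentioned earlier, and note this overwrites whatever it points at:

dd if=/dev/zero of=/dev/rbd0 bs=4M count=256 oflag=direct
dd if=/dev/zero of=/dev/rbd0 bs=4k count=25600 oflag=direct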
On 04/19/2013 06:09 AM, James Harper wrote:
I just tried a 3.8 series kernel and can now get 25mbytes/second using dd with
a 4mb block size, instead of the 700kbytes/second I was getting with the debian
3.2 kernel.
That's unexpected. Was this the kernel on the client, the OSDs, or
I'm doing some basic testing so I'm not really fussed about poor performance,
but my write performance appears to be so bad I think I'm doing something wrong.
Using dd to test gives me kbytes/second for write performance for 4kb block
sizes, while read performance is acceptable (for testing at
Hi James,
This is just pure speculation, but can you confirm that the bonding works
correctly? Maybe you have issues there. I have seen a lot of incorrectly
configured bonding in my years as a Unix admin.
Maybe this could help you a little:
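A couple of quick bonding sanity checks along those lines (bond0 and eth0 are placeholders for your actual interfaces):

cat /proc/net/bonding/bond0                 # bonding mode, MII status, active slaves
ethtool eth0 | grep -i -E 'speed|duplex'    # confirm each slave negotiated the expected speed/duplex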
http://xdel.ru/downloads/ceph-logs-dbg/
On Fri, Mar 23, 2012 at 9:53 PM, Samuel Just sam.j...@dreamhost.com wrote:
(CCing the list)
Actually, could you re-do the rados bench run with 'debug journal
= 20' along with the other debugging? That should give us better
information.
-Sam
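If it helps, a minimal ceph.conf snippet matching that request would be something like the following, set on the OSD hosts, followed by an osd restart and the same rados bench run (logs typically end up under /var/log/ceph/):

[osd]
    debug osd = 20
    debug filestore = 20
    debug journal = 20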
On Fri, Mar 23, 2012 at 5:25 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi Sam,
Can you please suggest on where to start
Our journal writes are actually sequential. Could you send FIO
results for sequential 4k writes to osd.0's journal and osd.1's journal?
-Sam
On Thu, Mar 22, 2012 at 5:21 AM, Andrey Korolyov and...@xdel.ru wrote:
FIO output for the journal partition, directio enabled, seems good (same
results for ext4
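For anyone wanting to run the same check, a rough fio invocation for sequential 4k direct writes against a journal partition might look like this. The device path is a placeholder, and the run is destructive, so only point it at a partition you can scribble on:

fio --name=journal-seq-4k --filename=/dev/sdX2 --rw=write --bs=4k --direct=1 --ioengine=libaio --iodepth=1 --runtime=60 --time_based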
(CCing the list)
So, the problem isn't the bandwidth. Before we respond to the client,
we write the operation to the journal. In this case, that operation
is taking 1s per operation on osd.1. Both rbd and rados bench will
only allow a limited number of ops in flight at a time, so this
latency
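Rough arithmetic for why that latency caps the bandwidth: assuming rados bench's default of 16 concurrent ops and 4MB writes, throughput is roughly (16 x 4 MB) / 1 s, about 64 MB/s, which lines up with the ~0.99 s average latency and 64.6 MB/s reported in the run quoted below.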
rados bench 60 write -p data
skip
Total time run:         61.217676
Total writes made:      989
Write size:             4194304
Bandwidth (MB/sec):     64.622
Average Latency:        0.989608
Max latency:            2.21701
Min latency:            0.255315
Here's a snip from the osd log; it seems the write size is
Can you set osd and filestore debugging to 20, restart the osds, run
rados bench as before, and post the logs?
-Sam Just
It sounds like maybe you're using Xen? The rbd writeback window option only
works for userspace rbd implementations (e.g., KVM).
If you are using KVM, you probably want 81920000 (~80MB) rather than 8192000
(~8MB).
What options are you running dd with? If you run a rados bench from both
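For reference, the rbd writeback window was a plain librbd config option in that era, so something along these lines in ceph.conf on the KVM host should be picked up by the userspace client (value in bytes; treat this as a sketch rather than a tested config):

[client]
    rbd writeback window = 81920000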
Nope, I'm using KVM for the rbd guests. I did notice that Sage mentioned the
value was too small, and I changed it to 64M before posting my previous
message, with no success - both 8M and that value cause a performance drop.
When I tried to write a small amount of data that can be compared to
On Sat, 17 Mar 2012, Andrey Korolyov wrote:
Hi,
I've done some performance tests with the following configuration:
mon0, osd0 and mon1, osd1 - two twelve-core r410 with 32G ram, mon2 -
dom0 with three dedicated cores and 1.5G, mostly idle. First three
disks on each r410 arranged into raid0