http://xdel.ru/downloads/ceph-logs-dbg/
On Fri, Mar 23, 2012 at 9:53 PM, Samuel Just sam.j...@dreamhost.com wrote:
(CCing the list)
Actually, could you re-do the rados bench run with 'debug journal
= 20' along with the other debugging? That should give us better
information.
-Sam
On Fri, Mar 23, 2012 at 5:25 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi Sam,
Can you please suggest where to start
Our journal writes are actually sequential. Could you send FIO
results for sequential 4k writes to osd.0's journal and osd.1's journal?
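A minimal fio job for the test Sam is asking for might look like the sketch below. The device path is a placeholder, not from the thread; point it at each OSD's actual journal partition, and note that this writes directly to the device and will destroy any data on it.

```ini
; Hypothetical fio job: sequential 4k direct writes, queue depth 1,
; against a journal partition (/dev/sdX1 is a placeholder).
[journal-seq-write]
filename=/dev/sdX1
rw=write
bs=4k
direct=1
iodepth=1
runtime=60
time_based=1
```

Running the same job against osd.0's and osd.1's journal devices would show whether the raw devices differ in sequential write latency.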
-Sam
On Thu, Mar 22, 2012 at 5:21 AM, Andrey Korolyov and...@xdel.ru wrote:
FIO output for journal partition, directio enabled, seems good (same
results for ext4
(CCing the list)
So, the problem isn't the bandwidth. Before we respond to the client,
we write the operation to the journal. In this case, that journal
write is taking 1s per operation on osd.1. Both rbd and rados bench
will only allow a limited number of ops in flight at a time, so this
latency
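The throughput cap Sam is describing follows from Little's law: with a bounded number of ops in flight, throughput = in-flight ops / per-op latency. A quick sketch, assuming rados bench's default of 16 concurrent 4 MB writes (the 16-op and 4 MB figures are defaults, not stated in the thread; the 1s journal latency is the value observed on osd.1):

```python
# Little's law: throughput is capped by in-flight ops / per-op latency,
# no matter how fast the disk's raw bandwidth is.
in_flight = 16           # assumed: rados bench default concurrency
journal_latency_s = 1.0  # observed per-op journal commit time on osd.1
write_size_mb = 4        # assumed: rados bench default object size

ops_per_sec = in_flight / journal_latency_s
bandwidth_mb_s = ops_per_sec * write_size_mb
print(ops_per_sec, bandwidth_mb_s)
```

Under these assumptions the cluster tops out around 64 MB/s, which matches the bench numbers in this thread even though the disks themselves are much faster.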
rados bench 60 write -p data
skip
Total time run:     61.217676
Total writes made:  989
Write size:         4194304
Bandwidth (MB/sec): 64.622
Average Latency:    0.989608
Max latency:        2.21701
Min latency:        0.255315
Here's a snip from the osd log; the write size seems to be
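As a sanity check (not from the thread itself), the bench figures above are internally consistent with a fixed number of concurrent ops; the 16-op figure recovered below matches rados bench's default concurrency, which is an assumption here:

```python
# Cross-check the rados bench summary: bandwidth should equal
# (writes * write size) / total time, and Little's law should
# recover the number of ops kept in flight.
total_time_s = 61.217676
writes = 989
write_size_mb = 4.0        # 4194304 bytes
avg_latency_s = 0.989608

bandwidth = writes * write_size_mb / total_time_s    # ~64.62 MB/s
in_flight = (writes / total_time_s) * avg_latency_s  # ~16 ops
print(round(bandwidth, 3), round(in_flight))
```

So the ~0.99 s average latency, not disk bandwidth, is what pins the result at ~64 MB/s.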
Can you set osd and filestore debugging to 20, restart the osds, run
rados bench as before, and post the logs?
-Sam Just
On Tue, Mar 20, 2012 at 1:37 PM, Andrey Korolyov and...@xdel.ru wrote:
It sounds like maybe you're using Xen? The rbd writeback window option only
works for userspace rbd implementations (eg, KVM).
If you are using KVM, you probably want 81920000 (~80MB) rather than 8192000
(~8MB).
What options are you running dd with? If you run a rados bench from both
Nope, I'm using KVM for the rbd guests. Sure, I noticed that Sage
said the value was too small, and I changed it to 64M before posting
the previous message, with no success - both 8M and that value cause a
performance drop. When I tried to write a small amount of data that can
be compared to
On 03/19/2012 11:13 AM, Andrey Korolyov wrote:
On Sat, 17 Mar 2012, Andrey Korolyov wrote:
Hi,
I've done some performance tests on the following configuration:
mon0, osd0 and mon1, osd1 - two twelve-core r410s with 32G RAM; mon2 -
a dom0 with three dedicated cores and 1.5G, mostly idle. The first three
disks on each r410 arranged into raid0