Hi,
I have run tests with v0.50 and see the same symptom.
However, I also have the impression that the problem is related
to the frequency at which requests arrive at the OSD.
I have run tests with the rbd kernel client using 126 KByte and
512 KByte size for write requests. In the 2nd case, the
What rbd block size were you using?
-Sam
On Tue, Aug 21, 2012 at 10:29 PM, Andreas Bluemle
andreas.blue...@itxperts.de wrote:
Hi,
Samuel Just wrote:
Was the cluster completely healthy at the time that those traces were taken?
If there were osds going in/out/up/down, it would trigger osdmap
Hi Sage,
Sage Weil wrote:
Hi Andreas,
On Thu, 16 Aug 2012, Andreas Bluemle wrote:
Hi,
I have been trying to migrate a ceph cluster (ceph-0.48argonaut)
to a high speed cluster network and encounter scalability problems:
the overall performance of the ceph cluster does not scale well
with
On Mon, 20 Aug 2012, Andreas Bluemle wrote:
Hi Sage,
Sage Weil wrote:
Hi Andreas,
On Thu, 16 Aug 2012, Andreas Bluemle wrote:
Hi,
I have been trying to migrate a ceph cluster (ceph-0.48argonaut)
to a high speed cluster network and encounter scalability problems:
the
Hi,
please find attached a patch for ceph-0.48argonaut which adds the
timestamp code. The only binary which utilizes these timestamps
is the ceph-osd daemon.
Timestamps are kept in-memory.
Configuration and extraction of timestamps are
achieved through the ceph osd tell interface.
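As an illustration only, here is a minimal sketch of what such an in-memory,
per-op timestamp recorder could look like. This is an assumption about the
general shape of the attached patch, not the patch itself; the names
OpTimestamps, record_stage and dump are invented for the example and do not
refer to actual Ceph code.

  // Illustrative sketch: record a timestamp per op and per stage in memory,
  // and dump everything on request (the real patch exposes the dump through
  // the ceph osd tell interface).
  #include <chrono>
  #include <cstdint>
  #include <iostream>
  #include <map>
  #include <mutex>
  #include <string>
  #include <vector>

  struct StageStamp {
    std::string stage;                           // e.g. "dispatch", "journal", "commit"
    std::chrono::steady_clock::time_point when;  // monotonic timestamp
  };

  class OpTimestamps {
    std::mutex lock;
    std::map<uint64_t, std::vector<StageStamp>> ops;  // op id -> stamps, in memory only
  public:
    void record_stage(uint64_t op_id, const std::string& stage) {
      std::lock_guard<std::mutex> g(lock);
      ops[op_id].push_back({stage, std::chrono::steady_clock::now()});
    }
    void dump(std::ostream& out) {
      std::lock_guard<std::mutex> g(lock);
      for (const auto& [id, stamps] : ops)
        for (const auto& s : stamps)
          out << "op " << id << " " << s.stage << " "
              << s.when.time_since_epoch().count() << "\n";
    }
  };
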
Timestamps are only
Hi Yehuda,
I don't think throttling is the issue here.
The default throttle count is 100 MByte (100 << 20), if I read the source
correctly.
My simple sequential write writes 40 MBytes in total.
Spread over 4 OSDs with 2 replicas this should amount to roughly 20
MBytes per OSD.
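To make that arithmetic explicit, here is a rough sanity check; the even
spread across OSDs is a simplifying assumption, and the 100 << 20 value is
just the default as read from the source:

  // Rough sanity check of the numbers above (illustration only).
  #include <cstdint>
  #include <iostream>

  int main() {
    const uint64_t throttle_bytes = 100ull << 20;  // default throttle: 100 MByte
    const uint64_t total_write    = 40ull  << 20;  // 40 MBytes written by the test
    const unsigned replicas       = 2;
    const unsigned osds           = 4;

    // Each byte is stored on 'replicas' OSDs, so the total traffic hitting
    // the OSDs is 80 MBytes, or roughly 20 MBytes per OSD.
    const uint64_t cluster_bytes = total_write * replicas;
    const uint64_t per_osd_bytes = cluster_bytes / osds;

    std::cout << "per-OSD: " << (per_osd_bytes >> 20) << " MByte, "
              << "throttle: " << (throttle_bytes >> 20) << " MByte\n";
    // 20 MByte per OSD is well below the 100 MByte throttle, so the
    // throttle should not be the bottleneck here.
    return 0;
  }
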
Also I
Can you provide more details about where you pulled this measurement
data from? There are already a number of timestamps saved to track
stuff like this, and in our own (admittedly still incomplete) work on
this subject we haven't seen any delays from the SimpleMessenger.
We've had several guys
On Thu, Aug 16, 2012 at 9:08 AM, Andreas Bluemle
andreas.blue...@itxperts.de wrote:
Hi,
I have been trying to migrate a ceph cluster (ceph-0.48argonaut)
to a high speed cluster network and encounter scalability problems:
the overall performance of the ceph cluster does not scale well
with
Hi Andreas,
On Thu, 16 Aug 2012, Andreas Bluemle wrote:
Hi,
I have been trying to migrate a ceph cluster (ceph-0.48argonaut)
to a high speed cluster network and encounter scalability problems:
the overall performance of the ceph cluster does not scale well
with an increase in the