On 06/23/2011 03:03 AM, wang xuchen wrote:
What confuses me is that 500K/s per device * 10 devices = 5M/s, which is only
5M/1000M = 0.5% of the overall bandwidth.
Have you made sure that you can actually reach that throughput (TCP
benchmarking)?
Is this a back-to-back connection? It should be.
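For the TCP benchmarking part, a quick raw-throughput check between the two
replication interfaces already tells you a lot. A minimal sketch, assuming
iperf is available and using a placeholder address for the peer's replication
IP:

  # on the peer node, listen for the test
  iperf -s
  # on this node, run a 30-second test against the replication link
  iperf -c 192.168.10.2 -t 30

If that cannot come anywhere near filling the 10G link, the bottleneck is
below DRBD.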
On 06/22/2011 04:28 AM, Digimer wrote:
Are all ten DRBD resources on the same set of drives?
Good hint: if there *are* indeed 10 DRBDs, the syncer rate should of
course be 30% * THROUGHPUT / NUM_DRBDs, because each resource will use
the defined rate. I.e. in your case, some 30M.
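If the defined rate is what each single resource may use, a per-resource
syncer section would then look roughly like the sketch below (keeping the
other options from the original config, only with the rate divided across
the ten resources; the exact figure still depends on what the link and disks
can really sustain):

syncer {
    rate 30M;            # roughly (30% of the link) / 10 resources
    verify-alg crc32c;
    al-extents 3800;
}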
To the OP: Does
Felix, thanks for your reply.
I did an experiment yesterday. Here is what I've got.
I created ten DRBD devices using the default syncer rate (primary/primary
configuration). I got an initial sync rate of about 1M/s per device. My
client is a Windows 2008 server with IOMeter running on it. The
Hi All,
Here is a problem I have encountered but haven't figured out the answer to
yet.
In my testing environment, I have ten DRBD devices configured as
primary/primary. I do have cluster management software running, but it
is irrelevant to the topic. My client is running IOMeter with the queue depth
set to
On 06/21/2011 06:42 PM, wang xuchen wrote:
syncer {
rate 300M;
verify-alg crc32c;
al-extents 3800;
}
Drop that to about 75% of your maximum sustainable throughput (at most,
ideally even less).
The problem you are describing is the reason why the default is low. :)
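One rough way to find that maximum sustainable throughput is a plain
sequential write to the backing storage, for example with dd (the path is
only a placeholder for a scratch file on the backing disks, not one of the
DRBD devices in use):

  dd if=/dev/zero of=/mnt/backing/ddtest bs=1M count=4096 oflag=direct conv=fsync

Take the slower of that result and the replication network, and set the rate
to roughly 75% of it.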
--
Digimer,
Thanks for your reply.
I came up with the number 300M from the DRBD official website: "A good rule
of thumb for this value (the rate parameter) is to use about 30% of the
available replication bandwidth." I use a 10G Ethernet card for replication
traffic, and
1024M * 0.3 is 300M. What's wrong with my calculation?