On 01.04.2012 20:20, Lars Ellenberg wrote:
> On Sun, Apr 01, 2012 at 02:16:42PM +0200, Arnold Krille wrote:
>> On Saturday 31 March 2012 15:08:59 Digimer wrote:
>>> DRBD isn't much slower than the native disk performance, provided your
>>> network is fast enough.
>> I wouldn't sign off on that. While a 1 Gbit network compares to current
>> SATA disks in throughput, throughput isn't everything. There is also
>> latency, where the network layer in DRBD introduces a factor of ten
>> compared to purely local disks when using protocol C.
> How is that?
> SATA disk latency for random writes: >= 10 ms
> Round trip time on GigE direct link: <  0.15 ms
>
> My experience says otherwise...
>
> So wherever you see your factor 10,
> it is unlikely to be the "network layer in DRBD".

It's not the "network layer in drbd"; it's the send buffer, the switch, the receive buffer, and the remote disk latency, then the send buffer, the switch, and the receive buffer again on the acknowledgement path. That is what DRBD with protocol C waits for on every write.

That resulting factor of ten is what my co-admin used to point out to me as the culprit for the low performance, just before we switched to protocol A, where it really is only the local disk's latency.
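The write paths being compared can be put into a back-of-envelope latency model. This is a sketch, not DRBD code: the disk and RTT figures come from this thread, while the buffering cost is an illustrative assumption standing in for the send/receive buffers and switch hops described above.

```python
# Rough per-write latency model for DRBD protocol C vs. protocol A.
# Numbers: disk and RTT figures quoted in this thread; BUFFERING_MS is
# an assumed placeholder for send/receive buffers and switch hops.

LOCAL_DISK_MS = 10.0    # SATA random-write latency (>= 10 ms)
REMOTE_DISK_MS = 10.0   # same class of disk on the peer
NET_RTT_MS = 0.15       # GigE direct-link round trip
BUFFERING_MS = 0.5      # assumption: buffer/switch overhead per round trip

def protocol_c_latency():
    """Protocol C: the write completes only once the peer has written it
    too, so the application waits for whichever finishes last: the local
    disk, or the network round trip plus the remote disk."""
    return max(LOCAL_DISK_MS, NET_RTT_MS + BUFFERING_MS + REMOTE_DISK_MS)

def protocol_a_latency():
    """Protocol A: replication is asynchronous; the write completes as
    soon as the local disk has it, so only local latency is visible."""
    return LOCAL_DISK_MS

print(f"protocol C: ~{protocol_c_latency():.2f} ms per write")
print(f"protocol A: ~{protocol_a_latency():.2f} ms per write")
```

With these nominal numbers the two protocols come out close, which is Lars's point; Arnold's observed factor of ten would require the buffering/remote-disk term to balloon far beyond the raw RTT, e.g. under queuing on a loaded switch or a busy peer.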

>> And when there are many users/apps accessing this resource, it's the
>> latency that makes them complain.
> That is correct.

Have fun,

Arnold
--
This email was created electronically and is valid without a handwritten signature.
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user
