On Wed, 13 Nov 2013 17:13:39 +0900 Christian Balzer <[email protected]>
wrote:
> On Wed, 13 Nov 2013 07:29:13 +0100 ml ml wrote:
> > I have a simple 2-node setup. The nodes are directly connected. We
> > are now planning to deploy SSD disks.
> > I now have the fear that the gigabit link between the two hosts will
> > become a bottleneck.
> It would be even with "normal" disks, certainly so if you have more
> than one (RAID).
> > Do you think bonding or tcp multipath will increase this? What
> > would you recommend?
> A google search finds the section of the DRBD user guide first, and
> funnily enough a thread with some morsels of wisdom by yours truly as
> well:
> 
> http://lists.linbit.com/pipermail/drbd-user/2010-October/014858.html
>  
> > Some mix between performance and redundancy is the goal.
> > Has someone got real life production experience here?
> Search and you will find many examples in the list archives and
> elsewhere. 
> 
> Up to about 200MB/s replication speed and with a small budget, bonded
> (directly connected) GbE links are fine.
> 
> Once you need speeds over 250MB/s and/or fast I/O (transactions) I
> would recommend directly connected Infiniband.

We are currently getting good results with two bonded direct-connected
10G links; 20 Gbit/s seems almost fast enough for us :-)

But I am planning on breaking that up to connect all three nodes with
three 10G direct connections. Then I will either stay with 8.3 and
distribute the DRBD volumes across the three nodes, or go for DRBD 9 (or
Sheepdog or Ceph) and do auto-balancing / 3-way redundancy...

If 10G (or InfiniBand) isn't in your budget, two bonded 1G connections
are fine, provided they are dedicated to DRBD.
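For anyone setting that up, a minimal sketch of bonding two direct-connected GbE NICs for DRBD traffic with iproute2 might look like this. Interface names and addresses are assumptions; balance-rr is the usual choice here because a single DRBD TCP stream can then use both links:

```shell
# Hypothetical setup: bond eth1 and eth2 (the two direct crossover links)
# into bond0, reserved for DRBD replication only.
modprobe bonding
ip link add bond0 type bond mode balance-rr miimon 100
ip link set eth1 down
ip link set eth1 master bond0
ip link set eth2 down
ip link set eth2 master bond0
ip link set bond0 up
ip addr add 10.0.0.1/24 dev bond0   # the peer would use e.g. 10.0.0.2
```

Note that balance-rr can reorder packets, so you may want to raise tcp_reordering on the replication path; with other bonding modes a single DRBD connection is limited to one link's throughput.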
If you find the write latency isn't good enough, think about whether you
really need protocol C or whether A or B would be sufficient.
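For reference, the protocol is set per resource in the DRBD config; a hypothetical 8.3-style fragment (resource name and the elided sections are placeholders):

```
# /etc/drbd.d/r0.res (illustrative fragment, DRBD 8.3 syntax)
resource r0 {
  # A = async (write completes once data is in the local TCP send buffer)
  # B = semi-sync (completes once the peer has *received* the data)
  # C = fully sync (completes once the peer has written it to disk)
  protocol B;
  # ... device/disk/net/on sections as usual ...
}
```

A and B cut the disk-latency round trip from the write path, at the cost of possibly losing the last in-flight writes if the primary dies at the wrong moment.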

Have fun,

Arnold

