Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-31 Thread Roberto Resoli
On 31/08/2016 15:06, Lars Ellenberg wrote:
> Instead of bridging,
> explicit routes could be another option.
> ip route add .../.. dev ...
> 
> Lars

Already tried, and it didn't work for me; I suspect that if there are two
interfaces with the same IP, the DRBD processes will listen on only one of them.

Maybe I'm wrong.

rob


___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-31 Thread Lars Ellenberg
On Mon, Aug 22, 2016 at 10:43:18AM +0200, Roberto Resoli wrote:
> On 18/08/2016 14:03, Veit Wahlich wrote:
> > On Thursday, 18.08.2016, 12:33 +0200, Roberto Resoli wrote:
> >> On 18/08/2016 10:09, Adam Goryachev wrote:
> >>> I can't comment on the DRBD related portions, but can't you add both
> >>> interfaces on each machine to a single bridge, and then configure the IP
> >>> address on the bridge. Hence each machine will only have one IP address,
> >>> and the other machines will use their dedicated network to connect to
> >>> it. I would assume the overhead of the bridge inside the kernel would be
> >>> minimal, but possibly not, so it might be a good idea to test it out.
> >>
> >> Very clever suggestion!
> >>
> >> Many thanks, will try and report.
> > 
> > If you try this, take care to enable STP on the bridges, or this will
> > create loops.
> 
> Yes, this worked immediately, as expected.
> 
> > Also STP will give you redundancy in case a link breaks and will try to
> > determine the shortest path between nodes.
> 
> I confirm. With three nodes and three links STP of course blocks one
> of the three links, with the root bridge forwarding traffic between
> the other two.
> 
> It is possible to control which bridge becomes root using the
> "bridgeprio" parameter of brctl.
> 
> > But the shortest link is not guaranteed. Especially after recovery from
> > a network link failure.
> > You might want to monitor each node for the shortest path.
> 
> Using STP of course has the side effect of leaving one of the three
> links unused (it is the price to pay for failover).
> 
> I tried disabling STP, blocking at the same time (with a simple
> ebtables rule) all forwarding through the bridge in order to avoid
> loops/broadcast storms. In the resulting topology every link carries
> only the traffic of the two nodes it connects (at the expense of having
> no failover).
> 
> It is very handy to monitor that all is working correctly using:
> 
> watch brctl showstp 
> 
> and
> 
> watch brctl showmacs 
> 
> I post here the configuration I ended up using, for reference
> (I put it in a "drbd-interfaces" file, referenced from
> "/etc/network/interfaces" via the "source" directive):
> 
> ===
> auto drbdbr
> iface drbdbr inet static
>     address  
>     netmask 255.255.255.0
>     bridge_ports eth2 eth3
>     bridge_stp off
>     bridge_ageing 30
>     bridge_fd 5
>     # Only with stp on: node1 and node2 are preferred
>     #bridge_bridgeprio 1000
>     # Only with stp off
>     pre-up ifconfig eth2 mtu 9000 && ifconfig eth3 mtu 9000
>     up   ebtables -I FORWARD --logical-in drbdbr -j DROP
>     down ebtables -D FORWARD --logical-in drbdbr -j DROP
> ===

Instead of bridging,
explicit routes could be another option.
ip route add .../.. dev ...

Lars
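One way to read Lars's suggestion, as a minimal sketch (the interface names and addresses below are assumed for illustration; his message leaves them elided): keep a single DRBD address per node and add one host route per peer over its dedicated back-to-back link.

```shell
# On node1 (10.0.0.1), assuming node2 (10.0.0.2) sits behind eth2 and
# node3 (10.0.0.3) behind eth3; requires root:
ip route add 10.0.0.2/32 dev eth2
ip route add 10.0.0.3/32 dev eth3
```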



Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-22 Thread Roberto Resoli
On 18/08/2016 14:03, Veit Wahlich wrote:
> On Thursday, 18.08.2016, 12:33 +0200, Roberto Resoli wrote:
>> On 18/08/2016 10:09, Adam Goryachev wrote:
>>> I can't comment on the DRBD related portions, but can't you add both
>>> interfaces on each machine to a single bridge, and then configure the IP
>>> address on the bridge. Hence each machine will only have one IP address,
>>> and the other machines will use their dedicated network to connect to
>>> it. I would assume the overhead of the bridge inside the kernel would be
>>> minimal, but possibly not, so it might be a good idea to test it out.
>>
>> Very clever suggestion!
>>
>> Many thanks, will try and report.
> 
> If you try this, take care to enable STP on the bridges, or this will
> create loops.

Yes, this worked immediately, as expected.

> Also STP will give you redundancy in case a link breaks and will try to
> determine the shortest path between nodes.

I confirm. With three nodes and three links STP of course blocks one of
the three links, with the root bridge forwarding traffic between the other two.

It is possible to control which bridge becomes root using the
"bridgeprio" parameter of brctl.

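For instance (a sketch: the bridge name matches the configuration later in this message, the priority value is assumed; lower values win the root election):

```shell
# Lower bridge priority wins the STP root election (the default is 32768);
# run on the node that should become root, e.g. node1:
brctl setbridgeprio drbdbr 1000
```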
> But the shortest link is not guaranteed. Especially after recovery from
> a network link failure.
> You might want to monitor each node for the shortest path.

Using STP of course has the side effect of leaving one of the three
links unused (it is the price to pay for failover).

I tried disabling STP, blocking at the same time (with a simple
ebtables rule) all forwarding through the bridge in order to avoid
loops/broadcast storms. In the resulting topology every link carries
only the traffic of the two nodes it connects (at the expense of having
no failover).

It is very handy to monitor that all is working correctly using:

watch brctl showstp 

and

watch brctl showmacs 

I post here the configuration I ended up using, for reference
(I put it in a "drbd-interfaces" file, referenced from
"/etc/network/interfaces" via the "source" directive):

===
auto drbdbr
iface drbdbr inet static
    address  
    netmask 255.255.255.0
    bridge_ports eth2 eth3
    bridge_stp off
    bridge_ageing 30
    bridge_fd 5
    # Only with stp on: node1 and node2 are preferred
    #bridge_bridgeprio 1000
    # Only with stp off
    pre-up ifconfig eth2 mtu 9000 && ifconfig eth3 mtu 9000
    up   ebtables -I FORWARD --logical-in drbdbr -j DROP
    down ebtables -D FORWARD --logical-in drbdbr -j DROP
===

bye
rob


Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-19 Thread Veit Wahlich
Hi Dan,

On Thursday, 18.08.2016, 10:33 -0600, dan wrote:
> Simplest solution here is to overbuild. If you are going to do a
> 3-node 'full-mesh' then you should consider 10G Ethernet (a Mellanox
> card with cables on eBay is about US$20!). Then you just enable STP
> on all the bridges and let it be. If you are taking 2 hops, that
> should still be well over the transfer rates you need for such a small
> cluster, and STP will eventually work itself out.

I agree that it would work fine even when not in an optimal state.
But my least-hop consideration was more about latency and unexpected CPU
overhead from bridging, and about not simply trusting the system to
return to the optimal state automatically.

At least for my own applications, I need to know, not to trust. Thus I
suggest monitoring the state.

Best regards,
// Veit



Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-19 Thread Roberto Resoli
On 18/08/2016 11:15, Roland Kammerer wrote:
> On Thu, Aug 18, 2016 at 09:47:51AM +0200, Roberto Resoli wrote:
>> In particular I see that it is currently not possible to dedicate an IP
>> for every distinct link between a managed resource and its peer node.
>>
>> Am I wrong?
> 
> No.
> 
>> Any advice/suggestion?
> 
> Don't do it ;-).

:-)

> We had that discussion on the ML. If you manually overwrite res files
> generated by DRBD Manage, they can get rewritten by DRBD Manage at "any
> time".
> Currently, it is simply not supported from a DRBD Manage point of view.
> To be honest, it is not on my TODO list.

I fully agree; I have tried the really elegant workaround proposed by Adam:

http://lists.linbit.com/pipermail/drbd-user/2016-August/023174.html

and it seems to work very well; I also tried disabling STP, blocking
forwarded traffic through the bridges, so that the final topology is
exactly the three dedicated point-to-point connections I wanted.

I'm going to provide more details in upcoming posts.

Many thanks for all the precious suggestions received on this list; I'm
really happy to be here!

rob


Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-18 Thread dan
On Thu, Aug 18, 2016 at 6:03 AM, Veit Wahlich  wrote:
> But the shortest link is not guaranteed. Especially after recovery from
> a network link failure.
> You might want to monitor each node for the shortest path.

Simplest solution here is to overbuild. If you are going to do a
3-node 'full-mesh' then you should consider 10G Ethernet (a Mellanox
card with cables on eBay is about US$20!). Then you just enable STP
on all the bridges and let it be. If you are taking 2 hops, that
should still be well over the transfer rates you need for such a small
cluster, and STP will eventually work itself out.


Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-18 Thread Veit Wahlich
On Thursday, 18.08.2016, 12:33 +0200, Roberto Resoli wrote:
> On 18/08/2016 10:09, Adam Goryachev wrote:
> > I can't comment on the DRBD related portions, but can't you add both
> > interfaces on each machine to a single bridge, and then configure the IP
> > address on the bridge. Hence each machine will only have one IP address,
> > and the other machines will use their dedicated network to connect to
> > it. I would assume the overhead of the bridge inside the kernel would be
> > minimal, but possibly not, so it might be a good idea to test it out.
> 
> Very clever suggestion!
> 
> Many thanks, will try and report.

If you try this, take care to enable STP on the bridges, or this will
create loops.
Also STP will give you redundancy in case a link breaks and will try to
determine the shortest path between nodes.

But the shortest link is not guaranteed. Especially after recovery from
a network link failure.
You might want to monitor each node for the shortest path.



Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-18 Thread Roberto Resoli
On 18/08/2016 10:09, Adam Goryachev wrote:
> I can't comment on the DRBD related portions, but can't you add both
> interfaces on each machine to a single bridge, and then configure the IP
> address on the bridge. Hence each machine will only have one IP address,
> and the other machines will use their dedicated network to connect to
> it. I would assume the overhead of the bridge inside the kernel would be
> minimal, but possibly not, so it might be a good idea to test it out.

Very clever suggestion!

Many thanks, will try and report.

rob




Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-18 Thread Roland Kammerer
On Thu, Aug 18, 2016 at 09:47:51AM +0200, Roberto Resoli wrote:
> In particular I see that it is currently not possible to dedicate an IP
> for every distinct link between a managed resource and its peer node.
> 
> Am I wrong?

No.

> Any advice/suggestion?

Don't do it ;-).

We had that discussion on the ML. If you manually overwrite res files
generated by DRBD Manage, they can get rewritten by DRBD Manage at "any
time".

Currently, it is simply not supported from a DRBD Manage point of view.
To be honest, it is not on my TODO list.

Regards, rck


Re: [DRBD-user] DRBD9: full-mesh and managed resources

2016-08-18 Thread Adam Goryachev

On 18/08/2016 17:47, Roberto Resoli wrote:


> Hello,
>
> I'm currently running a three-node cluster with drbd 9.0.3 and
> drbdmanage 0.97.
>
> In my setup I can dedicate two physical interfaces per node to the
> storage network, and possibly create a "full mesh" network as described
> in Chapter 5.1.4 of the DRBD 9 manual
> (http://drbd.linbit.com/doc/users-guide-90/ch-admin-manual#s-drbdconf-conns).
>
> The goal is to use only dedicated links (no network switch) for the
> storage network connections.
>
> I understand that this network topology is currently not supported by
> drbdmanage, and I'm asking if it would be possible to configure the
> three storage nodes as usual (one IP address per node) and change the
> configuration of the network connections afterwards.
>
> In particular I see that it is currently not possible to dedicate an IP
> for every distinct link between a managed resource and its peer node.
>
> Am I wrong? Any advice/suggestion?

I can't comment on the DRBD related portions, but can't you add both 
interfaces on each machine to a single bridge, and then configure the IP 
address on the bridge. Hence each machine will only have one IP address, 
and the other machines will use their dedicated network to connect to 
it. I would assume the overhead of the bridge inside the kernel would be 
minimal, but possibly not, so it might be a good idea to test it out.
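A minimal sketch of this idea (bridge name, interface names and the address are all assumed here, not taken from Adam's message):

```shell
# Create one bridge, enslave both mesh-facing NICs, and put the node's
# single storage IP on the bridge itself; requires root.
brctl addbr drbdbr
brctl addif drbdbr eth2
brctl addif drbdbr eth3
brctl stp drbdbr on               # STP breaks the loop a full mesh creates
ip addr add 10.0.0.1/24 dev drbdbr
ip link set drbdbr up
```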


Regards,
Adam


[DRBD-user] DRBD9: full-mesh and managed resources

2016-08-18 Thread Roberto Resoli
Hello,

I'm currently running a three-node cluster with drbd 9.0.3 and
drbdmanage 0.97.

In my setup I can dedicate two physical interfaces per node to the
storage network, and possibly create a "full mesh" network as described
in Chapter 5.1.4 of the DRBD 9 manual
(http://drbd.linbit.com/doc/users-guide-90/ch-admin-manual#s-drbdconf-conns).

The goal is to use only dedicated links (no network switch) for the
storage network connections.
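For reference, the per-connection mesh described in that chapter looks roughly like the sketch below (hostnames, addresses and port are assumed here; the linked guide section is the authoritative syntax):

```
resource r0 {
  # ... volume/disk/meta-disk sections omitted ...

  # One connection section per node pair, each over its dedicated link:
  connection {
    host alpha   address 10.1.1.31:7000;
    host bravo   address 10.1.1.32:7000;
  }
  connection {
    host alpha   address 10.1.2.31:7000;
    host charlie address 10.1.2.33:7000;
  }
  connection {
    host bravo   address 10.1.3.32:7000;
    host charlie address 10.1.3.33:7000;
  }
}
```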

I understand that this network topology is currently not supported by
drbdmanage, and I'm asking if it would be possible to configure the
three storage nodes as usual (one IP address per node) and change the
configuration of the network connections afterwards.

In particular I see that it is currently not possible to dedicate an IP
for every distinct link between a managed resource and its peer node.

Am I wrong? Any advice/suggestion?

thanks,
rob