On 02/12/2014 11:21 PM, Gianluca Cecchi wrote:
On Wed, Feb 12, 2014 at 6:18 PM, ml ml wrote:

I guess the brick details are stored in the Postgres database, and everything else afterwards will fail?!

Yes, we fixed the issue with resolving the brick's host while syncing with the gluster CLI in oVirt 3.4. However, when you use multiple addresses, you will need to use the workaround below.


Am I the only one with dedicated migration/storage interfaces? :)

Thanks,
Mario

One of the workarounds I found, which works for me since I'm not using
DNS, is this:

- for the engine host, node1 and node2 have their IPs on the mgmt network
- for node1 and node2, their own IP addresses are on the dedicated gluster network

so for example

10.4.4.x = mgmt network
192.168.3.x = dedicated gluster network

before:

on engine
/etc/hosts
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine

on node01
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine


after:

on engine (the same as before)
/etc/hosts
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine

on node01
#10.4.4.58 node01
#10.4.4.59 node02
192.168.3.1 node01
192.168.3.3 node02
10.4.4.60 engine
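
The before/after edit above can be sketched as a script. The IPs and hostnames are the ones from this thread; the sketch works on a temporary copy, whereas on a real node you would edit /etc/hosts itself (and restart glusterd afterwards so peers re-resolve):

```shell
# Sketch of the node01 /etc/hosts change, using a temp copy for safety.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine
EOF

# Comment out the mgmt-network entries for the gluster peers...
sed -i -e 's/^10\.4\.4\.58 node01/#&/' -e 's/^10\.4\.4\.59 node02/#&/' "$HOSTS"

# ...and add the dedicated gluster-network addresses instead.
# The engine entry stays on the mgmt network.
cat >> "$HOSTS" <<'EOF'
192.168.3.1 node01
192.168.3.3 node02
EOF

cat "$HOSTS"
```

On a real node you can then confirm the peers resolve to the gluster network with `getent hosts node01` before restarting glusterd.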

No operations on the RDBMS are needed.

Thanks, Gianluca!

I will update the wiki page so that this workaround is clear.


HTH,
Gianluca

_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users
