Adrian,

please post the slave-side logs.

I see you use a file:// slave; to have those logs produced, you need a
running glusterd on the slave side too.

The clearest procedure to follow would be:

- stop the geo-rep session that is producing the fault
- delete the master-side logs (to get rid of old data)
- start glusterd on the slave box (if it was not already running)
- start the geo-rep session again and wait for the faulty state to come up
- collect the newly produced logs on both the master and slave side and post them
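As a sketch, the steps above map onto commands roughly like the following. The volume name "mastervol" and the slave URL "file:///data/slave" are placeholders I made up for illustration; substitute the names from your own session (as shown by the status command). The destructive/root-only steps are left commented out so you run them deliberately.

```shell
# Helper: echo the command instead of executing it when the gluster CLI
# is not available on this box (dry-run), so the sketch is safe to paste.
run() { if command -v gluster >/dev/null 2>&1; then "$@"; else echo "$@"; fi; }

# 1. On the master: stop the geo-rep session that is producing the fault.
run gluster volume geo-replication mastervol file:///data/slave stop

# 2. Delete the old master-side geo-rep logs (default 3.2 log layout;
#    uncomment and adjust the path to your installation):
# rm -f /var/log/glusterfs/geo-replication/mastervol/*.log

# 3. On the slave box: start glusterd if it is not already running, e.g.:
# /etc/init.d/glusterd start

# 4. Restart the session, then poll status until it shows "faulty".
run gluster volume geo-replication mastervol file:///data/slave start
run gluster volume geo-replication mastervol file:///data/slave status
```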

How to locate the slave-side logs is described here:

http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Troubleshooting_Geo-replication

Csaba

On Thu, Jun 30, 2011 at 4:43 PM, Adrian Carpenter <[email protected]> wrote:
> Yes I can ssh between all the boxes without password as root.
>
>
> On 30 Jun 2011, at 15:27, Csaba Henk wrote:
>
>> It seems that the connection gets dropped (or cannot even be
>> established). Is the ssh auth set up properly from the second volume?
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
