It's a pain as I'm now forced to re-evaluate our whole solution to 3-way sync because of this issue...

When a file "test.txt" is updated, the logs show:

[12:22:01] Connecting to host node1 (PLAIN) ...
[12:22:01] Updating /test.txt on node1 ...
[12:22:01] File /test.txt is different on peer (cktxt char #0).
[12:22:01] File has been checked successfully (difference found).
[12:22:01] File is different on peer (rsync sig).
Local> PATCH 1Al8R_3ij5XXfI2e385IPPmvObt4OyXrHUxo0UnMvdZETeXIC2SGrFgQIeSMnSGu /test.txt\n
Peer> OK (send_data).\n
[12:22:01] Csync2 / Librsync: csync_rs_delta('/test.txt')
[12:22:01] Receiving sig_file from peer..
[12:22:01] Delta has been created successfully.
[12:22:02] While syncing file /opt/WeatherTest/test.txt:
[12:22:02] response from peer(/test.txt): node1 [1] <- OK (cmd_finished).
Local> SETOWN 1Al8R_3ij5XXfI2e385IPPmvObt4OyXrHUxo0UnMvdZETeXIC2SGrFgQIeSMnSGu /test.txt 100904787 100000513 \n
Peer> OK (cmd_finished).\n
Local> SETMOD 1Al8R_3ij5XXfI2e385IPPmvObt4OyXrHUxo0UnMvdZETeXIC2SGrFgQIeSMnSGu /test.txt 33268\n
Peer> OK (cmd_finished).\n
Local> SETIME 1Al8R_3ij5XXfI2e385IPPmvObt4OyXrHUxo0UnMvdZETeXIC2SGrFgQIeSMnSGu /test.txt 1499944915\n
Peer> OK (cmd_finished).\n
Local> BYE\n
Peer> OK (cu_later).\n

So I can only deduce that somewhere between "PATCH" and "SETOWN" csync is crashing/failing/timing out. In one case, I had the same file set to "root" on 2 nodes, but correct permissions on the 3rd node.

I'm wondering whether it's sqlite, a locking issue, or network latency?
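Incidentally, a quick way to see which nodes have diverged is to compare owner/group/mode of the file directly on each node. A sketch (check_perms is just a throwaway helper; assumes GNU stat and ssh access to the nodes):

```shell
# Print owner:group and octal mode for a path (GNU stat, as on Linux).
check_perms() {
    stat -c '%U:%G %a' "$1"
}

# Comparing the file across the three nodes from this thread would
# look something like:
#   for h in node1 node2 node3; do
#       printf '%s: ' "$h"; ssh "$h" stat -c '%U:%G %a' /test.txt
#   done
```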

On 17/07/2017 14:01, Aristedes Maniatis wrote:
After years of reliable service, csync2 2.0 did exactly that for me just last 
week. One file suddenly owned by root.

On 17/7/17 10:32PM, Kevin Cackler wrote:
Funnily enough, we are also experiencing this issue with the root owned files. 
Randomly, and without any definable pattern, so far as we can tell, we'll get a 
file that suddenly is owned by root:root with rw permissions and we have to go 
in and correct the permissions. So far we haven't been able to nail down the 
cause of this.

Mark Hodge wrote:

I've recently set up a csync cluster between 3 nodes, and although the
ring model works OK, it obviously fails when the middle server (node2)
is offline. Therefore I've been trying to get a working config that is
something like this:

node1 => node2 + node3
node2 => node1 + node3
node3 => node1 + node2

So at least if node2 is offline, node1+node3 are still syncing.

Is this the best way to achieve this? Using "master (slave)" pairs?
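For what it's worth, the symmetric layout above shouldn't need master/slave pairs: a single csync2 group listing all three hosts gives a full mesh, so each node syncs directly to the other two. A minimal sketch (the group name and key path are placeholders; /opt/WeatherTest is the path from the log above):

```
group weathertest {
    host node1 node2 node3;
    key /etc/csync2.key;
    include /opt/WeatherTest;
    auto younger;
}
```

With no "(slave)" markers, every host both pushes and accepts changes, and "auto younger" resolves conflicts in favour of the newer copy.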

I ended up putting csync in a cron job on every node, running once a
minute, because lsync would crash occasionally when csync returned errors.

I got lots of "Database is busy, sleeping a sec" errors, which I
presumed was because csync was running at the same time on each node
and causing db locks. So I staggered them, at 0, 20 and 40 sec in the
minute which got rid of the "busy" errors.
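Concretely, since cron only has minute resolution, the 0/20/40 second stagger is done with sleep in each node's crontab (a sketch):

```
# node1 (/etc/crontab) - runs at second 0
* * * * * root csync2 -x

# node2 - offset by 20s
* * * * * root sleep 20 && csync2 -x

# node3 - offset by 40s
* * * * * root sleep 40 && csync2 -x
```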

However, occasionally random files will appear on one or more nodes
owned by "root" with perms "rw" for the owner only. This means
standard users cannot access these files. I suspect that csync is
somehow failing to set uid/gid/perms after the copy.

How can this happen?
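Until the cause is found, a periodic check can at least flag (or repair) the damage. A sketch — find_root_owned is a throwaway helper, and appuser:appgroup is a placeholder for whatever owner the files should have:

```shell
# List regular files under a directory that are owned by root with
# owner-only rw permissions (mode 0600) - the symptom described above.
find_root_owned() {
    find "$1" -type f -user root -perm 600
}

# A possible repair pass (placeholder owner, adjust to taste):
#   find_root_owned /opt/WeatherTest | xargs -r chown appuser:appgroup
```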


Csync2 mailing list
