Hi Greg,
I might take a look at this test case, but no promises with respect to
timing. Keep in mind, though, that Bucardo (at least Bucardo 4) would not
behave well at all if you attempted to replicate unique keys containing
one or more NULLs.
Question: is Bucardo 5 still intended to replicate to databases other
than Postgres, or did that turn out to be a bit too ambitious? If the
former, a lot more testing is naturally required. Some databases might
already support NULLs in the primary-key vector in the WHERE clause ...
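For anyone unfamiliar with the underlying problem: a plain equality
comparison never matches a NULL, so a replication WHERE clause built with
`=` silently misses rows whose key contains a NULL. A minimal illustration,
using Python's built-in sqlite3 as a stand-in (SQLite's NULL-safe operator
is `IS`; Postgres spells it `IS NOT DISTINCT FROM`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INTEGER, b INTEGER, UNIQUE (a, b))")
cur.execute("INSERT INTO t VALUES (1, NULL)")

# Plain equality: NULL = NULL evaluates to NULL, so the row is invisible.
cur.execute("SELECT COUNT(*) FROM t WHERE a = ? AND b = ?", (1, None))
print(cur.fetchone()[0])  # 0 -- the row is never found

# NULL-safe comparison finds it (SQLite IS; Postgres IS NOT DISTINCT FROM).
cur.execute("SELECT COUNT(*) FROM t WHERE a IS ? AND b IS ?", (1, None))
print(cur.fetchone()[0])  # 1
```

Any fix on the Bucardo side presumably has to generate the NULL-safe form
for key columns that may contain NULLs.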
Best,
-ar
On 12/23/2012 11:29 PM, Greg Sabino Mullane wrote:
> On Fri, Dec 14, 2012 at 01:37:48PM -0800, Rosser Schwarz wrote:
>> I can't speak to your proposed algorithmic changes, but I do think the
>> chunk size should be configurable, to help deal with low-memory
>> environments like the OP's, and to allow DBAs to set larger chunk sizes
>> when they have also set max_stack_depth and the relevant ulimits
>> accordingly.
>
> I am pretty sure that has already been implemented. The patch references
> a hard-coded 100,000, which has already been removed in the 4.x branch
> and replaced with the configuration items max_delete_clause and
> max_select_clause. I'm pretty sure the former will handle the OP's case.
>
> That NULL problem and solution look patch-worthy, though. I wonder if it
> applies to Bucardo 5 as well? Anyone want to write a simple test case
> and/or expand t/20-postgres.t?
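For context on the chunking that a setting like max_delete_clause controls:
the idea is just to split a large set of keys into bounded batches so each
generated statement stays a manageable size. A rough sketch of the technique
(illustrative only, not Bucardo's actual code):

```python
def chunked(keys, chunk_size):
    """Yield successive bounded slices of a key list, so each generated
    DELETE/SELECT statement stays below a configurable size limit."""
    for i in range(0, len(keys), chunk_size):
        yield keys[i:i + chunk_size]

# Each chunk becomes one statement, e.g. DELETE ... WHERE pk IN (...):
keys = list(range(250))
statements = [
    "DELETE FROM t WHERE pk IN (%s)" % ",".join(map(str, chunk))
    for chunk in chunked(keys, 100)
]
print(len(statements))  # 3 statements: 100 + 100 + 50 keys
```

Lowering the chunk size trades more round trips for a smaller per-statement
memory footprint, which is what helps in low-memory environments like the
OP's.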
_______________________________________________
Bucardo-general mailing list
[email protected]
https://mail.endcrypt.com/mailman/listinfo/bucardo-general