Bucardo creates load on the database. I would suggest using other solutions to
send/receive data between two nodes, such as Kafka or RabbitMQ. They do not
create load on the DB, and that setup has been running smoothly for me.

Regards,
Om Prakash
Bangalore

  On Tue, Feb 12, 2019 at 4:11, 
[email protected]<[email protected]> wrote: 
  Send Bucardo-general mailing list submissions to
    [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
    https://mail.endcrypt.com/mailman/listinfo/bucardo-general
or, via email, send a message with subject or body 'help' to
    [email protected]

You can reach the person managing the list at
    [email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Bucardo-general digest..."


Today's Topics:

  1. bucardo VAC errors (Paul Theodoropoulos)
  2. new Bucardo installation with a 1TB DB (Lucas Possamai)
  3. Re: new Bucardo installation with a 1TB DB (David Christensen)
  4. Re: new Bucardo installation with a 1TB DB (Lucas Possamai)
  5. Re: new Bucardo installation with a 1TB DB (Lucas Possamai)


----------------------------------------------------------------------

Message: 1
Date: Mon, 4 Feb 2019 15:43:43 -0800
From: Paul Theodoropoulos <[email protected]>
To: "[email protected] List" <[email protected]>
Subject: [Bucardo-general] bucardo VAC errors
Message-ID: <[email protected]>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Was running Bucardo 5.4.1. Recently upgraded servers from Debian Jessie 
to Debian Stretch. This upgraded postgresql from 9.4 to 9.6, and 
upgraded perl from 5.20.1 to 5.24.1. I reinstalled all required perl 
modules (including Bucardo).

Since then, I get errors such as these:

(27541) [Mon Feb  4 15:37:53 2019] VAC Warning! VAC was killed at line 
7868: DBD::Pg::st pg_result failed: ERROR:  relation 
"bucardo.delta_pg_toast_pg_toast_21502_index" does not exist
LINE 1: DELETE FROM bucardo.delta_pg_toast_pg_toast_21502_index USIN...
                    ^
QUERY:  DELETE FROM bucardo.delta_pg_toast_pg_toast_21502_index USING 
(SELECT txntime AS tt FROM bucardo.track_pg_toast_pg_toast_21502_index 
GROUP BY 1 HAVING COUNT(*) = 1) AS foo WHERE txntime = tt AND txntime < 
now() - interval '45 seconds'
CONTEXT:  PL/pgSQL function bucardo.bucardo_purge_delta_oid(text,oid) 
line 49 at EXECUTE
SQL statement "SELECT bucardo.bucardo_purge_delta_oid($1, myrec.tablename)"
PL/pgSQL function bucardo_purge_delta(text) line 12 at SQL statement at 
/usr/local/share/perl/5.24.1/Bucardo.pm line 7868.

Just to be thorough, I upgraded to Bucardo 5.5.0 (followed all upgrade 
steps), but no joy, the errors remain.

Thoughts?
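For anyone hitting similar VAC purge errors, one hedged way to inspect the
bookkeeping involved is sketched below. It assumes a standard Bucardo install
(the `bucardo` schema on the source database) and a placeholder database name
`mydb`; `bucardo_delta_targets` is the table the purge function walks, so rows
there whose tracked OID no longer exists are likely culprits:

```shell
# Placeholder dbname; run against the source database Bucardo replicates from.
# List the delta_*/track_* relations Bucardo currently has in its schema:
psql -d mydb -c "
SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'bucardo'
  AND (c.relname LIKE 'delta_%' OR c.relname LIKE 'track_%')
ORDER BY 1;"

# Rows in bucardo_delta_targets whose tracked table OID no longer exists are
# candidates for 'relation ... does not exist' errors during the purge:
psql -d mydb -c "
SELECT d.tablename
FROM bucardo.bucardo_delta_targets d
WHERE NOT EXISTS (SELECT 1 FROM pg_class c WHERE c.oid = d.tablename);"
```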

By the way, really like the improvements to 'bucardo status', with more 
details of what bucardo is doing 'right now'.

-- 
Paul Theodoropoulos
www.anastrophe.com


------------------------------

Message: 2
Date: Wed, 6 Feb 2019 16:39:18 +1300
From: Lucas Possamai <[email protected]>
To: [email protected]
Subject: [Bucardo-general] new Bucardo installation with a 1TB DB
Message-ID:
    <CAE_gQfXNEV9_YDEznOFtf36GoPALb1PaANQh6=aoegtruuy...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi.

We're moving our EC2 Postgres 1TB DB to RDS and decided to use Bucardo, as
it supports multiple PG versions.
We have a PG 9.2 cluster, and instead of upgrading it, we'll use Bucardo to
replicate to a PG 9.6, then use DMS or pg_dump to restore it in RDS.

Because of the size of the current 9.2 DB, I cannot stop the application to
prevent writes to the DB while I pg_dump it and restore it on the new
bucardo slave.

So, I thought I would do something like this:

  1. Install Bucardo and add the large tables to a pushdelta sync
  2. Copy the tables to the new server (e.g. with pg_dump)
  3. Start up Bucardo and catch things up (i.e. copy all row changes since
  step 2)

Steps for the above would be pretty much like this article
<https://www.endpoint.com/blog/2009/09/16/migrating-postgres-with-bucardo-4>;
if I'm not mistaken.
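Step 2 above might be sketched like this; host names and the dbname are
placeholders, and the schema is assumed to already exist on the 9.6 slave:

```shell
# Dump data only from the 9.2 master; custom format allows parallel restore.
pg_dump --data-only --format=custom --host=old-92-host --username=postgres \
        --file=mydb.dump mydb

# Restore into the 9.6 Bucardo slave in parallel
# (--disable-triggers may be needed if FK triggers fire during the load).
pg_restore --data-only --host=new-96-host --username=postgres \
           --dbname=mydb --jobs=4 mydb.dump
```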

My questions are:

  1. Is this the right approach? Do you guys have any other suggestions?
  2. I'll have the following: pg-9.2 master --> bucardo instance (with the
  bucardo DB only) --> pg-9.6 slave from bucardo
      1. When doing the pg_dump, I only need to restore it on the "pg-9.6
      slave from bucardo" instance, correct? The bucardo DB does not store the
      data?

Thanks!

------------------------------

Message: 3
Date: Wed, 6 Feb 2019 08:09:48 -0600
From: David Christensen <[email protected]>
To: Lucas Possamai <[email protected]>
Cc: [email protected]
Subject: Re: [Bucardo-general] new Bucardo installation with a 1TB DB
Message-ID: <[email protected]>
Content-Type: text/plain; charset="utf-8"

> We're moving our EC2 Postgres 1TB DB to RDS and decided to use Bucardo, as it 
> supports multiple PG versions.
> We have a PG 9.2 cluster, and instead of upgrading it, we'll use Bucardo to 
> replicate to a PG 9.6, then use DMS or pg_dump to restore it in RDS.
> 
> Because of the size of the current 9.2 DB, I cannot stop the application so 
> it doesn't write to the DB while I do pg_dump and restore it in the new 
> bucardo slave.
> 
> So, I thought I would do something like this:
>     • Install Bucardo and add large tables to a pushdelta sync
>     • Copy the tables to the new server (e.g. with pg_dump)
>     • Startup Bucardo and catch things up (e.g. copy all rows changes since 
> step 2)
> Steps for the above would be pretty much like this article; if I'm not 
> mistaken.
> 
> My questions are:
>     • Is it the right approach? do you guys have any other suggestions?

Yeah, as long as you're doing a one-way sync (master -> target) it's sufficient 
to start capturing the deltas (generally by creating a sync with auto-kick 
off), then pg_dump the data, then kick the sync/set autokick until you're 
caught up then cutover the app as needed.
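The sequence described above might look roughly like this with the Bucardo
CLI; the db, relgroup, and sync names are made up, and exact flags may vary
by version:

```shell
# Register the two endpoints (hosts and dbname are placeholders).
bucardo add db src dbname=mydb host=old-92-host user=bucardo
bucardo add db dst dbname=mydb host=new-96-host user=bucardo

# Group the tables and create the one-way sync with autokick off,
# so deltas accumulate on the source but are not replayed yet.
bucardo add all tables db=src relgroup=migset
bucardo add sync migsync relgroup=migset dbs=src:source,dst:target autokick=0
bucardo start

# ... pg_dump the data from src and restore it on dst here ...

# Replay everything captured during the dump, then keep auto-kicking
# until you cut the application over.
bucardo kick migsync
bucardo update sync migsync autokick=1
```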

>     • I'll have the following: pg-9.2 master --> bucardo instance (with the 
> bucardo DB only) --> pg-9.6 slave from bucardo
>         • When doing the pg_dump, I only need to restore it on the "pg-9.6 
> slave from bucardo" instance, correct? the bucardo DB does not store the data?

Right, the "bucardo" database holds meta-information about the syncs, sync 
history, etc., but no user data.

> Thanks!

Best,

David
--
David Christensen
End Point Corporation
[email protected]
785-727-1171




------------------------------

Message: 4
Date: Thu, 7 Feb 2019 09:18:18 +1300
From: Lucas Possamai <[email protected]>
Cc: [email protected]
Subject: Re: [Bucardo-general] new Bucardo installation with a 1TB DB
Message-ID:
    <CAE_gQfX25YY38ZEw-3a210-=Viq5BwY+1SVQGdB=dqrgyor...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Thu, Feb 7, 2019 at 3:09 AM David Christensen <[email protected]> wrote:

> > We're moving our EC2 Postgres 1TB DB to RDS and decided to use Bucardo,
> as it supports multiple PG versions.
> > We have a PG 9.2 cluster, and instead of upgrading it, we'll use Bucardo
> to replicate to a PG 9.6, then use DMS or pg_dump to restore it in RDS.
> >
> > Because of the size of the current 9.2 DB, I cannot stop the application
> so it doesn't write to the DB while I do pg_dump and restore it in the new
> bucardo slave.
> >
> > So, I thought I would do something like this:
> >      • Install Bucardo and add large tables to a pushdelta sync
> >      • Copy the tables to the new server (e.g. with pg_dump)
> >      • Startup Bucardo and catch things up (e.g. copy all rows changes
> since step 2)
> > Steps for the above would be pretty much like this article; if I'm not
> mistaken.
> >
> > My questions are:
> >      • Is it the right approach? do you guys have any other suggestions?
>
> Yeah, as long as you're doing a one-way sync (master -> target) it's
> sufficient to start capturing the deltas (generally by creating a sync with
> auto-kick off), then pg_dump the data, then kick the sync/set autokick
> until you're caught up then cutover the app as needed.
>
>
First of all, thanks for your reply!
That would be pushdelta <https://bucardo.org/pushdelta/>, correct?


> >      • I'll have the following: pg-9.2 master --> bucardo instance
> (with the bucardo DB only) --> pg-9.6 slave from bucardo
> >              • When doing the pg_dump, I only need to restore it on the
> "pg-9.6 slave from bucardo" instance, correct? the bucardo DB does not
> store the data?
>
> Right, the "bucardo" database holds meta-information about the syncs, sync
> history, etc., but no user data.
>
>
Perfect!


> > Thanks!
>
>
>
Thanks.

------------------------------

Message: 5
Date: Tue, 12 Feb 2019 11:40:40 +1300
From: Lucas Possamai <[email protected]>
Cc: [email protected]
Subject: Re: [Bucardo-general] new Bucardo installation with a 1TB DB
Message-ID:
    <CAE_gQfWAg2Ok=x4dmey3dvwjhs_awdxhs3cjfvce80ozl6g...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Thu, 7 Feb 2019 at 09:18, Lucas Possamai <[email protected]> wrote:

> On Thu, Feb 7, 2019 at 3:09 AM David Christensen <[email protected]>
> wrote:
>
>> > We're moving our EC2 Postgres 1TB DB to RDS and decided to use Bucardo,
>> as it supports multiple PG versions.
>> > We have a PG 9.2 cluster, and instead of upgrading it, we'll use
>> Bucardo to replicate to a PG 9.6, then use DMS or pg_dump to restore it in
>> RDS.
>> >
>> > Because of the size of the current 9.2 DB, I cannot stop the
>> application so it doesn't write to the DB while I do pg_dump and restore it
>> in the new bucardo slave.
>> >
>> > So, I thought I would do something like this:
>> >      • Install Bucardo and add large tables to a pushdelta sync
>> >      • Copy the tables to the new server (e.g. with pg_dump)
>> >      • Startup Bucardo and catch things up (e.g. copy all rows changes
>> since step 2)
>> > Steps for the above would be pretty much like this article; if I'm not
>> mistaken.
>> >
>> > My questions are:
>> >      • Is it the right approach? do you guys have any other
>> suggestions?
>>
>> Yeah, as long as you're doing a one-way sync (master -> target) it's
>> sufficient to start capturing the deltas (generally by creating a sync with
>> auto-kick off), then pg_dump the data, then kick the sync/set autokick
>> until you're caught up then cutover the app as needed.
>>
>>
> First of all, thanks for your reply!
> That would be pushdelta <https://bucardo.org/pushdelta/>, correct?
>


Hi.

When using "type=pushdelta" on Bucardo 5.4.1, I get the error: Unknown
option 'type'.
What is the equivalent of "type=pushdelta" in Bucardo 5? I couldn't find it
in the documentation.
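If I understand the Bucardo 5 changes correctly, the old sync types
(pushdelta/swap/fullcopy) were folded into a single sync command, and the
direction now comes from the roles given in the dbs list rather than a
"type" option. A pushdelta-style one-way sync would then be something like
this (db, relgroup, and sync names are placeholders):

```shell
# One source, one target = one-way replication (the old "pushdelta").
bucardo add sync migsync relgroup=migset dbs=src:source,dst:target
```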


Cheers.

------------------------------

_______________________________________________
Bucardo-general mailing list
[email protected]
https://mail.endcrypt.com/mailman/listinfo/bucardo-general


End of Bucardo-general Digest, Vol 133, Issue 1
***********************************************
  