Try backing up schema by schema or table by table, starting with those that
change rarely.
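
A rough sketch of what that could look like (the database, schema, and table
names below are placeholders; adjust host, user, and flags to your setup):

  # dump one rarely-changing schema at a time, compressed to save bandwidth
  pg_dump -h live.example.com -U myuser -n archive_2010 -Fc -f archive_2010.dump mydb

  # or a single table
  pg_dump -h live.example.com -U myuser -t big_schema.old_events -Fc -f old_events.dump mydb

  # restore on the analysis server
  pg_restore -d mydb_local archive_2010.dump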

On Tue, Jan 18, 2011 at 8:49 AM, Vladislav Geller <
[email protected]> wrote:

> Hi,
>
> The problem with this solution is that I don't have enough space on the
> targeted live server to implement such a procedure. It would have to work
> by pulling data directly off the live server. The issue of new data
> arriving while the procedure is running (over a span of days or weeks) is
> not severe, as each day's activity generates new tables.
>
> As someone else mentioned, I am against implementing any kind of
> replication, as the live server will only have a subset of the data after
> I'm done migrating, which makes replication useless. I also don't have the
> bandwidth to pull out the data in one day.
>
> Regards,
>
> Vladislav Geller
>
>
> On Jan 18, 2011, at 8:39 AM, Vladimir Rusinov wrote:
>
>
>
> On Tue, Jan 18, 2011 at 8:07 AM, Vladislav Geller <
> [email protected]> wrote:
>
>> Hello,
>>
>> I'm currently in the process of migrating a huge live database from one
>> part of the world to the other (for local use and data analysis). The
>> bandwidth does not allow me to get a decent transfer speed. Furthermore, I
>> cannot migrate during business hours since the connection is critical. This
>> leaves me with a window of 9 hours a day in which I can migrate this
>> database. Does anyone have experience they are willing to share on how to
>> migrate such databases?
>>
>
> Run pg_start_backup() and use rsync on the live data files to transfer
> them. If the sync won't finish in 9 hours, abort it, run pg_stop_backup(),
> and continue the next day. I assume most of the data won't change, so rsync
> won't re-transfer it (but it will calculate a hash and compare).
>
> --
> Vladimir Rusinov
> http://greenmice.info/
>
>
>
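
A minimal sketch of the pg_start_backup()/rsync approach quoted above,
assuming a stock data directory layout (paths and hostnames are examples
only):

  # on the live server, at the start of the nightly window
  psql -U postgres -c "SELECT pg_start_backup('migration');"

  # copy the data directory; rsync skips files that haven't changed, so an
  # aborted run can simply be resumed the next day
  rsync -av --partial /var/lib/pgsql/data/ analysis-host:/var/lib/pgsql/data/

  # at the end of the window (or once the sync finally completes)
  psql -U postgres -c "SELECT pg_stop_backup();"

  # note: for a consistent copy you also need the WAL generated between
  # pg_start_backup() and pg_stop_backup() (pg_xlog or archived WAL segments)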


-- 
---------------------------------------
Viktor Bojović
---------------------------------------
Wherever I go, Murphy goes with me
