Re: [ADMIN] Migrating a live database

2011-01-18 Thread Viktor Bojović
Try backing it up schema by schema or table by table, starting with those that
change rarely.
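
Something along these lines might work (host, schema, and table names here are
placeholders):

  # dump one rarely-changing schema at a time, in compressed custom format
  pg_dump -h live-host -n archive_schema -Fc -f archive_schema.dump mydb

  # or a single table
  pg_dump -h live-host -t big_table -Fc -f big_table.dump mydb

  # load it on the analysis server
  pg_restore -d mydb_local archive_schema.dump

The custom format (-Fc) is compressed, which should also help with the limited
bandwidth.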

On Tue, Jan 18, 2011 at 8:49 AM, Vladislav Geller 
vladislav.gel...@vincorex.ch wrote:

 Hi,

 The problem with this solution is that I don't have enough space on the
 targeted live server to implement such a procedure. It would have to work by
 pulling data directly off the live server. The issue of new data arriving
 while the procedure is running (over a span of days or weeks) is not severe,
 as each day's activity generates new tables.

 As someone else mentioned, I am against implementing any kind of
 replication, as the live server will only hold a subset of the data once I'm
 done migrating, which makes replication useless. And I also don't have the
 bandwidth to pull the data out in one day.

 Regards,

 Vladislav Geller


 On Jan 18, 2011, at 8:39 AM, Vladimir Rusinov wrote:



 On Tue, Jan 18, 2011 at 8:07 AM, Vladislav Geller 
 vladislav.gel...@vincorex.ch wrote:

 Hello,

 I'm currently in the process of migrating a huge live database from one
 part of the world to the other (for local use and data analysis). The
 bandwidth does not allow me to get a decent transfer speed. Furthermore, I
 cannot migrate during business hours, since the connection is critical. This
 leaves me with a timeframe of 9 hours a day in which I can migrate this
 database. Does anyone have experience they are willing to share on how to
 migrate such databases?


 Run pg_start_backup() and use rsync on the live data files to transfer them.
 If the sync doesn't finish in 9 hours, abort it, run pg_stop_backup(), and
 continue the next day. I assume most of the data won't change, so rsync won't
 re-transfer it (though it will calculate and compare checksums).

 --
 Vladimir Rusinov
 http://greenmice.info/





-- 
---
Viktor Bojović
---
Wherever I go, Murphy goes with me


Re: [ADMIN] Migrating a live database

2011-01-18 Thread Vladimir Rusinov
You don't need a lot of additional space to run pg_start_backup(). Read the
following:
http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html (see
section 24.3.2)
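
A rough sketch of one day's cycle (host names and paths are placeholders;
adjust to your layout):

  psql -d mydb -c "SELECT pg_start_backup('migration');"

  # copy the cluster directory; kill this when the 9-hour window closes
  rsync -az --partial /var/lib/pgsql/data/ analysis-host:/srv/pgdata/

  psql -d mydb -c "SELECT pg_stop_backup();"

  # next day: run the same three steps again; rsync re-sends only
  # files that changed in between

The copy is only consistent once a full rsync pass completes within a single
start/stop window together with the archived WAL, so treat the earlier days as
pre-seeding.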

-- 
Vladimir Rusinov
http://greenmice.info/


Re: [ADMIN] Migrating a live database

2011-01-18 Thread Jasen Betts
On 2011-01-18, Vladislav Geller vladislav.gel...@vincorex.ch wrote:
 Hello,

 I'm currently in the process of migrating a huge live database from
 one part of the world to the other (for local use and data analysis).
 The bandwidth does not allow me to get a decent transfer speed.
 Furthermore, I cannot migrate during business hours, since the
 connection is critical. This leaves me with a timeframe of 9 hours a
 day in which I can migrate this database. Does anyone have experience
 they are willing to share on how to migrate such databases?

How about using replication?

E.g. Slony-I or WAL shipping.

Compression will help too, but I assume you're already doing that.
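
For WAL shipping, the sending side is only a couple of lines in
postgresql.conf; something like this (host and archive path are placeholders):

  archive_mode = on
  archive_command = 'rsync -az %p analysis-host:/wal_archive/%f'

The -z gets you compression on the wire; gzip'ing a base backup before
transfer does the same for the initial copy.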




Re: [ADMIN] Migrating a live database

2011-01-18 Thread Scott Ribe
On Jan 18, 2011, at 12:39 AM, Vladimir Rusinov wrote:

 If the sync doesn't finish in 9 hours, abort it, run pg_stop_backup(), and
 continue the next day. I assume most of the data won't change, so rsync won't
 re-transfer it (though it will calculate and compare checksums).

And there's --partial in case you have to stop it in the middle of a large file.
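
For example (host and paths are placeholders):

  rsync -az --partial --partial-dir=.rsync-partial /var/lib/pgsql/data/ analysis-host:/srv/pgdata/

--partial-dir parks the interrupted piece in a separate directory on the
receiver so the next run can resume it.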

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [ADMIN] Postgres Backup Utility

2011-01-18 Thread French, Martin
I'm assuming that this needs to be tightly controlled and as such a
replication tool is out of the question?

 

In that case, the first thing that pops into my head would be either to use
shell scripting, or to use the pg API and write a C program to handle it.

 

I remember doing something very similar with Oracle a few years back.

 

Cheers

 

Martin

 

 

From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Bradley Holbrook
Sent: 18 January 2011 00:08
To: pgsql-admin@postgresql.org
Subject: [ADMIN] Postgres Backup Utility

 

Hello!

 

First day on the mailing list for me, as I'm in need of some expert
advice.

 

I need to be able to quickly apply the structure updates from a
development database to a testing database, and do selective data
updates (like on lookup tables, but not content tables).

 

Any help would be appreciated!

 

Brad





Re: [ADMIN] Postgres Backup Utility

2011-01-18 Thread Bradley Holbrook
Well, I can't just go dropping and recreating tables; it needs to generate the
correct ALTER statements when existing tables and/or functions already exist.

 

Secondly, when I'm finished changing the structure, I need to be able to
select the list of tables that will have content updates.
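
Ideally the data side would be as simple as a per-table, data-only dump piped
across, something like this (table and database names are just placeholders):

  # assumes the target lookup tables have been truncated first
  pg_dump --data-only -t lookup_country -t lookup_status dev_db | psql test_db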

 

Using a script might be more work to maintain than it's worth. I have a
backup utility that can do the job, but it takes three tedious steps per
schema and only works about 10% of the time (and it has no batching options
that would let me build a list of actions and run the list).

 

 




Re: [ADMIN] Postgres Backup Utility

2011-01-18 Thread French, Martin
OK, you say that you cannot drop and recreate, so you need to do this via
ALTER statements only? That's obviously going to complicate matters, as a
straight dump, drop, recreate, restore would be the fastest and by far the
simplest method.
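
For reference, that route would be just (database names are placeholders):

  pg_dump -Fc -f dev.dump dev_db
  pg_restore --clean -d test_db dev.dump   # --clean drops objects before recreating them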

 

So, ideally, you'll need to do a table definition comparison across the two
databases and generate the necessary SQL to amend the tables in test
accordingly?

 

Querying pg_catalog/information_schema on the two databases should give you
the table DDL, which you can diff and then use to generate the ALTER
statements.
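
As a starting point, a query like this, run against each database and diffed
(the schema name is a placeholder):

  SELECT table_name, column_name, data_type, is_nullable
  FROM information_schema.columns
  WHERE table_schema = 'public'
  ORDER BY table_name, ordinal_position;

Diffing the two result sets tells you which columns to ADD, DROP, or ALTER;
functions would need a similar pass over pg_proc.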

 

Cheers 

 

Martin

 
