> -Original Message-
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:17 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
>
> *how* are the backups being generated?
>
> On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq wrote:
> Does it work well with version 9.1.3?
It might work better in 9.1.6:
http://www.postgresql.org/support/versioning/
And it would probably pay to keep up-to-date as new minor releases
become available.
-Kevin
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:15 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
>
> I don't know, I've never tried. If I had to guess, I'd say no, as that
> versi
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:17 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
*how* are the backups being generated?
On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq wrote:
> Currently backups are performed on the master datab
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:15 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
I don't
that the communication is free of errors, virus, interception
> or interference.
>
>
> -Original Message-
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:13 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:13 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
There aren't any, assuming that all of the servers are using the same
postgresql.conf. I'm referring
> Kind Regards,
> Sabry
>
>
>
>
> Sabry Sadiq
> Systems Administrator
> -Original Message-
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:11 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
Yes. Works fine in 9.2.x.
On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq wrote:
> Hi All,
>
>
>
> Has anyone been successful in offloading the database backup from the
> production database to the standby database?
>
>
>
> Kind Regards,
>
> Sabry
>
>
>
free of errors, virus, interception or interference.
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:11 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
Yes. Works fine in 9.2.x.
On Thu, Nov 29, 2012 at 4
Hi All,
Has anyone been successful in offloading the database backup from the
production database to the standby database?
Kind Regards,
Sabry
Sabry Sadiq
Systems Administrator
Whispir
Level 30 360 Collins Street
Melbourne / Victoria 3000 / Australia
GPO Box 130 / Victoria 3001 / Australia
Hi Everybody,
I am experimenting with backups and restores
I am running into something curious and would appreciate any suggestions.
Backing up from:
Postgres 8.3.0
Windows 2003 sp1 server (32bit)
-Took a compressed binary backup of a single db (the default option in
pgAdminIII, rig
On 09/21/2012 01:01 AM, Kasia Tuszynska wrote:
Hi Everybody,
I am experimenting with backups and restores….
I am running into something curious and would appreciate any suggestions.
Backing up from:
Postgres 8.3.0
Windows 2003 sp1 server (32bit)
-Took a compressed binary backup of a single
Hi,
I would recommend this:
http://www.postgresql.org/docs/9.1/static/backup.html
Very straightforward and easy reading ...
-fred
On Mon, Jun 18, 2012 at 10:50 AM, lohita nama wrote:
> Hi
>
> I am working as sql dba recently our team had oppurtunity to work on
> postgres databases and i ha
lohita nama wrote:
> I am working as sql dba recently our team had oppurtunity to work
> on postgres databases and i had experience on sql server and on
> windows platform and now our company had postgres databases on
> solaris platform
>
> can u please suggest how to take the back up of postgr
Hi
I am working as a SQL DBA. Our team recently had the opportunity to work on
Postgres databases. I have experience with SQL Server on the Windows
platform, and now our company has Postgres databases on the Solaris
platform.
Can you please suggest, step by step, how to take a backup of Postgres
databases?
I mean Bucardo (even though there are more tools like it) just for the
replication, and the hot database backup only for the backup. Only one
bounce is needed to turn archiving on; you do not need to shut anything
down during the backup.
A.A
On 04/25/2012 10:23
On 04/25/2012 09:11 AM, Scott Whitney wrote:
...
My current setup uses a single PG 8.x...
My _new_ setup will instead be 2 PG 9.x ...
It is best to specify actual major version. While 8.0.x or 9.1.x is
sufficient to discuss features and capabilities, 9.1 is a different
major release than 9.0, n
On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:
> I believe, then, that when I restart server #3 (the standby who is
> replicating), he'll say "oh, geez, I was down, let me catch up on all that
> crap that happened while I was out of the loop," he'll replay the WAL files
> that were written
Both good points, thanks, although I suspect that a direct network copy of the
pg_data directory will be faster than a tar/untar event.
- Original Message -
> Hi Scott,
> Why you do not replicate this master to the other location/s using
> other
> methods like bucardo?, you can pick the
Hi Scott,
Why not replicate this master to the other location(s) using other
methods like Bucardo? You can pick the tables you really want
replicated there.
For the backup, switch to a hot backup (tar $PGDATA) plus archiving;
it's easier, faster, and more efficient than a logical copy with p
Hello, everyone. I want to throw a scenario out there to see what y'all think.
Soon, my cluster backups will be increasing in size inordinately. They're going
to immediately go to 3x as large as they currently are with the potential to be
about 20x within a year or so.
My current setup uses a
On Tue, 2011-12-27 at 13:01 +0530, nagaraj L M wrote:
> Hi sir
> Can u tell how to take back up individual schema in
> PostgresQL
>
Use the -n command line option
(http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html).
--
Guillaume
http://blog.guillaume.lelarge.i
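A minimal sketch of the -n option mentioned above (the database name mydb, schema name myschema, and file names are placeholders, not from the thread):

```shell
# Dump only the objects in one schema, in custom format
pg_dump -n myschema -F c -f myschema.dump mydb

# Restore that schema into another database later
pg_restore -d otherdb myschema.dump
```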
Hi sir,
Can you tell me how to back up an individual schema in PostgreSQL?
Karuna Karpe wrote:
> I want get cold backup of database cluster, but in database
> cluster there are four non-built-in tablespaces. So, when get the
> cold backup of database cluster and restore on another machine and
> I check tablespaces for that there is no any non-built-in
> tablespace is a
Hi,
I want to take a cold backup of a database cluster, but the cluster
contains four non-built-in tablespaces. When I take the cold backup,
restore it on another machine, and check the tablespaces, none of the
non-built-in tablespaces is available.
So, please
OK, thank you.
On 2011-09-11 at 1:30:48, Guillaume Lelarge wrote:
On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:
I'm making a base backup with 9.1rc by following 24.3.3 in manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label');
2. perform fi
On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:
> I'm making a base backup with 9.1rc by following 24.3.3 in manual:
> http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
> 1. SELECT pg_start_backup('label');
> 2. perform file system backup with tar
> 3. SELECT pg_stop_backu
I'm making a base backup with 9.1rc by following 24.3.3 in manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label');
2. perform file system backup with tar
3. SELECT pg_stop_backup();
But when I was performing step 2, I got warning from tar c
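The three steps above can be sketched as a script (paths and the backup label are assumptions; this presumes archive_mode and archive_command are already configured):

```shell
# 1. Mark the start of the base backup
psql -c "SELECT pg_start_backup('nightly');"

# 2. Copy the data directory. tar may warn that files changed while
#    it was reading them; during a base backup that is expected.
tar -czf /backups/base.tar.gz -C /var/lib/pgsql data || true

# 3. Mark the end of the backup so the needed WAL range is recorded
psql -c "SELECT pg_stop_backup();"
```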
On Fri, Mar 18, 2011 at 4:55 PM, Stephen Rees wrote:
> Robert,
>
> Thank you for reply. I had the wrong end of the stick regarding pg_dump and
> hot-standby.
> I will take a look at omnipitr, as you suggest.
>
> Per your comment
>>
>> You have to stop replay while you are doing the dumps like this
Robert,
Thank you for reply. I had the wrong end of the stick regarding
pg_dump and hot-standby.
I will take a look at omnipitr, as you suggest.
Per your comment
You have to stop replay while you are doing the dumps like this
how do I stop, then resume, replay with both the master and hot-
On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees wrote:
> Using PostgreSQL 9.0.x
>
> I cannot use pg_dump to generate a backup of a database on a hot-standby
> server, because it is, by definition, read-only.
That really makes no sense :-) You can use pg_dump on a read-only
slave, but I think the i
Stephen Rees wrote:
> I cannot use pg_dump to generate a backup of a database on a hot-
> standby server, because it is, by definition, read-only.
That seems like a non sequitur -- I didn't think pg_dump wrote
anything to the source database. Have you actually tried? If so,
please show your
Using PostgreSQL 9.0.x
I cannot use pg_dump to generate a backup of a database on a hot-
standby server, because it is, by definition, read-only. However, it
seems that I can use COPY TO within a serializable transaction to
create a consistent set of data file(s). For example,
BEGIN TRANSA
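The COPY-based approach described above can be sketched as follows (database, table, and file names are placeholders). Because every COPY statement runs inside one SERIALIZABLE transaction, they all see a single consistent snapshot:

```shell
# Export several tables against one consistent snapshot
psql mydb <<'SQL'
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
COPY table_a TO '/tmp/table_a.copy';
COPY table_b TO '/tmp/table_b.copy';
COMMIT;
SQL
```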
On Mar 1, 2011, at 3:20 PM, A B wrote:
>
> But what would happen if you
> 1. run rsync
> 2. throw server through the window and buy new server
> 3. copy the rsynced data
> 4. start server
> now, what would happen?
> I guess the server would think: uh-oh, it has crashed, I'll try to fix it.
This
Hello.
In the docs of 8.4 I read that one way of doing filesystem backup of
PostgreSQL is to
1. run rsync
2. stop the server
3. run second rsync
4. start server
But what would happen if you
1. run rsync
2. throw server through the window and buy new server
3. copy the rsynced data
4. start server
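The documented four-step procedure in the first list can be sketched like this (directory paths are assumptions):

```shell
# Pass 1 while the server is running: copies the bulk of the data,
# but the result alone is not yet consistent
rsync -a /var/lib/pgsql/data/ /backup/data/

# Stop the server so the files stop changing
pg_ctl stop -D /var/lib/pgsql/data -m fast

# Pass 2 is quick: only the files that changed since pass 1
rsync -a --delete /var/lib/pgsql/data/ /backup/data/

pg_ctl start -D /var/lib/pgsql/data
```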
Manasi Save, 29.11.2010 08:24:
I am new to postgresql. I have pgadmin installed on my windows
machine locally using which i m connecting to the client server and
accessing the database. I want to take the backup of client database.
but it seems hard the database is very large. and when i select a
Hi All,
I am new to PostgreSQL. I have pgAdmin installed locally on my Windows
machine, which I use to connect to the client server and access the
database. I want to take a backup of the client database, but it seems
hard; the database is very large, and when I select any database and hit b
Sorry for the delay.
On Thu, Mar 4, 2010 at 3:47 PM, Mikko Partio wrote:
> Hi
> I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
> streaming replication) is working great. I tried to take a filesystem backup
> from a hot standby, but I guess that is not possible since exec
Hi Scott,
I'm really new at this, so be patient :)
I checked in Postgres and:
radius-# \l
        List of databases
   Name    |  Owner   | Encoding
-----------+----------+----------
 postgres  | postgres | UTF8
 radius    | postgres | UTF8
 root      | postgres | UTF8
 template0 | postgres | UTF8
Lots there, let's break it down individually:
On Mon, Mar 22, 2010 at 6:38 AM, blast wrote:
>
> Hi all,
>
> I need to backup and restore a DB.
> In this particular case the data in the database is not important (strange
> hum...) but only the schema to put new data...
>
> I m thinking use the p
Hi all,
I need to backup and restore a DB.
In this particular case the data in the database is not important (strange
hum...) but only the schema to put new data...
I'm thinking of using pg_dump:
pg_dump -c -C -s schema > file.out
With this i have in file.out the schema, correct?
So, to restor
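To confirm the reasoning in the message above: yes, with -s the dump contains only the schema (DDL), no rows. A sketch of the round trip (database names are placeholders):

```shell
# Schema only (-s), with DROP (-c) and CREATE DATABASE (-C) commands
pg_dump -c -C -s mydb > file.out

# Restoring: because of -C the dump recreates the database itself,
# so connect to a maintenance database such as postgres
psql -d postgres -f file.out
```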
Hi
I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
streaming replication) is working great. I tried to take a filesystem backup
from a hot standby, but I guess that is not possible since executing "SELECT
pg_start_backup('ss')" returns an error? Or can I just tar $PGDATA a
Hello,
I am curious if there is a way to know which databases have changed (any write
transaction) since a given timestamp? I use pg_dump nightly to backup several
databases within the cluster, but I would like to only pg_dump those databases
which have actually changed during the day. Is th
Kasia Tuszynska writes:
> The problem arises, if data in lets say the adam schema is dependent on
> tables in the public schema, since the data in the public schema does not
> exist yet, being created later.
That's not supposed to happen. Are you possibly running an early 8.3
release? pg_dump
Hello Postgres Gurus,
I have a restore problem.
If you do the backup as a text file:
pg_dump.exe -i -h machine -p 5432 -U postgres -F p -v -f
"C:\dbname_text.dump.backup" dbname
You can see the order in which the restore will happen. And the restore seems
to be happening in the following order
Hi all,
Sorry, but I found a little bug in the command line...
To solve it, just replace "$i" with "$pid":
for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do
pg_ctl kill TERM $pid; done
Sorry... :-)
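As an aside, on releases from 8.4 onward the same effect can be had from SQL alone, without pg_ctl, as a sketch (note the pg_stat_activity column is procpid before 9.2 and pid from 9.2 on):

```shell
# Ask the server to terminate every backend except our own
psql -A -t -c "SELECT pg_terminate_backend(procpid)
               FROM pg_stat_activity
               WHERE procpid <> pg_backend_pid();"
```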
Fabrízio de Royes Mello wrote:
Hello Mark,
I don't know a command in post
Hello Mark,
I don't know a command in postgres to do that, but if you're running
postgres on Linux try it on the command line:
for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do
pg_ctl kill TERM $i; done
Best regards.
Ps: Sorry, but my english isn't so good.
--
Fabrízi
Hi,
you can use
pg_ctl stop -m fast
pg_ctl start
which kills clients and aborts current transactions.
And if you have multiple clusters, you can use the -D option to
specify the data directory.
-manu
On 15 Oct 2008 at 16:11, Mark Steben wrote:
We have a server that backs up and then recreates our production database on
a nightly basis.
In order to drop and recreate the database we would stop and restart the
server; this would effectively kick off any straggling users so we could
get our refresh done. No problem.
Now we have more than o
Got it. Thanks a bunch. Your last email put it all together.
Thanks,
-Original Message-
From: Evan Rempel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 16, 2008 10:22 AM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process
postgres does not use "time" to
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 9:46 PM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process
You can not mix WAL recovery/restore and pg_dump restores. To restore a
pg_dump, you
require a fully functioning postgresql server, which makes its own WAL
files
On Tue, Jul 15, 2008 at 11:08:27AM -0500, Campbell, Lance wrote:
> 1) On the primary server, all WAL files will be written to a backup
> directory. Once a night I will delete all of the WAL files on the primary
> server from the backup directory. I will create a full file SQL dump of the
PostgreSQL: 8.2
I am about to change my backup and failover procedure from dumping a full file
SQL dump of our data every so many minutes to using WAL files. Could someone
review the below strategy to identify if this strategy has any issues?
1) On the primary server, all WAL files will
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> What happens if you take an SQL snapshot of a database while
> creating WAL archives then later restore from that SQL snapshot and
> apply those WAL files?
What do you mean by "an SQL snapshot of a database"? WAL files only
come into play for
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> I have read this documentation.
> I wanted to check if there was some type of timestamp
My previous email omitted the URL I meant to paste:
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#RECOVERY-CONFIG-SETTINGS
Sent: Tuesday, July 15, 2008 12:24 PM
To: Campbell, Lance; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup and failover process
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> PostgreSQL: 8.2
> I am about to change my backup and failover procedure from dumping a
fu
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> PostgreSQL: 8.2
> I am about to change my backup and failover procedure from dumping a
full
> file SQL dump of our data every so many minutes
You're currently running pg_dump every so many minutes?
> to using WAL files.
Be sure you have rea
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> > I wonder what it's meaning by invalid arg?
>
> On my Fedora machine, "man write" explains EINVAL thusly:
>
>EINVAL fd is attached to an object which is unsuitable for
writing; or
> the file was opened with the O_DIRECT flag
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> I wonder what it's meaning by invalid arg?
On my Fedora machine, "man write" explains EINVAL thusly:
EINVAL fd is attached to an object which is unsuitable for writing; or
the file was opened with the O_DIRECT flag, and eith
> > > > Do we think this is a Postgres problem, a Linux problem or a
> > > > problem specific to my hardware setup? Was I wrong to think
> > > > that I should be able to stream directly from pg_dump to
> > > > /dev/st0? I would have thought it *should* work, but maybe
> > > > I was wrong in
On Tue, Feb 26, 2008 at 10:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
>
> > > Do we think this is a Postgres problem, a Linux problem or a problem
> > > specific to my hardware setup? Was I wrong to think that I should be
> > > able to stream directly from pg_dump to /dev/st0? I would have
> > Do we think this is a Postgres problem, a Linux problem or a problem
> > specific to my hardware setup? Was I wrong to think that I should be
> > able to stream directly from pg_dump to /dev/st0? I would have
> > thought it *should* work, but maybe I was wrong in the first place
> > wit
On Tue, Feb 26, 2008 at 9:38 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
>
> Do we think this is a Postgres problem, a Linux problem or a problem
> specific to my hardware setup? Was I wrong to think that I should be able to
> stream directly from pg_dump to /dev/st0? I would have thought it *s
Sorry Steve, I missed the "reply all" by 3 pixels :)
> > > tar -cf -
> > >
> > > the '-f -' says take input.
> >
> > That would be to write to stdout :) I can't figure out how to accept
> > from stdin :(
> >
> > -f is where the send the output, either a file, a device (such as
> > tape) or stdo
On Wed, 27 Feb 2008 13:48:38 +1100
"Phillip Smith" <[EMAIL PROTECTED]> wrote:
> > Coming in the middle of this thread, so slap me if I'm off base here.
> > tar will accept standard in as:
> >
> > tar -cf -
> >
> > the '-f -' says take input.
>
> That would be to write to stdout :) I can't figu
>> What would the correct syntax be for that - I can't figure out how to
>> make tar accept stdin:
> I don't think it can. Instead, maybe dd with blocksize set equal to the
tape drive's required blocksize would do? You'd have to check what options
your
> dd version has for padding out the last
> Coming in the middle of this thread, so slap me if I'm off base here.
> tar will accept standard in as:
>
> tar -cf -
>
> the '-f -' says take input.
That would be to write to stdout :) I can't figure out how to accept from
stdin :(
-f is where the send the output, either a file, a device (s
Tom Lane wrote:
"Phillip Smith" <[EMAIL PROTECTED]> writes:
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
A couple of possible things to try; pg_dump to a text file and try
cat'ting that to the tape drive, or pipe it through tar and then to the
tape.
What would t
"Phillip Smith" <[EMAIL PROTECTED]> writes:
> On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
> <[EMAIL PROTECTED]> wrote:
>> A couple of possible things to try; pg_dump to a text file and try
> cat'ting that to the tape drive, or pipe it through tar and then to the
> tape.
> What would the correct
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
>> PostgreSQL 8.2.4
>> RedHat ES4
>>
>> I have a nightly cron job that is (supposed) to dump a specific
>> database to magnetic tape:
>> /usr/local/bin/pg_dump dbname > /dev/st0
>>
>> This runs, and doesn't throw
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
> PostgreSQL 8.2.4
> RedHat ES4
>
> I have a nightly cron job that is (supposed) to dump a specific database to
> magnetic tape:
> /usr/local/bin/pg_dump dbname > /dev/st0
>
> This runs, and doesn't throw any erro
PostgreSQL 8.2.4
RedHat ES4
I have a nightly cron job that is (supposed) to dump a specific database to
magnetic tape:
/usr/local/bin/pg_dump dbname > /dev/st0
This runs, and doesn't throw any errors, but when I try to restore it fails
because the tape is incomplete:
[EMAIL PROTEC
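The dd suggestion made later in this thread can be sketched like this; the padding behavior is easy to verify against a regular file first (the device name /dev/st0 comes from the post, but the block size is an assumption — check your drive's documentation):

```shell
# conv=sync pads the final partial record out to a full block, which
# fixed-block tape devices need. Demonstrate with a regular file:
printf 'hello' | dd of=/tmp/padded.out bs=512 conv=sync 2>/dev/null
wc -c < /tmp/padded.out    # 512: the 5 bytes were padded to one block

# The same idea applied to the tape job from the original post:
# pg_dump dbname | dd of=/dev/st0 bs=10240 conv=sync
```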
On Thu, 2008-01-31 at 10:02 -0500, Chander Ganesan wrote:
> Magnus Hagander wrote:
> > On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
> >
> >> On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
> >>
> >>> That sentence has no place in any discussion about
On Thu, 2008-01-31 at 12:09 -0300, Alvaro Herrera wrote:
> > Simon Riggs wrote:
>
> >> As far as I am concerned, if any Postgres user loses data then we're all
> >> responsible.
>
> Remember, our license says this software is given without any warranty
> whatsoever, implicit or explicit, written
> Simon Riggs wrote:
>> As far as I am concerned, if any Postgres user loses data then we're all
>> responsible.
Remember, our license says this software is given without any warranty
whatsoever, implicit or explicit, written or implied, given or sold,
alive or deceased.
--
Alvaro Herrera
Magnus Hagander wrote:
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
That sentence has no place in any discussion about "backup" because the
risk is not just a few transactions, it is a corrupt and in
Simon Riggs wrote:
On Thu, 2008-01-31 at 07:21 -0500, Chander Ganesan wrote:
If you don't mind if you lose some transactions
That sentence has no place in any discussion about "backup" because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which
On Thu, Jan 31, 2008 at 03:34:05PM +0100, Martijn van Oosterhout wrote:
> On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
> > That sentence has no place in any discussion about "backup" because the
> > risk is not just a few transactions, it is a corrupt and inconsistent
> > database f
On Thu, Jan 31, 2008 at 01:28:48PM +, Simon Riggs wrote:
> That sentence has no place in any discussion about "backup" because the
> risk is not just a few transactions, it is a corrupt and inconsistent
> database from which both old and new data would be inaccessible.
Hmm? I thought the whole
On Thu, 2008-01-31 at 07:21 -0500, Chander Ganesan wrote:
> If you don't mind if you lose some transactions
That sentence has no place in any discussion about "backup" because the
risk is not just a few transactions, it is a corrupt and inconsistent
database from which both old and new data would
Simon Riggs wrote:
On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
We have a center in Europe who has just started to use PostgreSQL and was
asking me if there are any Symantec product or other products that backup
this type of database.
It doesn't appear to.
The design
up.html
>
> - Vishal
> ________
> Subject: [ADMIN] Backup
> Date: Thu, 24 Jan 2008 14:08:26 -0500
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]; pgsql-admin@postgresql.org
> CC: [EMAIL PROTECTED]
PostgreSQL has its own inbuilt mechanism for backing up the database. you can
refer to the postgres manual online for more information.
http://www.postgresql.org/docs/8.2/interactive/backup.html
- Vishal
Subject: [ADMIN] Backup
Date: Thu, 24 Jan 2008 14:08:26 -0500
From: [EMAIL PROTECTED]
Thank you very much Scott..
I'll keep you updated on my progress.
Thanks again.
Nuwan.
Scott Marlowe <[EMAIL PROTECTED]> wrote: On Jan 26, 2008 3:06 PM, NUWAN
LIYANAGE wrote:
> Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
> singl file is going to be pretty large. I
On Jan 26, 2008 3:06 PM, NUWAN LIYANAGE <[EMAIL PROTECTED]> wrote:
> Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
> singl file is going to be pretty large. I guess I don't have to worry too
> much about that.
> But my question to you sir is, If I want to create the deve
Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
single file is going to be pretty large. I guess I don't have to worry too
much about that.
But my question to you sir is, If I want to create the development db using
this pg dump file, how do I actually edit create tabl
On Jan 25, 2008 1:55 PM, NUWAN LIYANAGE <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have a 450gb production database, and was trying to create a development
> database using a bkp.
> I was following the instructions on postgres documentation, and came across
> the paragraph that says...
> " If you
Hello,
I have a 450gb production database, and was trying to create a
development database using a bkp.
I was following the instructions on postgres documentation, and came across
the paragraph that says...
" If you are using tablespaces that do not reside underneath this (data)
On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
> > We have a center in Europe who has just started to use PostgreSQL and was
> > asking me if there are any Symantec product or other products that backup
> > this type of database.
>
> It doesn't appear to.
The design of the PITR system a
On Jan 24, 2008 1:08 PM, Dominic Carlucci <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
> We have a center in Europe who has just started to use PostgreSQL and
> was asking me if there are any Symantec product or other products that
> backup this type of database. We presently run VERITAS ver9.1 on
>
> We have a center in Europe who has just started to use PostgreSQL and was
> asking me if there are any Symantec product or other products that backup
> this type of database.
It doesn't appear to. I've just been through the whole rigmarole of
BackupExec for some Windows Servers, and I couldn't f
Hi,
We have a center in Europe who has just started to use PostgreSQL
and was asking me if there are any Symantec product or other products
that backup this type of database. We presently run VERITAS ver9.1 on
windows2003 server. What is being used by users out there now. We are
thinking of
If you don't start archiving log files, your first backup won't be valid
-- well I suppose you could do it the hard way and start the backup and
the log archiving at exactly the same time (can't picture how to time
that), but the point is you need the current log when you kick off the
backup.
On Jan 16, 2008 4:56 PM, Tom Davies <[EMAIL PROTECTED]> wrote:
>
> On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
> > The important thing is to start archiving the WAL files *prior* to
> > the first OS backup, or you will end up with an unusable data base.
>
> Why does the recovery need WAL files fr
Tom Davies <[EMAIL PROTECTED]> writes:
> On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
>> The important thing is to start archiving the WAL files *prior* to
>> the first OS backup, or you will end up with an unusable data base.
> Why does the recovery need WAL files from before the backup?
It d
On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to
the first OS backup, or you will end up with an unusable data base.
Why does the recovery need WAL files from before the backup?
Tom
On Wed, 16 Jan 2008 10:19:12 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> Steve Holdoway <[EMAIL PROTECTED]> writes:
> > You can be absolutely certain that the tar backup of a file that's changed
> > is a complete waste of time. Because it changed while you were copying it.
>
> That is, no doubt
Hi, Brian
We have been doing PITR backups since the feature first became available
in postgresql. We first used tar, then, due to the dreadful warning
being emitted by tar (which made us doubt that it was actually archiving
that particular file) we decided to try CPIO, which actually emits mu