-Original Message-
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:17 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
>
> *how* are the backups being generated?
>
> On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq wrote:
> Does it work well with version 9.1.3?
It might work better in 9.1.6:
http://www.postgresql.org/support/versioning/
And it would probably pay to keep up-to-date as new minor releases
become available.
-Kevin
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:15 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
>
> I don't know, I've never tried. If I had to guess, I'd say no, as that
> versi
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:17 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
*how* are the backups being generated?
On Thu, Nov 29, 2012 at 5:16 PM, Sabry Sadiq wrote:
> Currently backups are performed on the master database
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:15 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
I don't
> -Original Message-
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:13 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:13 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
There aren't any, assuming that all of the servers are using the same
postgresql.conf. I'm referring
> Kind Regards,
> Sabry
>
>
>
>
> Sabry Sadiq
> Systems Administrator
> -Original Message-
> From: Lonni J Friedman [mailto:netll...@gmail.com]
> Sent: Friday, 30 November 2012 12:11 PM
> To: Sabry Sadiq
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Backup
Yes. Works fine in 9.2.x.
On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq wrote:
> Hi All,
>
>
>
> Has anyone been successful in offloading the database backup from the
> production database to the standby database?
>
>
>
> Kind Regards,
>
> Sabry
>
>
>
-Original Message-
From: Lonni J Friedman [mailto:netll...@gmail.com]
Sent: Friday, 30 November 2012 12:11 PM
To: Sabry Sadiq
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup
Yes. Works fine in 9.2.x.
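A minimal sketch of what that looks like, with the standby host name, paths and
database name being placeholders:
# point pg_dump at the standby instead of the master
pg_dump -h standby.example.com -U postgres -Fc mydb > /backups/mydb_$(date +%Y%m%d).dump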
On Thu, Nov 29, 2012 at 4:59 PM, Sabry Sadiq wrote:
On 09/21/2012 01:01 AM, Kasia Tuszynska wrote:
Hi Everybody,
I am experimenting with backups and restores….
I am running into something curious and would appreciate any suggestions.
Backing up from:
Postgres 8.3.0
Windows 2003 sp1 server (32bit)
-Took a compressed binary backup of a single
Hi,
I would recommend this:
http://www.postgresql.org/docs/9.1/static/backup.html
Very straightforward and easy reading ...
-fred
On Mon, Jun 18, 2012 at 10:50 AM, lohita nama wrote:
> Hi
>
> I am working as a SQL DBA; recently our team had the opportunity to work on
> postgres databases and I ha
lohita nama wrote:
> I am working as a SQL DBA; recently our team had the opportunity to work
> on Postgres databases. I have experience with SQL Server on the
> Windows platform, and now our company has Postgres databases on the
> Solaris platform.
>
> Can you please suggest how to take a backup of Postgr
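A minimal sketch of the two standard approaches, assuming shell access to the
Solaris box (paths and names are placeholders):
# dump one database in custom format (restore with pg_restore)
pg_dump -Fc -f /backups/mydb.dump mydb
# or dump every database in the cluster as plain SQL (restore with psql)
pg_dumpall > /backups/cluster.sql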
I mean Bucardo (even though there are more tools like this one) just
for the replication stuff, and the hot database backup only for the
backup stuff. Only one bounce is needed to turn the archiving on; you
do not need to shut anything down during the backup.
A.A
On 04/25/2012 10:23
On 04/25/2012 09:11 AM, Scott Whitney wrote:
...
My current setup uses a single PG 8.x...
My _new_ setup will instead be 2 PG 9.x ...
It is best to specify the actual major version. While 8.0.x or 9.1.x is
sufficient to discuss features and capabilities, 9.1 is a different
major release than 9.0, n
On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:
> I believe, then, that when I restart server #3 (the standby who is
> replicating), he'll say "oh, geez, I was down, let me catch up on all that
> crap that happened while I was out of the loop," he'll replay the WAL files
> that were written
Both good points, thanks, although I suspect that a direct network copy of the
pg_data directory will be faster than a tar/untar event.
- Original Message -
> Hi Scott,
> Why not replicate this master to the other location(s) using
> other
> methods like Bucardo? You can pick the
Hi Scott,
Why not replicate this master to the other location(s) using other
methods like Bucardo? You can pick the tables you really want to get
replicated there.
For the backup, turn to a hot backup (tar $PGDATA) + archiving: easier,
faster and more efficient than a logical copy with p
On Tue, 2011-12-27 at 13:01 +0530, nagaraj L M wrote:
> Hi sir,
> Can you tell me how to take a backup of an individual schema in
> PostgreSQL?
>
Use the -n command line option
(http://www.postgresql.org/docs/9.1/interactive/app-pgdump.html).
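For example (schema, database and file names are placeholders):
# dump only the objects in schema "accounting" from database "mydb"
pg_dump -n accounting -Fc -f accounting.dump mydb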
--
Guillaume
http://blog.guillaume.lelarge.i
Karuna Karpe wrote:
> I want to get a cold backup of the database cluster, but in the database
> cluster there are four non-built-in tablespaces. So when I get the
> cold backup of the database cluster, restore it on another machine and
> check the tablespaces, I find that no non-built-in
> tablespace is a
OK, thank you.
On 2011-09-11 at 01:30:48, Guillaume Lelarge wrote:
On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:
I'm making a base backup with 9.1rc by following 24.3.3 in manual:
http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
1. SELECT pg_start_backup('label');
2. perform fi
On Sun, 2011-09-11 at 01:19 +0800, Rural Hunter wrote:
> I'm making a base backup with 9.1rc by following 24.3.3 in manual:
> http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
> 1. SELECT pg_start_backup('label');
> 2. perform file system backup with tar
> 3. SELECT pg_stop_backu
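As a rough shell sketch of those same steps (paths are placeholders;
archive_command must already be set up, tar warning that files changed under
it is expected, and on 9.1 pg_basebackup can do this in one step):
psql -c "SELECT pg_start_backup('label');"
tar -czf /backups/base_$(date +%Y%m%d).tar.gz "$PGDATA"
psql -c "SELECT pg_stop_backup();"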
On Fri, Mar 18, 2011 at 4:55 PM, Stephen Rees wrote:
> Robert,
>
> Thank you for your reply. I had the wrong end of the stick regarding pg_dump and
> hot-standby.
> I will take a look at omnipitr, as you suggest.
>
> Per your comment
>>
>> You have to stop replay while you are doing the dumps like this
Robert,
Thank you for your reply. I had the wrong end of the stick regarding
pg_dump and hot-standby.
I will take a look at omnipitr, as you suggest.
Per your comment
You have to stop replay while you are doing the dumps like this
how do I stop, then resume, replay with both the master and hot-
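On 9.0 and later the standby side can be paused and resumed with built-in
functions; a rough sketch run against the standby (names and paths are
placeholders):
psql -c "SELECT pg_xlog_replay_pause();"     # stop applying WAL on the standby
pg_dump -Fc -f /backups/mydb.dump mydb       # take the consistent dump
psql -c "SELECT pg_xlog_replay_resume();"    # let the standby catch up again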
On Tue, Mar 15, 2011 at 5:50 PM, Stephen Rees wrote:
> Using PostgreSQL 9.0.x
>
> I cannot use pg_dump to generate a backup of a database on a hot-standby
> server, because it is, by definition, read-only.
That really makes no sense :-) You can use pg_dump on a read-only
slave, but I think the i
Stephen Rees wrote:
> I cannot use pg_dump to generate a backup of a database on a hot-
> standby server, because it is, by definition, read-only.
That seems like a non sequitur -- I didn't think pg_dump wrote
anything to the source database. Have you actually tried? If so,
please show your
On Mar 1, 2011, at 3:20 PM, A B wrote:
>
> But what would happen if you
> 1. run rsync
> 2. throw server through the window and buy new server
> 3. copy the rsynced data
> 4. start server
> now, what would happen?
> I guess the server would think: uh-oh, it has crashed, I'll try to fix it.
This
Manasi Save, 29.11.2010 08:24:
I am new to PostgreSQL. I have pgAdmin installed locally on my Windows
machine, which I use to connect to the client server and
access the database. I want to take a backup of the client database,
but it seems hard: the database is very large, and when I select a
Sorry for the delay.
On Thu, Mar 4, 2010 at 3:47 PM, Mikko Partio wrote:
> Hi
> I'm currently testing Pg 9.0.0 alpha 4 and the hot standby feature (with
> streaming replication) is working great. I tried to take a filesystem backup
> from a hot standby, but I guess that is not possible since exec
Hi Scott,
I'm really new at this so be patient :)
I checked in postgres and:
radius-# \l
        List of databases
   Name    |  Owner   | Encoding
-----------+----------+----------
 postgres  | postgres | UTF8
 radius    | postgres | UTF8
 root      | postgres | UTF8
 template0 | postgres | UTF8
Lots there, let's break it down individually:
On Mon, Mar 22, 2010 at 6:38 AM, blast wrote:
>
> Hi all,
>
> I need to back up and restore a DB.
> In this particular case the data in the database is not important (strange,
> huh...); only the schema is needed, to put new data in...
>
> I'm thinking of using the p
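If only the schema matters, something along these lines (names are placeholders):
pg_dump --schema-only -f schema.sql mydb     # DDL only, no data
psql -d newdb -f schema.sql                  # recreate the objects in a fresh database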
Kasia Tuszynska writes:
> The problem arises, if data in lets say the adam schema is dependent on
> tables in the public schema, since the data in the public schema does not
> exist yet, being created later.
That's not supposed to happen. Are you possibly running an early 8.3
release? pg_dump
Hi all,
Sorry, but I found a little bug in the command line...
To fix it, just replace "$i" with "$pid":
for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do
pg_ctl kill TERM $pid; done
Sorry... :-)
Fabrízio de Royes Mello escreveu:
Hello Mark,
I don't know a command in post
Hello Mark,
I don't know a command in postgres to do that, but if you're running
postgres on Linux try it on the command line:
for pid in `psql -A -t -c "select procpid from pg_stat_activity"`; do
pg_ctl kill TERM $i; done
Best regards.
PS: Sorry, but my English isn't so good.
--
Fabrízi
Hi,
you can use
pg_ctl stop -m fast
pg_ctl start
which kills clients and aborts current transactions.
And if you have multiple clusters you can use the -D option to
specify the data directory.
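For example, with the data directory path being a placeholder:
pg_ctl -D /var/lib/pgsql/data stop -m fast   # disconnects clients, aborts open transactions
pg_ctl -D /var/lib/pgsql/data start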
-manu
On 15 Oct 2008 at 16:11, Mark Steben wrote:
We have a server that backs up and then recreates o
Got it. Thanks a bunch. Your last email put it all together.
Thanks,
-Original Message-
From: Evan Rempel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 16, 2008 10:22 AM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process
postgres does not use "time" to
[mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 15, 2008 9:46 PM
To: Campbell, Lance
Subject: Re: [ADMIN] Backup and failover process
You can not mix WAL recovery/restore and pg_dump restores. To restore a
pg_dump, you
require a fully functioning postgresql server, which makes its own WAL
files
On Tue, Jul 15, 2008 at 11:08:27AM -0500, Campbell, Lance wrote:
> 1) On the primary server, all WAL files will be written to a backup
> directory. Once a night I will delete all of the WAL files on the primary
> server from the backup directory. I will create a full file SQL dump of the
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> What happens if you take an SQL snapshot of a database while
> creating WAL archives then later restore from that SQL snapshot and
> apply those WAL files?
What do you mean by "an SQL snapshot of a database"? WAL files only
come into play for
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> I have read this documentation.
> I wanted to check if there was some type of timestamp
My previous email omitted the URL I meant to paste:
http://www.postgresql.org/docs/8.2/interactive/continuous-archiving.html#RECOVERY-CONFIG-SETTINGS
Sent: Tuesday, July 15, 2008 12:24 PM
To: Campbell, Lance; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup and failover process
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> PostgreSQL: 8.2
> I am about to change my backup and failover procedure from dumping a
fu
>>> "Campbell, Lance" <[EMAIL PROTECTED]> wrote:
> PostgreSQL: 8.2
> I am about to change my backup and failover procedure from dumping a
full
> file SQL dump of our data every so many minutes
You're currently running pg_dump every so many minutes?
> to using WAL files.
Be sure you have rea
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> > I wonder what it's meaning by invalid arg?
>
> On my Fedora machine, "man write" explains EINVAL thusly:
>
>EINVAL fd is attached to an object which is unsuitable for
writing; or
> the file was opened with the O_DIRECT flag
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> I wonder what it's meaning by invalid arg?
On my Fedora machine, "man write" explains EINVAL thusly:
EINVAL fd is attached to an object which is unsuitable for writing; or
the file was opened with the O_DIRECT flag, and eith
> > > > Do we think this is a Postgres problem, a Linux problem or a
> > > > problem specific to my hardware setup? Was I wrong to think
> > > > that I should be able to stream directly from pg_dump to
> > > > /dev/st0? I would have thought it *should* work, but maybe
> > > > I was wrong in
On Tue, Feb 26, 2008 at 10:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
>
> > > Do we think this is a Postgres problem, a Linux problem or a problem
> > > specific to my hardware setup? Was I wrong to think that I should be
> > > able to stream directly from pg_dump to /dev/st0? I would have
> > Do we think this is a Postgres problem, a Linux problem or a problem
> > specific to my hardware setup? Was I wrong to think that I should be
> > able to stream directly from pg_dump to /dev/st0? I would have
> > thought it *should* work, but maybe I was wrong in the first place
> > wit
On Tue, Feb 26, 2008 at 9:38 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
>
> Do we think this is a Postgres problem, a Linux problem or a problem
> specific to my hardware setup? Was I wrong to think that I should be able to
> stream directly from pg_dump to /dev/st0? I would have thought it *s
Sorry Steve, I missed the "reply all" by 3 pixels :)
> > > tar -cf -
> > >
> > > the '-f -' says take input.
> >
> > That would be to write to stdout :) I can't figure out how to accept
> > from stdin :(
> >
> > -f is where to send the output, either a file, a device (such as
> > tape) or stdo
On Wed, 27 Feb 2008 13:48:38 +1100
"Phillip Smith" <[EMAIL PROTECTED]> wrote:
> > Coming in the middle of this thread, so slap me if I'm off base here.
> > tar will accept standard in as:
> >
> > tar -cf -
> >
> > the '-f -' says take input.
>
> That would be to write to stdout :) I can't figu
>> What would the correct syntax be for that - I can't figure out how to
>> make tar accept stdin:
> I don't think it can. Instead, maybe dd with blocksize set equal to the
tape drive's required blocksize would do? You'd have to check what options
your
> dd version has for padding out the last
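Something along these lines might be worth trying (the 64k blocksize is only a
guess; match it to whatever the drive expects):
# re-block the dump into 64k records for the tape; the final record may be short
pg_dump dbname | dd of=/dev/st0 obs=64k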
> Coming in the middle of this thread, so slap me if I'm off base here.
> tar will accept standard in as:
>
> tar -cf -
>
> the '-f -' says take input.
That would be to write to stdout :) I can't figure out how to accept from
stdin :(
-f is where to send the output, either a file, a device (s
Tom Lane wrote:
"Phillip Smith" <[EMAIL PROTECTED]> writes:
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
A couple of possible things to try; pg_dump to a text file and try
cat'ting that to the tape drive, or pipe it through tar and then to the
tape.
What would t
"Phillip Smith" <[EMAIL PROTECTED]> writes:
> On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
> <[EMAIL PROTECTED]> wrote:
>> A couple of possible things to try; pg_dump to a text file and try
> cat'ting that to the tape drive, or pipe it through tar and then to the
> tape.
> What would the correct
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
>> PostgreSQL 8.2.4
>> RedHat ES4
>>
>> I have a nightly cron job that is (supposed) to dump a specific
>> database to magnetic tape:
>> /usr/local/bin/pg_dump dbname > /dev/st0
>>
>> This runs, and doesn't throw
On Sun, Feb 24, 2008 at 9:20 PM, Phillip Smith
<[EMAIL PROTECTED]> wrote:
> PostgreSQL 8.2.4
> RedHat ES4
>
> I have a nightly cron job that is (supposed) to dump a specific database to
> magnetic tape:
> /usr/local/bin/pg_dump dbname > /dev/st0
>
> This runs, and doesn't throw any erro
AFAIK Dominic needs a plug-in certified by Symantec, and that is not the case.
As you may have read prior to this mail, the common way is to pg_dump
against a file, picking up that file later with BackupExec as a
regular file.
We are currently (www.globant.com) using it that way, no problems at a
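In other words, a nightly cron job along these lines, with BackupExec then
picking up the dump directory as ordinary files (paths and names are
placeholders):
pg_dump -Fc -f /backups/mydb_$(date +%Y%m%d).dump mydb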
PostgreSQL has its own built-in mechanism for backing up the database. You can
refer to the Postgres manual online for more information.
http://www.postgresql.org/docs/8.2/interactive/backup.html
- Vishal
Subject: [ADMIN] Backup
Date: Thu, 24 Jan 2008 14:08:26 -0500
From: [EMAIL PROTECTED]
Thank you very much Scott..
I'll keep you updated on my progress.
Thanks again.
Nuwan.
Scott Marlowe <[EMAIL PROTECTED]> wrote: On Jan 26, 2008 3:06 PM, NUWAN
LIYANAGE wrote:
> Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
> single file is going to be pretty large. I
On Jan 26, 2008 3:06 PM, NUWAN LIYANAGE <[EMAIL PROTECTED]> wrote:
> Yes, I was thinking of doing a pg_dumpall, but my only worry was that the
> single file is going to be pretty large. I guess I don't have to worry too
> much about that.
> But my question to you sir is, If I want to create the deve
Yes, I was thinking of doing a pg_dumpall, but my only worry was that the single
file is going to be pretty large. I guess I don't have to worry too much about
that.
But my question to you, sir, is: if I want to create the development db using
this dump file, how do I actually edit create tabl
On Jan 25, 2008 1:55 PM, NUWAN LIYANAGE <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have a 450gb production database, and was trying to create a development
> database using a bkp.
> I was following the instructions on postgres documentation, and came across
> the paragraph that says...
> " If you
On Fri, 2008-01-25 at 11:34 +1100, Phillip Smith wrote:
> > We have a center in Europe who has just started to use PostgreSQL and was
> > asking me if there are any Symantec product or other products that backup
> > this type of database.
>
> It doesn't appear to.
The design of the PITR system a
On Jan 24, 2008 1:08 PM, Dominic Carlucci <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
> We have a center in Europe who has just started to use PostgreSQL and
> was asking me if there are any Symantec product or other products that
> backup this type of database. We presently run VERITAS ver9.1 on
>
> We have a center in Europe who has just started to use PostgreSQL and was
> asking me if there are any Symantec product or other products that backup
> this type of database.
It doesn't appear to. I've just been through the whole rigmarole of
BackupExec for some Windows Servers, and I couldn't f
If you don't start archiving log files, your first backup won't be valid
-- well I suppose you could do it the hard way and start the backup and
the log archiving at exactly the same time (can't picture how to time
that), but the point is you need the current log when you kick off the
backup.
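That is, the archiving settings have to be in place (and the server restarted)
before pg_start_backup() is ever called; a rough postgresql.conf sketch, with
the archive directory being a placeholder:
archive_mode = on                         # 8.3 and later
archive_command = 'cp %p /archive/%f'     # %p = path of the WAL file, %f = its name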
On Jan 16, 2008 4:56 PM, Tom Davies <[EMAIL PROTECTED]> wrote:
>
> On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
> > The important thing is to start archiving the WAL files *prior* to
> > the first OS backup, or you will end up with an unusable data base.
>
> Why does the recovery need WAL files fr
Tom Davies <[EMAIL PROTECTED]> writes:
> On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
>> The important thing is to start archiving the WAL files *prior* to
>> the first OS backup, or you will end up with an unusable data base.
> Why does the recovery need WAL files from before the backup?
It d
On 17/01/2008, at 4:42 AM, Tom Arthurs wrote:
The important thing is to start archiving the WAL files *prior* to
the first OS backup, or you will end up with an unusable data base.
Why does the recovery need WAL files from before the backup?
Tom
On Wed, 16 Jan 2008 10:19:12 -0500
Tom Lane <[EMAIL PROTECTED]> wrote:
> Steve Holdoway <[EMAIL PROTECTED]> writes:
> > You can be absolutely certain that the tar backup of a file that's changed
> > is a complete waste of time. Because it changed while you were copying it.
>
> That is, no doubt
Hi, Brian
We have been doing PITR backups since the feature first became available
in postgresql. We first used tar, then, due to the dreadful warning
being emitted by tar (which made us doubt that it was actually archiving
that particular file) we decided to try CPIO, which actually emits mu
Brian Modra wrote:
Sorry to be hammering this point, but I want to be totally sure it's
OK, rather than attempting a recovery 5 months down the line and having it fail...
Are you absolutely certain that the tar backup of the file that
changed, is OK? (And that even if that file is huge, tar has manage
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Am Mittwoch, 16. Januar 2008 schrieb Tom Lane:
>> (Thinks for a bit...) Actually I guess there's one extra assumption in
>> there, which is that tar must issue its reads in multiples of our page
>> size. But that doesn't seem like much of a stretch.
Am Mittwoch, 16. Januar 2008 schrieb Tom Lane:
> (Thinks for a bit...) Actually I guess there's one extra assumption in
> there, which is that tar must issue its reads in multiples of our page
> size. But that doesn't seem like much of a stretch.
There is something about that here:
http://www.gn
Steve Holdoway <[EMAIL PROTECTED]> writes:
> You can be absolutely certain that the tar backup of a file that's changed is
> a complete waste of time. Because it changed while you were copying it.
That is, no doubt, the reasoning that prompted the gnu tar people to
make it do what it does, but i
Brian Modra wrote:
Sorry to be hammering this point, but I want to be totally sure it's OK,
rather than attempting a recovery 5 months down the line and having it fail...
Are you absolutely certain that the tar backup of the file that changed,
is OK?
Have you considered testing it?
Sincerely,
Josh
You can be absolutely certain that the tar backup of a file that's changed is a
complete waste of time. Because it changed while you were copying it.
Steve.
On Wed, 16 Jan 2008 10:24:00 +0200
"Brian Modra" <[EMAIL PROTECTED]> wrote:
> Sorry to be hammering this point, but I want to be totally s
Sorry to be hammering this point, but I want to be totally sure it's OK,
rather than attempting a recovery 5 months down the line and having it fail...
Are you absolutely certain that the tar backup of the file that changed, is
OK? (And that even if that file is huge, tar has managed to save the file as
it
Brian Modra wrote:
The documentation about WAL says that you can start a live backup, as
long as you use WAL backup also.
I'm concerned about the integrity of the tar file. Can someone help me
with that?
If you are using point in time recovery:
http://www.postgresql.org/docs/8.2/static/contin
Tom Lane <[EMAIL PROTECTED]> wrote:
> Tom Davies <[EMAIL PROTECTED]> writes:
> > On 16/01/2008, at 2:41 AM, Tom Lane wrote:
> >> You definitely should not expect to convert the names to integers.
>
> > Presumably you can convert them to 96 bit integers? i.e. they are
> > always strings of hex c
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Brian Modra wrote:
>> If tar reports that a file was modified while it was being archived,
>> does that mean that the file was archived correctly, or is it corrupted
>> in the archive?
> You can not use tar to backup postgresql if it is running.
Y
The documentation about WAL says that you can start a live backup, as long
as you use WAL backup also.
I'm concerned about the integrity of the tar file. Can someone help me with
that?
On 16/01/2008, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>
> Brian Modra wrote:
> > Hi,
> > If tar reports that
Brian Modra wrote:
Hi,
If tar reports that a file was modified while it was being archived,
does that mean that the file was archived correctly, or is it corrupted
in the archive?
Does tar take a snapshot of the file so that even if it is modified, at
least the archive is safe?
You can not u
Tom Davies <[EMAIL PROTECTED]> writes:
> On 16/01/2008, at 2:41 AM, Tom Lane wrote:
>> You definitely should not expect to convert the names to integers.
> Presumably you can convert them to 96 bit integers? i.e. they are
> always strings of hex characters?
You could, but in most scripting lang
On 16/01/2008, at 2:41 AM, Tom Lane wrote:
You definitely should not expect to convert the names to integers.
Presumably you can convert them to 96 bit integers? i.e. they are
always strings of hex characters?
Tom
"Sebastian Reitenbach" <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> wrote:
>> You definitely should not expect to convert the names to integers.
> Then I do not understand why only the names of the first and the last WAL
> file are stored in the backup history file.
You can compar
Hi,
Tom Lane <[EMAIL PROTECTED]> wrote:
> "Sebastian Reitenbach" <[EMAIL PROTECTED]> writes:
> > The WAL files have names like this:
> > 00010001003C
>
> > I wonder what the meaning of the two 1s in the filename is?
>
> The first one (the first 8 hex digits actually) are the curren
"Sebastian Reitenbach" <[EMAIL PROTECTED]> writes:
> The WAL files have names like this:
> 00010001003C
> I wonder what the meaning of the two 1s in the filename is?
The first one (the first 8 hex digits, actually) is the current
"timeline" number. The second one isn't very interes
On Thu, Nov 22, 2007 at 02:59:33PM +0100, Marco Bizzarri wrote:
> Andrew, can you confirm the previous statement? I'm checking on a Debian
> Linux,
> and it seems to be a Vixie Cron, and that feature is described in the man
> page...
If the feature's in your man page, then it works on your system
On Nov 22, 2007 2:53 PM, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 22, 2007 at 02:28:08PM +0100, Marco Bizzarri wrote:
> >
> > why don't you add a "MAILTO=" at the start of your
> > crontab file, so that you can receive a report of the problem?
>
> Note: check that your cron accepts
On Thu, Nov 22, 2007 at 02:28:08PM +0100, Marco Bizzarri wrote:
>
> why don't you add a "MAILTO=" at the start of your
> crontab file, so that you can receive a report of the problem?
Note: check that your cron accepts such an addition. Many systems now use
Vixie's cron, which does accept that,
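For example, at the top of the crontab (the address and paths are placeholders):
MAILTO=dba@example.com
# any output the job produces, including pg_dump errors, now gets mailed there
0 2 * * * /usr/local/pgsql/bin/pg_dump mydb > /backups/mydb.sql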
On Nov 22, 2007 2:46 PM, Sorin N. Ciolofan <[EMAIL PROTECTED]> wrote:
> Hi Marco!
>
> Thank you for the advice.
>
> I got:
>
> /home/swkm/services/test/backup.sh: line 4: pg_dump: command not found
> updating: mydb_dump_22-11-07.out (stored 0%)
>
> which seems strange
>
>
Try putting the full path
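Cron runs with a minimal PATH, so a sketch of the fix, with the install
location being a guess for your system:
#!/bin/bash
# cron's PATH usually lacks the PostgreSQL bin directory, so be explicit
PATH=/usr/local/pgsql/bin:$PATH
time=$(date +%d-%m-%y)
cd /home/swkm/services/test
pg_dump mydb > mydb_dump_$time.out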
Sent: Thursday, November 22, 2007 3:28 PM
To: Sorin N. Ciolofan
Cc: pgsql-admin@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [ADMIN] backup of postgres scheduled with cron
On Nov 22, 2007 2:19 PM, Sorin N. Ciolofan <[EMAIL PROTECTED]> wrote:
> Hello all!
>
> I've a small bash
On Nov 22, 2007 2:19 PM, Sorin N. Ciolofan <[EMAIL PROTECTED]> wrote:
> Hello all!
>
> I've a small bash script backup.sh for creating dumps of my Postgres db:
>
> #!/bin/bash
> time=`date '+%d'-'%m'-'%y'`
> cd /home/swkm/services/test
> pg_dump mydb > mydb_dump_$time.o
raju; Vishal Arora; [EMAIL PROTECTED];
pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup Scheduler...
--- Jayakumar_Mukundaraju <[EMAIL PROTECTED]> wrote:
> Dear Visha/Ashish,
>
> My batch file contains the script below.
> @echo off
> "C:\Program FIles\PostgreSQ
[EMAIL PROTECTED]
Sent: Fri 7/27/2007 12:59 PM
To: Jayakumar_Mukundaraju
Subject: RE: [ADMIN] Backup Scheduler...
Subject: Re: [ADMIN] Backup Scheduler...
Date: Fri, 27 Jul 2007 12:03:36 +0530
From: [EMAIL PROTECTED]
To: [EMAIL P
[EMAIL PROTECTED]; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup Scheduler...
Are you sure you have included "D:\test\test1.bat" within double quotes and
selected the kind as batch in the job?
Can you check the event log and send it to me? What is the exact error the pgAgent
service is giv
--- Jayakumar_Mukundaraju <[EMAIL PROTECTED]> wrote:
> Dear Visha/Ashish,
>
> My batch file contains the script below.
> @echo off
> "C:\Program FIles\PostgreSQL\8.2\bin\pg_dump" -U postgres -f
> D:\test\test1.sql -F p -C -d -D
> postgres
> @echo on
>
The solution that I use is found on
7 4:32 PM
To: Jayakumar_Mukundaraju; [EMAIL PROTECTED]; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Backup Scheduler...
Are you sure you have included "D:\test\test1.bat" within double quotes and
selected the kind as batch in the job?
Can you check the event log and send it to me? What
To: "Jayakumar_Mukundaraju" <[EMAIL PROTECTED]>,
"Vishal Arora" <[EMAIL PROTECTED]>,
<[EMAIL PROTECTED]>,
Subject: Re: [ADMIN] Backup Scheduler...
Date: Thu, 26 Jul 2007 15:34:59 +0530
Dear Vishal/Ashish
Yes, I did the same as you said... I created