On 1/20/17 10:06 AM, Stephen Frost wrote:
> Ah, yes, I noticed that you passed over the file twice but wasn't quite
> sure what functools.partial() was doing and a quick read of the docs
> made me think you were doing seeking there.
>
> All the pages are the same size, so I'm surprised you
Actually, I think this may be the way Oracle Hot Backups work. It was my
impression that feature temporarily suspends writes into a specific
tablespace so you can take a snapshot of it. It has been a few years since
I've had to do Oracle work though and I could be mis-remembering. People
may be
Dear Jeff,
Thanks for the correction, and with this email we hope that myth is gone
forever :)
I will pass this along to inform others about this matter.
And we all agree here that using pg_basebackup is the best approach,
rather than doing it manually through pg_start_backup, right?
Thanks and
On 23 January 2017 at 17:12, Jeff Janes wrote:
>> Just to make sure anyone reading the mailing list archives isn't
>> confused, running pg_start_backup does *not* make PG stop writing to
>> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write
>> data into
Greetings,
* Torsten Zuehlsdorff (mailingli...@toco-domains.de) wrote:
> I just have around 11 TB but switched to ZFS based backups only. I'm
> using snapshots for that, which gives some flexibility. I can
> roll them back, I can just clone one and work with a full copy as a
> different cluster (and
Hello,
Increments in pgbackrest are done at the file level, which is not really
efficient. We have implemented parallelism, compression and page-level
increments (9.3+) in a barman fork [1], but unfortunately the guys from
2ndquadrant-it are in no hurry to work on it.
We're looking at page-level incremental backup
On 1/23/17 9:27 AM, Stephen Frost wrote:
> If you want my 2c on that, running with BLKSZ <> 8192 is playing with
> fire, or at least running with scissors.
I've never seen it myself, but I'm under the impression that it's not
unheard of for OLAP environments. Given how sensitive PG is to IO
On 1/22/17 11:32 AM, Stephen Frost wrote:
> The 1-second window concern is regarding the validity of a subsequent
> incremental backup.
BTW, there's a simpler scenario here:
Postgres touches file.
rsync notices file has different timestamp, starts copying.
Postgres touches file again.
If those 3
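The race described above comes from rsync's default "quick check", which trusts file size and mtime alone. A hedged sketch of the two modes (the paths are placeholders, not anything from the thread):

```shell
# Placeholder paths. With the default quick check, rsync skips any file
# whose size and mtime match the previous copy -- so a file modified a
# second time within the mtime granularity window can be silently
# missed by the next incremental pass:
rsync -a /var/lib/postgresql/data/ /backup/base/

# Comparing content checksums instead closes that window, at the cost
# of reading every file on both sides:
rsync -a --checksum /var/lib/postgresql/data/ /backup/base/
```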
On 1/20/17 9:06 AM, Stephen Frost wrote:
> All the pages are the same size, so I'm surprised you didn't consider
> just having a format along the lines of: magic+offset+page,
> magic+offset+page, magic+offset+page, etc...
Keep in mind that if you go that route you need to accommodate BLKSZ <>
8192.
> On 20 Jan 2017, at 19:59, Stephen Frost wrote:
>
>>> How are you testing your backups..? Do you have page-level checksums
>>> enabled on your database?
>>
>> Yep, we use checksums. We restore latest backup with recovery_target =
>> 'immediate' and do COPY
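The verification approach described here (restore the latest backup, recover only to consistency, then force-read every table so page checksums get exercised) might be sketched roughly like this. The scratch path, cluster layout, and connection details are all assumptions, not details from the thread:

```shell
# Hypothetical sketch: /scratch/restore holds the restored base backup,
# with recovery configured as recovery_target = 'immediate' so the
# server stops as soon as it reaches a consistent state.
pg_ctl -D /scratch/restore -w start

# COPY ... TO forces a full sequential read of each table, so any page
# with a bad checksum raises an error here instead of lurking unseen.
psql -At -c "SELECT quote_ident(schemaname) || '.' || quote_ident(tablename)
             FROM pg_tables
             WHERE schemaname NOT IN ('pg_catalog', 'information_schema')" |
while read -r tbl; do
    psql -c "COPY $tbl TO '/dev/null';" >/dev/null || echo "read failure in $tbl"
done
```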
Hi All,
Especially to Stephen Frost: thank you very much for your deep
explanation and elaboration!
Anyway, everything is clear now; I don't disagree with Stephen, and I am
the lucky one to be corrected by an expert like you.
In short, please use pg_basebackup for getting a snapshot, and don't forget
for the
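For reference, a minimal pg_basebackup invocation that sidesteps the pg_start_backup/rsync dance entirely. Host, user, and target directory are placeholders:

```shell
# Placeholder host/user/paths. -X stream pulls the WAL needed for
# consistency over a second connection, so the result is restorable on
# its own; -Ft -z writes compressed tar output; -P reports progress.
pg_basebackup -h db.example.com -U replicator \
    -D /backups/base_$(date +%Y%m%d) \
    -Ft -z -X stream -P
```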
Greetings,
* julyanto SUTANDANG (julya...@equnix.co.id) wrote:
> Thanks for elaborating this Information, this is new, so whatever it is the
> procedure is *Correct and Workable*.
Backups are extremely important, so I get quite concerned when people
provide incorrect information regarding them.
Hi Stephen,
> > When PostgreSQL in the mode of Start Backup, PostgreSQL only writes to the
> > XLOG, then you can safely rsync / copy the base data (snapshot) then later
> > you can have full copy of snapshot backup data.
>
> You are confusing two things.
>
> After calling pg_start_backup,
Hi Stephen,
Please elaborate more on what you are saying. What I am saying is based on
the Official Docs, Forum and our own tests. This is what we had to do to
save time, both backing up and restoring.
https://www.postgresql.org/docs/9.6/static/functions-admin.html
When PostgreSQL in the mode
CORRECTION:
"you might you pg_start_backup to tell the server not to write into the
DATADIR"
become
"you might *use* pg_start_backup to tell the server not to write into the
*BASEDIR*, actually server still writes but only to XLOGDIR "
Regards,
Julyanto SUTANDANG
Equnix Business
Hi Dinesh,
Best practice in doing full backup is using RSYNC, but before you can copy
the DATADIR, you might you pg_start_backup to tell the server not to write
into the DATADIR, because you are copying that data. After finished copy
all the data in DATADIR, you can ask server to continue
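With the correction made elsewhere in the thread (the server keeps writing to the data directory throughout; pg_start_backup only forces a checkpoint and fixes the WAL position a restore must replay from), the procedure above sketches out roughly like this. Paths and the backup label are assumptions:

```shell
# Sketch of the exclusive low-level backup API (9.x era); all paths
# invented. NOTE: the server continues writing to the data directory
# during the copy -- that is fine, because replaying the archived WAL
# from the pg_start_backup checkpoint makes the copy consistent again.
psql -c "SELECT pg_start_backup('nightly', true);"
rsync -a --delete \
    --exclude pg_xlog --exclude postmaster.pid \
    /var/lib/postgresql/data/ /backup/base/
psql -c "SELECT pg_stop_backup();"
# The WAL segments generated between start and stop must also be
# archived, or the copied data directory is not restorable.
```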
> On 20 Jan 2017, at 18:06, Stephen Frost wrote:
>
> Right, without incremental or compressed backups, you'd have to have
> room for 7 full copies of your database. Have you looked at what your
> incrementals would be like with file-level incrementals and compression?
> On 20 Jan 2017, at 15:22, Stephen Frost wrote:
>>
>> This process can be automatized by some applications like barman
>> http://www.pgbarman.org/
>
> Last I checked, barman is still single-threaded.
>
> If the database is large enough that you need multi-process
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2017-01-20 12:53 GMT+01:00 Dinesh Chandra 12108 :
> > Thanks for quick response.
> >
> > May I know how can I use physical full backup with export transaction
> > segments.
> >
To: Dinesh Chandra 12108 <dinesh.chan...@cyient.com>
Cc: Madusudanan.B.N <b.n.madusuda...@gmail.com>;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Backup taking long time !!!
Hi
2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108
<dinesh.chan...@cyient.com>:
Exactly parallel option is th
If you can upgrade to a newer version, there is parallel pg_dump.
Documentation -
https://www.postgresql.org/docs/current/static/backup-dump.html
Related blog -
http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/
Which can give significant speed up depending on your
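A hedged example of the parallel dump the documentation link covers; the database names, job count, and paths are placeholders. Parallel dumps require the directory output format:

```shell
# Dump with 8 worker processes into directory format (-Fd is required
# for -j to work):
pg_dump -Fd -j 8 -f /backups/mydb.dir mydb

# Restore can be parallelized the same way:
pg_restore -j 8 -d mydb_restored /backups/mydb.dir
```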