Vladimir,

* Vladimir Borodin (r...@simply.name) wrote:
> > On 20 Jan 2017, at 18:06, Stephen Frost <sfr...@snowman.net> wrote:
> > 
> > Right, without incremental or compressed backups, you'd have to have
> > room for 7 full copies of your database.  Have you looked at what your
> > incrementals would be like with file-level incrementals and compression?
> 
> Most of our DBs can’t use partitioning over time-series fields, so we have a 
> lot of datafiles in which only a few pages have been modified. So file-level 
> increments didn’t really work for us. And we didn’t use compression in barman 
> before patching it because single-threaded compression sucks.

Interesting.  That's certainly the kind of use-case we are thinking
about for pgbackrest's page-level incremental support.  Hopefully it
won't be too much longer before we add support for it.

> > How are you testing your backups..?  Do you have page-level checksums
> > enabled on your database?  
> 
> Yep, we use checksums. We restore the latest backup with recovery_target = 
> 'immediate' and do COPY tablename TO '/dev/null' for each table in each 
> database, checking the exit code (in several threads, of course).

Right, unfortunately that only checks the heap pages; it won't catch
corruption in an index or in any of the other files that carry
checksums.
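For the record, the verification loop described above can be sketched
roughly as follows. This is a hypothetical illustration, not Vladimir's
actual tooling: the table discovery via psql, the identifier quoting,
and the function names are all assumptions.

```python
# Hypothetical sketch of COPY-based backup verification: after restoring
# the latest backup with recovery_target = 'immediate', read every table
# with COPY ... TO '/dev/null' and report any failures (a checksum error
# on a heap page will make the COPY exit non-zero).
import subprocess

def copy_check_sql(schema: str, table: str) -> str:
    """Build the COPY statement for one table, double-quoting identifiers."""
    q = lambda ident: '"' + ident.replace('"', '""') + '"'
    return f"COPY {q(schema)}.{q(table)} TO '/dev/null'"

def verify_database(dbname: str) -> list:
    """Run COPY-to-/dev/null for every table; return the tables that failed."""
    # List schema|table pairs; -A (unaligned) and -t (tuples only) give
    # clean machine-readable output.
    tables = subprocess.run(
        ["psql", "-At", "-d", dbname,
         "-c", "SELECT schemaname || '|' || tablename FROM pg_tables"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    failed = []
    for line in tables:
        schema, table = line.split("|", 1)
        rc = subprocess.run(
            ["psql", "-d", dbname, "-c", copy_check_sql(schema, table)]
        ).returncode
        if rc != 0:
            failed.append((schema, table))
    return failed
```

A real harness would run this per database and fan the COPY commands out
over several worker threads, as described above; note also that this
exercises only relations readable via COPY, so indexes stay unchecked.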

> > pgbackrest recently added the ability to
> > check PG page-level checksums during a backup and report issues.
> 
> Sounds interesting, should take a look.

It's done with a C library that's optional and not yet included in the
packages on apt/yum.p.o, though we hope it will be soon.  The C library
is based, unsurprisingly, on the PG backend code and so should be pretty
fast.  All of the checking is done on whole pgbackrest blocks, in-stream,
so it doesn't slow down the backup process too much.

Thanks!

Stephen
