[PERFORM] Fastest Backup Restore for perf testing

2015-05-27 Thread Wes Vaske (wvaske)
Hi,

I'm running performance tests against a PostgreSQL database (9.4) with various 
hardware configurations and a couple different benchmarks (TPC-C & TPC-H).

I'm currently using pg_dump and pg_restore to refresh my dataset between runs 
but this process seems slower than it could be.
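
(Roughly, the refresh cycle between runs looks like this; the database and file
names are just placeholders:)

  # taken once, before the first run
  pg_dump -Fc -f /tmp/testdb.dump testdb

  # repeated between every run
  dropdb testdb
  createdb testdb
  pg_restore -d testdb /tmp/testdb.dump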

Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as a backup
& restore method?

If not, is there another way to restore a dataset more quickly? The database is 
dedicated to the test dataset so trashing & rebuilding the entire
application/OS/anything is no issue for me; there's no data for me to lose.

Thanks!

Wes Vaske | Senior Storage Solutions Engineer
Micron Technology
101 West Louis Henna Blvd, Suite 210 | Austin, TX 78728



Re: [PERFORM] Fastest Backup Restore for perf testing

2015-05-27 Thread Steve Atkins

 On May 27, 2015, at 1:24 PM, Wes Vaske (wvaske) <wva...@micron.com> wrote:
 
 Hi,
  
 I’m running performance tests against a PostgreSQL database (9.4) with 
 various hardware configurations and a couple different benchmarks (TPC-C & TPC-H).
  
 I’m currently using pg_dump and pg_restore to refresh my dataset between runs 
 but this process seems slower than it could be.
  
 Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as a 
 backup & restore method?
  
 If not, is there another way to restore a dataset more quickly? The database 
 is dedicated to the test dataset so trashing & rebuilding the entire 
 application/OS/anything is no issue for me—there’s no data for me to lose.
  

Dropping the database and recreating it from a template database with
"CREATE DATABASE foo TEMPLATE foo_template" is about as fast as a file copy,
and much faster than pg_restore tends to be.
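
Roughly (database names here are just placeholders):

  -- one-time setup: load the dataset once, then mark that database as a template
  CREATE DATABASE foo_template;
  -- ... pg_restore the dataset into foo_template ...
  UPDATE pg_database SET datistemplate = true WHERE datname = 'foo_template';

  -- between runs: throw away the working copy and re-clone it from the template
  -- (no one may be connected to either database while this runs)
  DROP DATABASE IF EXISTS foo;
  CREATE DATABASE foo TEMPLATE foo_template;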

Cheers,
  Steve



Re: [PERFORM] Fastest Backup Restore for perf testing

2015-05-27 Thread Andrew Dunstan


On 05/27/2015 04:24 PM, Wes Vaske (wvaske) wrote:


Hi,

I’m running performance tests against a PostgreSQL database (9.4) with 
various hardware configurations and a couple different benchmarks 
(TPC-C & TPC-H).


I’m currently using pg_dump and pg_restore to refresh my dataset 
between runs but this process seems slower than it could be.


Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as 
a backup & restore method?


If not, is there another way to restore a dataset more quickly? The 
database is dedicated to the test dataset so trashing & rebuilding the 
entire application/OS/anything is no issue for me—there’s no data for 
me to lose.


Thanks!




Read all of this chapter. 
http://www.postgresql.org/docs/current/static/backup.html
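
The file-system-level backup section in particular covers the tar approach you
asked about. Very roughly, and assuming the stock CentOS layout and service
name for the 9.4 packages (adjust the paths to your install), with the server
shut down while you copy:

  # one-time: stop the server and archive the whole data directory
  service postgresql-9.4 stop      # or: pg_ctl stop -D /var/lib/pgsql/9.4/data
  tar -C /var/lib/pgsql/9.4 -czf /backups/pgdata.tar.gz data
  service postgresql-9.4 start

  # between runs: stop, wipe, untar, start
  service postgresql-9.4 stop
  rm -rf /var/lib/pgsql/9.4/data
  tar -C /var/lib/pgsql/9.4 -xzf /backups/pgdata.tar.gz
  service postgresql-9.4 start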


cheers

andrew






Re: [PERFORM] ERROR: missing chunk number 0 for toast value 1821556134 in pg_toast_17881

2015-05-27 Thread Josh Berkus
On 05/27/2015 12:50 AM, Tory M Blue wrote:
 Greetings and salutations.
 
 I've got some weirdness. 
 
 Current:
 Postgres 9.3.4 
 Slony 2.2.3
 CentOS 6.5
 
 Prior running Postgres 9.1.2 w/slony 2.1.3 CentOS 6.2
 
 I found that if I try to run a vacuum full on one table that I recently
 reindexed (out of possibly 8 tables), I get this error:
 
 # vacuum full table.ads;
 
 ERROR:  missing chunk number 0 for toast value 1821556134 in pg_toast_17881
 
 If I run a vacuum analyze it completes fine, but I can't run a vacuum
 full without it throwing an error. I seem to be able to query the table
 and I seem to be able to add data to the table and slony seems fine as
 does postgres. 

Is this happening on the Slony master or the replica?

There have been several bugs since 9.1.2 which could have caused this
particular error.  Are you certain that this is a recent problem?

Note that this error affects just one compressed value or row, so you're
not losing other data, unless it's a symptom of an ongoing problem.
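
If you want to poke at it directly, you can query the toast table from the
error message yourself (the OIDs below are the ones your error reported), and
rebuilding its index is cheap to try, since a corrupted toast index can
produce the same symptom:

  -- do any chunks exist for the value named in the error?
  SELECT chunk_id, chunk_seq
  FROM pg_toast.pg_toast_17881
  WHERE chunk_id = 1821556134;

  -- rebuild the toast table's index
  REINDEX TABLE pg_toast.pg_toast_17881;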

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




[PERFORM] ERROR: missing chunk number 0 for toast value 1821556134 in pg_toast_17881

2015-05-27 Thread Tory M Blue
Greetings and salutations.

I've got some weirdness.

Current:
Postgres 9.3.4
Slony 2.2.3
CentOS 6.5

Prior running Postgres 9.1.2 w/slony 2.1.3 CentOS 6.2

I found that if I try to run a vacuum full on one table that I recently
reindexed (out of possibly 8 tables), I get this error:

# vacuum full table.ads;

ERROR:  missing chunk number 0 for toast value 1821556134 in pg_toast_17881

If I run a vacuum analyze it completes fine, but I can't run a vacuum full
without it throwing an error. I seem to be able to query the table and I
seem to be able to add data to the table and slony seems fine as does
postgres.

I'm unclear why the vacuum full is failing with this error. I've done some
searching and there are hints of prior bugs, but I didn't catch anything in
9.3.3 through 9.3.7 that talks about this.

My next step, without your fine assistance, will be to drop the table from
slon and re-add it (meaning the table will be dropped completely from this db
and recreated from the master, where we can run a vacuum full without failure).

I have already tried dropping and recreating the indexes, but no luck.

Ideas?

Thanks

Tory