I received a private email report yesterday from someone using
pg_upgrade with PG 9.0 who found it took five hours for pg_upgrade to
upgrade a database with 150k tables. Yes, that is a lot of tables, but
pg_upgrade should be able to do better than that.
I have modified pg_upgrade in git master to address this.
I've been having a look at this guy, trying to get a handle on how much
downtime it will save.
As a quick check, I tried upgrading a cluster with one non-default db
containing a scale 100 pgbench schema:
- pg_upgrade : 57 s
- pgdump/pg_restore : 154 s
So, a reasonable saving all up.
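The comparison above can be sketched as a small script. This is a hedged sketch, not the poster's actual commands: the binary/data directory paths, the database name `bench`, and the version numbers are all assumptions you would adjust for your own installation. Nothing runs unless `RUN_BENCH` is set, since it needs two installed PostgreSQL versions.

```shell
#!/bin/sh
# Sketch of timing pg_upgrade vs. dump/restore on a scale 100 pgbench
# schema. All paths below are assumptions -- adjust for your setup.

OLD_BIN=${OLD_BIN:-/usr/pgsql-8.4/bin}     # assumed old-version binaries
NEW_BIN=${NEW_BIN:-/usr/pgsql-9.0/bin}     # assumed new-version binaries
OLD_DATA=${OLD_DATA:-/srv/pg/8.4/data}     # assumed old data directory
NEW_DATA=${NEW_DATA:-/srv/pg/9.0/data}     # assumed new data directory

# Populate the old cluster with a scale 100 pgbench schema in a
# non-default database (db name "bench" is an assumption).
prepare() {
    "$OLD_BIN/createdb" bench
    "$OLD_BIN/pgbench" -i -s 100 bench
}

# Method 1: binary upgrade with pg_upgrade (the ~57 s figure above).
# Both clusters must be stopped first.
run_pg_upgrade() {
    time "$NEW_BIN/pg_upgrade" \
        --old-bindir="$OLD_BIN" --new-bindir="$NEW_BIN" \
        --old-datadir="$OLD_DATA" --new-datadir="$NEW_DATA"
}

# Method 2: logical dump piped into the new cluster (the ~154 s figure).
run_dump_restore() {
    time sh -c "\"$OLD_BIN/pg_dumpall\" | \"$NEW_BIN/psql\" -d postgres"
}

# Guarded: only run when explicitly requested.
if [ -n "${RUN_BENCH:-}" ]; then
    prepare
    run_pg_upgrade
fi
```

The two methods are measured separately because pg_upgrade reuses the old cluster's data files, while dump/restore rewrites every row through the SQL layer, which is where the time difference comes from.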
On 21/09/10 16:14, Mark Kirkwood wrote:
I've been having a look at this guy, trying to get a handle on how
much downtime it will save.
As a quick check, I tried upgrading a cluster with one non-default db
containing a scale 100 pgbench schema:
- pg_upgrade : 57 s
I ran a performance test on the in-place upgrade patch prototype which I sent
for review, and got a nice result:
Original:
MQThL (Maximum Qualified Throughput LIGHT): 2202.12 tpm
MQThM (Maximum Qualified Throughput MEDIUM): 4706.60 tpm
MQThH (Maximum Qualified Throughput HEAVY): 3956.64 tpm