On Sun, Jan 20, 2013 at 02:11:48PM -0500, Tom Lane wrote:
> Bruce Momjian <br...@momjian.us> writes:
> > !            *      Using pg_restore --single-transaction is faster than other
> > !            *      methods, like --jobs.
> 
> Is this still the case now that Jeff's AtEOXact patch is in?  The risk
> of locktable overflow with --single-transaction makes me think that
> pg_upgrade should avoid it unless there is a *really* strong performance
> case for it, and I fear your old measurements are now invalidated.

I had thought the AtEOXact patch only helped single transactions that touch
many tables, but I now recall it mostly helps any backend that has accessed
many tables.
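
For context on the overflow risk Tom mentions, PostgreSQL sizes its shared
lock table at roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) slots, so at stock 9.x settings (64, 100, 0) a
single transaction can hold only a few thousand relation locks.  A quick
arithmetic sketch:

```shell
# Approximate shared lock table capacity at stock 9.x settings:
# max_locks_per_transaction (64) * (max_connections (100) +
# max_prepared_transactions (0)) lock slots.
awk 'BEGIN { print 64 * (100 + 0) }'
# -> 6400
```

A restore of tens of thousands of tables in one transaction blows well past
those ~6400 slots, which is the overflow risk with --single-transaction.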

With max_locks_per_transaction set high, I tested the attached patch, which
removes --single-transaction from the pg_restore call.  Removing that option
gave a 4% improvement at 32k tables, and 15% at 64k.  (Test script
attached.)  I have applied the patch.  This is good news not just for
pg_upgrade but for any backend that accesses many tables.

            #tbls       git     patch
            1       11.06    11.03
         1000       19.97    20.86
         2000       28.50    27.61
         4000       46.90    45.65
         8000       79.38    80.68
        16000      153.33   147.13
        32000      317.40   302.96
        64000      782.94   659.52
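
The improvement figures can be checked against the table above with plain
arithmetic on the 32k and 64k rows:

```shell
# Relative speedup of the patch over git at 32k and 64k tables
awk 'BEGIN {
    printf "32k: %.1f%%\n", (317.40 - 302.96) / 317.40 * 100
    printf "64k: %.1f%%\n", (782.94 - 659.52) / 782.94 * 100
}'
# -> 32k: 4.5%
# -> 64k: 15.8%
```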

FYI, this is an improvement over the tests I ran on the original patch,
which showed --single-transaction was still a win at the time:

    http://www.postgresql.org/message-id/20121128202232.ga31...@momjian.us

>       #tbls       git     -1    AtEOXact  both
>           1      11.06   13.06   10.99   13.20
>        1000      21.71   22.92   22.20   22.51
>        2000      32.86   31.09   32.51   31.62
>        4000      55.22   49.96   52.50   49.99
>        8000     105.34   82.10   95.32   82.94
>       16000     223.67  164.27  187.40  159.53
>       32000     543.93  324.63  366.44  317.93
>       64000    1697.14  791.82  767.32  752.57

Keep in mind this doesn't entirely remove the need to increase
max_locks_per_transaction.  There are cases above ~6k tables where pg_dump
itself runs out of locks, and I don't see how we can improve that.
Hopefully users have already seen pg_dump fail and adjusted
max_locks_per_transaction.
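
For anyone hitting that failure, the adjustment is the usual
postgresql.conf edit followed by a restart.  A minimal sketch against a
scratch copy of the file (the mktemp path and the value 1024 are
illustrative, not a recommendation):

```shell
# Uncomment and raise max_locks_per_transaction in a postgresql.conf;
# mktemp stands in for the real $PGDATA/postgresql.conf here.
conf=$(mktemp)
printf '#max_locks_per_transaction = 64\n' > "$conf"
sed -i 's/^#\?max_locks_per_transaction = .*/max_locks_per_transaction = 1024/' "$conf"
cat "$conf"
# -> max_locks_per_transaction = 1024
```

Note the server must be restarted, not just reloaded, for this setting to
take effect.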

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/contrib/pg_upgrade/pg_upgrade.c b/contrib/pg_upgrade/pg_upgrade.c
new file mode 100644
index 85997e5..88494b8
*** a/contrib/pg_upgrade/pg_upgrade.c
--- b/contrib/pg_upgrade/pg_upgrade.c
*************** create_new_objects(void)
*** 314,325 ****
  		snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
  
  		/*
! 		 *	Using pg_restore --single-transaction is faster than other
! 		 *	methods, like --jobs.  pg_dump only produces its output at the
! 		 *	end, so there is little parallelism using the pipe.
  		 */
  		parallel_exec_prog(log_file_name, NULL,
! 				  "\"%s/pg_restore\" %s --exit-on-error --single-transaction --verbose --dbname \"%s\" \"%s\"",
  				  new_cluster.bindir, cluster_conn_opts(&new_cluster),
  				  old_db->db_name, sql_file_name);
  	}
--- 314,324 ----
  		snprintf(log_file_name, sizeof(log_file_name), DB_DUMP_LOG_FILE_MASK, old_db->db_oid);
  
  		/*
! 		 *	pg_dump only produces its output at the end, so there is little
! 		 *	parallelism if using the pipe.
  		 */
  		parallel_exec_prog(log_file_name, NULL,
! 				  "\"%s/pg_restore\" %s --exit-on-error --verbose --dbname \"%s\" \"%s\"",
  				  new_cluster.bindir, cluster_conn_opts(&new_cluster),
  				  old_db->db_name, sql_file_name);
  	}
:

. traprm

export QUIET=$((QUIET + 1))

> /rtmp/out

export PGOPTIONS="-c synchronous_commit=off"

for CYCLES in 1 1000 2000 4000 8000 16000 32000 64000
do
        echo "$CYCLES" >> /rtmp/out

        for BRANCH in master no-single
        do      cd /pgtop
                pgsw $BRANCH
                cd -
                /pg_upgrade/tools/setup 9.3 9.3
                sleep 2
        
                # needed for >16k tables
                for CONFIG in /u/pgsql.old/data/postgresql.conf /u/pgsql/data/postgresql.conf
                do      pipe sed 's/#max_locks_per_transaction = 64/max_locks_per_transaction = 64000/' "$CONFIG"
                        pipe sed 's/shared_buffers = 128MB/shared_buffers = 1GB/' "$CONFIG"
                done

                pgstart /u/pgsql.old/data
                sleep 8
                for JOT in $(jot "$CYCLES"); do echo "CREATE TABLE test$JOT (x SERIAL PRIMARY KEY);"; done | sql test
                pgstop /u/pgsql.old/data
                sleep 2
                /usr/bin/time --output=/rtmp/out --append --format '%e' /pg_upgrade/tools/upgrade
                sleep 2
        done
done

bell

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
