[COMMITTERS] pgsql: Add mode where contrib installcheck runs each module in a separately named database

2012-12-11 Thread Andrew Dunstan
Add mode where contrib installcheck runs each module in a separately named 
database.

Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.
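
The mode is enabled purely from the make command line. A minimal sketch of the
kind of database-name derivation the commit describes; the variable names and
exact logic here are illustrative, not copied from pgxs.mk:

```shell
# Enable the mode when running the contrib regression tests, e.g.:
#   make -C contrib installcheck USE_MODULE_DB=1
# Below, CONTRIB_TESTDB models the per-module database name the commit
# message describes (module name embedded in the database name).
USE_MODULE_DB=1
MODULE=dblink

if [ "${USE_MODULE_DB:-}" = "1" ]; then
    CONTRIB_TESTDB="contrib_regression_${MODULE}"
else
    CONTRIB_TESTDB="contrib_regression"
fi

echo "$CONTRIB_TESTDB"
```

With USE_MODULE_DB unset, the name falls back to the usual
contrib_regression, so existing workflows are unaffected.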

Second attempt at this, this time accommodating make versions older
than 3.82.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/ad69bd052f8ac1edfd579ed0e32da1c33a775f78

Modified Files
--
contrib/dblink/Makefile |3 +++
src/Makefile.global.in  |9 +
src/makefiles/pgxs.mk   |6 +-
3 files changed, 17 insertions(+), 1 deletions(-)


-- 
Sent via pgsql-committers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-committers


[COMMITTERS] pgsql: Add mode where contrib installcheck runs each module in a separately named database

2012-12-11 Thread Andrew Dunstan
Add mode where contrib installcheck runs each module in a separately named 
database.

Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Second attempt at this, this time accommodating make versions older
than 3.82.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.

Branch
--
REL9_2_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/4d29e8cc015918c54ad38ae88f78e21e61e653a8

Modified Files
--
contrib/dblink/Makefile |3 +++
src/Makefile.global.in  |9 +
src/makefiles/pgxs.mk   |6 +-
3 files changed, 17 insertions(+), 1 deletions(-)




[COMMITTERS] pgsql: Add mode where contrib installcheck runs each module in a separately named database

2012-12-11 Thread Andrew Dunstan
Add mode where contrib installcheck runs each module in a separately named 
database.

Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Second attempt at this, this time accommodating make versions older
than 3.82.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.

Branch
--
REL9_1_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/5dd1c287c2866213a753495551dd75d9c18edbcb

Modified Files
--
contrib/dblink/Makefile |3 +++
src/Makefile.global.in  |9 +
src/makefiles/pgxs.mk   |6 +-
3 files changed, 17 insertions(+), 1 deletions(-)




[COMMITTERS] pgsql: Add mode where contrib installcheck runs each module in a separately named database

2012-12-11 Thread Andrew Dunstan
Add mode where contrib installcheck runs each module in a separately named 
database.

Normally each module is tested in a database named contrib_regression,
which is dropped and recreated at the beginning of each pg_regress run.
This new mode, enabled by adding USE_MODULE_DB=1 to the make command
line, runs most modules in a database with the module name embedded in
it.

This will make testing pg_upgrade on clusters with the contrib modules
a lot easier.

Second attempt at this, this time accommodating make versions older
than 3.82.

Still to be done: adapt to the MSVC build system.

Backpatch to 9.0, which is the earliest version it is reasonably
possible to test upgrading from.

Branch
--
REL9_0_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/fe20ff0c5646c49c14cd944c284d20bea91fba52

Modified Files
--
contrib/dblink/Makefile |3 +++
src/Makefile.global.in  |9 +
src/makefiles/pgxs.mk   |6 +-
3 files changed, 17 insertions(+), 1 deletions(-)




[COMMITTERS] pgsql: Consistency check should compare last record replayed, not last record read

2012-12-11 Thread Heikki Linnakangas
Consistency check should compare last record replayed, not last record read.

EndRecPtr is the last record that we've read, but not necessarily yet
replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the
last replayed record instead. This caused recovery to think it's reached
consistency too early.

Now that we do the check in CheckRecoveryConsistency correctly, we have to
move the call of that function to after redoing a record. The current place,
after reading a record but before replaying it, is wrong. In particular, if
there are no more records after the one ending at minRecoveryPoint, we don't
enter hot standby until one extra record is generated and read by the
standby, and CheckRecoveryConsistency is called. These two bugs conspired
to make the code appear to work correctly, except for the small window
between reading the last record that reaches minRecoveryPoint, and
replaying it.

In passing, rename recoveryLastRecPtr, which is the last record
replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing
with replayEndRecPtr, which is the last record read that we're about to
replay.

Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao.
Backpatch to 9.0, where Hot Standby subtly changed the test from
"minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The
former works because where the test is performed, we have always read one
more record than we've replayed.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/970fb12de121941939e64d6e0446c974bba3

Modified Files
--
src/backend/access/transam/xlog.c |   36 +---
1 files changed, 21 insertions(+), 15 deletions(-)




[COMMITTERS] pgsql: Consistency check should compare last record replayed, not last record read

2012-12-11 Thread Heikki Linnakangas
Consistency check should compare last record replayed, not last record read.

EndRecPtr is the last record that we've read, but not necessarily yet
replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the
last replayed record instead. This caused recovery to think it's reached
consistency too early.

Now that we do the check in CheckRecoveryConsistency correctly, we have to
move the call of that function to after redoing a record. The current place,
after reading a record but before replaying it, is wrong. In particular, if
there are no more records after the one ending at minRecoveryPoint, we don't
enter hot standby until one extra record is generated and read by the
standby, and CheckRecoveryConsistency is called. These two bugs conspired
to make the code appear to work correctly, except for the small window
between reading the last record that reaches minRecoveryPoint, and
replaying it.

In passing, rename recoveryLastRecPtr, which is the last record
replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing
with replayEndRecPtr, which is the last record read that we're about to
replay.

Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao.
Backpatch to 9.0, where Hot Standby subtly changed the test from
"minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The
former works because where the test is performed, we have always read one
more record than we've replayed.

Branch
--
REL9_1_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/8b6b374b39d992adea42f703baf28a19909ef747

Modified Files
--
src/backend/access/transam/xlog.c |   33 +++--
1 files changed, 19 insertions(+), 14 deletions(-)




[COMMITTERS] pgsql: Consistency check should compare last record replayed, not last record read

2012-12-11 Thread Heikki Linnakangas
Consistency check should compare last record replayed, not last record read.

EndRecPtr is the last record that we've read, but not necessarily yet
replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the
last replayed record instead. This caused recovery to think it's reached
consistency too early.

Now that we do the check in CheckRecoveryConsistency correctly, we have to
move the call of that function to after redoing a record. The current place,
after reading a record but before replaying it, is wrong. In particular, if
there are no more records after the one ending at minRecoveryPoint, we don't
enter hot standby until one extra record is generated and read by the
standby, and CheckRecoveryConsistency is called. These two bugs conspired
to make the code appear to work correctly, except for the small window
between reading the last record that reaches minRecoveryPoint, and
replaying it.

In passing, rename recoveryLastRecPtr, which is the last record
replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing
with replayEndRecPtr, which is the last record read that we're about to
replay.

Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao.
Backpatch to 9.0, where Hot Standby subtly changed the test from
"minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The
former works because where the test is performed, we have always read one
more record than we've replayed.

Branch
--
REL9_2_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/fb565f8c9616ec8ab1b5176d16f310725e581e6e

Modified Files
--
src/backend/access/transam/xlog.c |   36 +---
1 files changed, 21 insertions(+), 15 deletions(-)




[COMMITTERS] pgsql: Consistency check should compare last record replayed, not last record read

2012-12-11 Thread Heikki Linnakangas
Consistency check should compare last record replayed, not last record read.

EndRecPtr is the last record that we've read, but not necessarily yet
replayed. CheckRecoveryConsistency should compare minRecoveryPoint with the
last replayed record instead. This caused recovery to think it's reached
consistency too early.

Now that we do the check in CheckRecoveryConsistency correctly, we have to
move the call of that function to after redoing a record. The current place,
after reading a record but before replaying it, is wrong. In particular, if
there are no more records after the one ending at minRecoveryPoint, we don't
enter hot standby until one extra record is generated and read by the
standby, and CheckRecoveryConsistency is called. These two bugs conspired
to make the code appear to work correctly, except for the small window
between reading the last record that reaches minRecoveryPoint, and
replaying it.

In passing, rename recoveryLastRecPtr, which is the last record
replayed, to lastReplayedEndRecPtr. This makes it slightly less confusing
with replayEndRecPtr, which is the last record read that we're about to
replay.

Original report from Kyotaro HORIGUCHI, further diagnosis by Fujii Masao.
Backpatch to 9.0, where Hot Standby subtly changed the test from
"minRecoveryPoint < EndRecPtr" to "minRecoveryPoint <= EndRecPtr". The
former works because where the test is performed, we have always read one
more record than we've replayed.

Branch
--
REL9_0_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/5840e3181b7e6c784fdb3aff708c4dcc2dfe551d

Modified Files
--
src/backend/access/transam/xlog.c |   33 +++--
1 files changed, 19 insertions(+), 14 deletions(-)




[COMMITTERS] pgsql: Fix pg_upgrade for invalid indexes

2012-12-11 Thread Bruce Momjian
Fix pg_upgrade for invalid indexes

All versions of pg_upgrade upgraded invalid indexes caused by CREATE
INDEX CONCURRENTLY failures and marked them as valid.  The patch adds a
check to all pg_upgrade versions and throws an error during upgrade or
--check.

Backpatch to 9.2, 9.1, 9.0.  Patch slightly adjusted.

Branch
--
REL9_1_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/97a60fa5a06bf60c857976e24ef2ed0cb882cd52

Modified Files
--
contrib/pg_upgrade/check.c |   91 
1 files changed, 91 insertions(+), 0 deletions(-)




[COMMITTERS] pgsql: Fix pg_upgrade for invalid indexes

2012-12-11 Thread Bruce Momjian
Fix pg_upgrade for invalid indexes

All versions of pg_upgrade upgraded invalid indexes caused by CREATE
INDEX CONCURRENTLY failures and marked them as valid.  The patch adds a
check to all pg_upgrade versions and throws an error during upgrade or
--check.

Backpatch to 9.2, 9.1, 9.0.  Patch slightly adjusted.

Branch
--
REL9_2_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/744358005c49238b2abc62f69fe84e5440ffde0f

Modified Files
--
contrib/pg_upgrade/check.c |   91 
1 files changed, 91 insertions(+), 0 deletions(-)




[COMMITTERS] pgsql: Fix pg_upgrade for invalid indexes

2012-12-11 Thread Bruce Momjian
Fix pg_upgrade for invalid indexes

All versions of pg_upgrade upgraded invalid indexes caused by CREATE
INDEX CONCURRENTLY failures and marked them as valid.  The patch adds a
check to all pg_upgrade versions and throws an error during upgrade or
--check.

Backpatch to 9.2, 9.1, 9.0.  Patch slightly adjusted.

Branch
--
REL9_0_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/33be41d3adcf8bd272303e0f5dcc4ec41051f141

Modified Files
--
contrib/pg_upgrade/check.c |   92 
1 files changed, 92 insertions(+), 0 deletions(-)




[COMMITTERS] pgsql: Fix pg_upgrade for invalid indexes

2012-12-11 Thread Bruce Momjian
Fix pg_upgrade for invalid indexes

All versions of pg_upgrade upgraded invalid indexes caused by CREATE
INDEX CONCURRENTLY failures and marked them as valid.  The patch adds a
check to all pg_upgrade versions and throws an error during upgrade or
--check.

Backpatch to 9.2, 9.1, 9.0.  Patch slightly adjusted.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/e95c4bd1133acf7fc58a52212253129ef2dc9d12

Modified Files
--
contrib/pg_upgrade/check.c |   91 
1 files changed, 91 insertions(+), 0 deletions(-)




[COMMITTERS] pgsql: Fix performance problems with autovacuum truncation in busy workloads

2012-12-11 Thread Kevin Grittner
Fix performance problems with autovacuum truncation in busy workloads.

In situations where there are over 8MB of empty pages at the end of
a table, the truncation work for trailing empty pages takes longer
than deadlock_timeout, and there is frequent access to the table by
processes other than autovacuum, there was a problem with the
autovacuum worker process being canceled by the deadlock checking
code. The truncation work done by autovacuum up to that point was
lost, and the attempt was retried by a later autovacuum worker. The
attempts could continue indefinitely without making progress,
consuming resources and blocking other processes for up to
deadlock_timeout each time.

This patch has the autovacuum worker checking whether it is
blocking any other thread at 20ms intervals. If such a condition
develops, the autovacuum worker will persist the work it has done
so far, release its lock on the table, and sleep in 50ms intervals
for up to 5 seconds, hoping to be able to re-acquire the lock and
try again. If it is unable to get the lock in that time, it moves
on and a worker will try to continue later from the point this one
left off.

While this patch doesn't change the rules about when and what to
truncate, it does cause the truncation to occur sooner, with less
blocking, and with the consumption of fewer resources when there is
contention for the table's lock.

The only user-visible change other than improved performance is
that the table size during truncation may change incrementally
instead of just once.

This problem exists in all supported versions but is infrequently
reported, although some reports of performance problems when
autovacuum runs might be caused by this. Initial commit is just the
master branch, but this should probably be backpatched once the
build farm and general developer usage confirm that there are no
surprising effects.

Jan Wieck

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/b19e4250b45e91c9cbdd18d35ea6391ab5961c8d

Modified Files
--
src/backend/commands/vacuumlazy.c |  230 ++--
src/backend/storage/lmgr/lmgr.c   |   18 +++
src/backend/storage/lmgr/lock.c   |   92 +++
src/include/storage/lmgr.h|1 +
src/include/storage/lock.h|2 +
5 files changed, 279 insertions(+), 64 deletions(-)




[COMMITTERS] pgsql: Disable event triggers in standalone mode.

2012-12-11 Thread Tom Lane
Disable event triggers in standalone mode.

Per discussion, this seems necessary to allow recovery from broken event
triggers, or broken indexes on pg_event_trigger.

Dimitri Fontaine

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/cd3413ec3683918c9cb9cfb39ae5b2c32f231e8b

Modified Files
--
doc/src/sgml/ref/create_event_trigger.sgml |   11 +--
src/backend/commands/event_trigger.c   |   19 +++
2 files changed, 28 insertions(+), 2 deletions(-)




[COMMITTERS] pgsql: Add defenses against integer overflow in dynahash numbuckets calculations

2012-12-11 Thread Tom Lane
Add defenses against integer overflow in dynahash numbuckets calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.

Branch
--
REL9_2_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/8bb937cc53a4568388c5ae85386eed58d88e5853

Modified Files
--
src/backend/executor/nodeHash.c   |4 ++-
src/backend/utils/hash/dynahash.c |   49 
2 files changed, 41 insertions(+), 12 deletions(-)




[COMMITTERS] pgsql: Add defenses against integer overflow in dynahash numbuckets calculations

2012-12-11 Thread Tom Lane
Add defenses against integer overflow in dynahash numbuckets calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.

Branch
--
master

Details
---
http://git.postgresql.org/pg/commitdiff/691c5ebf79bb011648fad0e6b234b94a28177e3c

Modified Files
--
src/backend/executor/nodeHash.c   |4 ++-
src/backend/utils/hash/dynahash.c |   49 
2 files changed, 41 insertions(+), 12 deletions(-)




[COMMITTERS] pgsql: Add defenses against integer overflow in dynahash numbuckets calculations

2012-12-11 Thread Tom Lane
Add defenses against integer overflow in dynahash numbuckets calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.

Branch
--
REL9_1_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/f0fc1d4c890135ec879860f7d0c49b34d492d99f

Modified Files
--
src/backend/executor/nodeHash.c   |4 ++-
src/backend/utils/hash/dynahash.c |   49 
2 files changed, 41 insertions(+), 12 deletions(-)




[COMMITTERS] pgsql: Add defenses against integer overflow in dynahash numbuckets calculations

2012-12-11 Thread Tom Lane
Add defenses against integer overflow in dynahash numbuckets calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.

Branch
--
REL8_4_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/4b442161516797c1fca569f1421daf8f78133c55

Modified Files
--
src/backend/executor/nodeHash.c   |4 ++-
src/backend/utils/hash/dynahash.c |   49 
2 files changed, 41 insertions(+), 12 deletions(-)




[COMMITTERS] pgsql: Add defenses against integer overflow in dynahash numbuckets calculations

2012-12-11 Thread Tom Lane
Add defenses against integer overflow in dynahash numbuckets calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.

Branch
--
REL9_0_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/d8caaacc9fad0395d9360481549d9a2c6ffeb1ad

Modified Files
--
src/backend/executor/nodeHash.c   |4 ++-
src/backend/utils/hash/dynahash.c |   49 
2 files changed, 41 insertions(+), 12 deletions(-)




[COMMITTERS] pgsql: Add defenses against integer overflow in dynahash numbuckets calculations

2012-12-11 Thread Tom Lane
Add defenses against integer overflow in dynahash numbuckets calculations.

The dynahash code requires the number of buckets in a hash table to fit
in an int; but since we calculate the desired hash table size dynamically,
there are various scenarios where we might calculate too large a value.
The resulting overflow can lead to infinite loops, division-by-zero
crashes, etc.  I (tgl) had previously installed some defenses against that
in commit 299d1716525c659f0e02840e31fbe4dea3, but that covered only one
call path.  Moreover it worked by limiting the request size to work_mem,
but in a 64-bit machine it's possible to set work_mem high enough that the
problem appears anyway.  So let's fix the problem at the root by installing
limits in the dynahash.c functions themselves.

Trouble report and patch by Jeff Davis.

Branch
--
REL8_3_STABLE

Details
---
http://git.postgresql.org/pg/commitdiff/b7ef58ae3322db88ceaaa97894054a5afe6a9aaf

Modified Files
--
src/backend/executor/nodeHash.c   |4 ++-
src/backend/utils/hash/dynahash.c |   49 
2 files changed, 41 insertions(+), 12 deletions(-)

