pgsql: Fix typo

2018-02-20 Thread Magnus Hagander
Fix typo

Author: Masahiko Sawada

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/9a44a26b65d3d36867267624b76d3dea3dc4f6f6

Modified Files
--
src/backend/storage/ipc/procarray.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: Adjust ALTER TABLE docs on partitioned constraints

2018-02-20 Thread Alvaro Herrera
Adjust ALTER TABLE docs on partitioned constraints

Move the "additional restrictions" comment to ALTER TABLE ADD
CONSTRAINT instead of ADD CONSTRAINT USING INDEX; and in the latter
instead indicate that partitioned tables are unsupported

Noted by David G. Johnston
Discussion: 
https://postgr.es/m/cakfquwy4ld7ecxl_kamaxwt0fuu5vcppn2l4dh+3beybrdb...@mail.gmail.com

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/9a89f6d85467be362f4d426c76439cea70cd327f

Modified Files
--
doc/src/sgml/ref/alter_table.sgml | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)



pgsql: Fix pg_dump's logic for eliding sequence limits that match the d

2018-02-20 Thread Tom Lane
Fix pg_dump's logic for eliding sequence limits that match the defaults.

The previous coding here applied atoi() to strings that could represent
values too large to fit in an int.  If the overflowed value happened to
match one of the cases it was looking for, it would drop that limit
value from the output, leading to incorrect restoration of the sequence.

Avoid the unsafe behavior, and also make the logic cleaner by explicitly
calculating the default min/max values for the appropriate kind of
sequence.

Reported and patched by Alexey Bashtanov, though I whacked his patch
around a bit.  Back-patch to v10 where the faulty logic was added.

Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7...@imap.cc
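
To illustrate the hazard, here is a hedged sketch (not the committed pg_dump code; the function name and the type-name strings are assumptions for the example) of the safer pattern: parse the stored limit at 64-bit width and compare it against an explicitly computed default for the sequence's data type, rather than squeezing it through atoi().

    /*
     * Illustrative sketch only, not the committed pg_dump code.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    static bool
    maxvalue_is_default(const char *seqtype, const char *maxv)
    {
        int64_t default_max;

        /* Explicitly compute the default maximum for the sequence's type. */
        if (strcmp(seqtype, "smallint") == 0)
            default_max = INT16_MAX;
        else if (strcmp(seqtype, "integer") == 0)
            default_max = INT32_MAX;
        else
            default_max = INT64_MAX;

        /*
         * Parse at 64-bit width.  The unsafe pattern was essentially
         * "atoi(maxv) == some_default": for bigint-sized limits atoi()
         * overflows, and the garbage result can spuriously match.
         */
        return strtoll(maxv, NULL, 10) == (long long) default_max;
    }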

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/3486bcf9e89d87b59d0e370af098fda38be97209

Modified Files
--
src/bin/pg_dump/pg_dump.c | 53 ++-
1 file changed, 29 insertions(+), 24 deletions(-)



pgsql: Fix pg_dump's logic for eliding sequence limits that match the d

2018-02-20 Thread Tom Lane
Fix pg_dump's logic for eliding sequence limits that match the defaults.

The previous coding here applied atoi() to strings that could represent
values too large to fit in an int.  If the overflowed value happened to
match one of the cases it was looking for, it would drop that limit
value from the output, leading to incorrect restoration of the sequence.

Avoid the unsafe behavior, and also make the logic cleaner by explicitly
calculating the default min/max values for the appropriate kind of
sequence.

Reported and patched by Alexey Bashtanov, though I whacked his patch
around a bit.  Back-patch to v10 where the faulty logic was added.

Discussion: https://postgr.es/m/cb85a9a5-946b-c7c4-9cf2-6cd6e25d7...@imap.cc

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/6753f6c41998a4db03a6953ab9a0a6293c18b805

Modified Files
--
src/bin/pg_dump/pg_dump.c | 53 ++-
1 file changed, 29 insertions(+), 24 deletions(-)



pgsql: Error message improvement

2018-02-20 Thread Peter Eisentraut
Error message improvement

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/c2ff42c6c1631c6c67d09fc8574186a984566a0d

Modified Files
--
src/backend/commands/tablecmds.c   | 2 +-
src/test/regress/expected/truncate.out | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)



pgsql: Use platform independent type for TupleTableSlot->tts_off.

2018-02-20 Thread Andres Freund
Use platform independent type for TupleTableSlot->tts_off.

Previously tts_off was, for unknown reasons, of type long. For one
that's unnecessary as tuples are restricted in length, for another
long would be a bad choice of type even if that weren't the case, as
it's not reliably wider than an int. Also HeapTupleHeader->t_len is a
uint32.

This is split off from a larger patch implementing JITed tuple
deforming. Seems like an independent improvement, as tiny as it is.

Author: Andres Freund
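
A hedged before/after sketch of the kind of declaration change involved (a simplified stand-in, not the real TupleTableSlot from tuptable.h):

    /* Simplified stand-in; not the actual slot definition. */
    #include <stdint.h>

    typedef struct SlotSketch
    {
        /* ... other slot fields elided ... */
    #if 0
        long        tts_off;    /* before: width differs across platforms/ABIs */
    #else
        uint32_t    tts_off;    /* after: fixed width, matches HeapTupleHeader->t_len */
    #endif
    } SlotSketch;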

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/4c0ec9ee28279cc6a610cde8470fc8b606267b68

Modified Files
--
src/backend/access/common/heaptuple.c | 4 ++--
src/include/executor/tuptable.h   | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)



pgsql: Blindly attempt to adapt sepgsql regression tests.

2018-02-20 Thread Andres Freund
Blindly attempt to adapt sepgsql regression tests.

Commit bf6c614a2f2c58312b3be34a47e7fb7362e07bcb broke the sepgsql test
due to a new invocation of the function access hook during grouping
equal initialization.

The new behaviour seems at least as correct as the old one, so try to
adapt the tests. As I've no working sepgsql setup here, this is just
going from buildfarm results.

Author: Andres Freund
Discussion: 
https://postgr.es/m/20180217000337.lfsdvro3l6ccs...@alap3.anarazel.de
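
For context, a hedged sketch of the hook mechanism in play, using the standard object_access_hook interface; this is not sepgsql's actual implementation. An extension installing such a hook now also receives OAT_FUNCTION_EXECUTE events for the equality functions looked up during grouping-equal initialization, which is why the expected output gains a LOG line for pg_catalog.int4eq.

    /*
     * Hedged sketch of an extension installing an object access hook,
     * the mechanism sepgsql relies on; not sepgsql's actual code.
     */
    #include "postgres.h"
    #include "fmgr.h"
    #include "catalog/objectaccess.h"

    PG_MODULE_MAGIC;

    static object_access_hook_type prev_object_access_hook = NULL;

    static void
    sketch_object_access(ObjectAccessType access, Oid classId,
                         Oid objectId, int subId, void *arg)
    {
        /* Chain to any previously installed hook. */
        if (prev_object_access_hook)
            prev_object_access_hook(access, classId, objectId, subId, arg);

        /* Function-execute checks now also fire for grouping-equal functions. */
        if (access == OAT_FUNCTION_EXECUTE)
            elog(LOG, "function execute check for function OID %u", objectId);
    }

    void
    _PG_init(void)
    {
        prev_object_access_hook = object_access_hook;
        object_access_hook = sketch_object_access;
    }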

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/29d432e477a99f4c1e18820c5fc820a6b178c695

Modified Files
--
contrib/sepgsql/expected/misc.out | 4 
1 file changed, 4 insertions(+)



Re: pgsql: Do execGrouping.c via expression eval machinery, take two.

2018-02-20 Thread Andres Freund
Hi,

On 2018-02-16 16:03:37 -0800, Andres Freund wrote:
> This triggered a failure on rhinoceros, in the sepgsql test:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-02-16%2023%3A45%3A02
> 
> The relevant diff is:
> + LOG:  SELinux: allowed { execute } 
> scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0-s0:c0.c255 
> tcontext=system_u:object_r:sepgsql_proc_exec_t:s0 tclass=db_procedure 
> name="pg_catalog.int4eq(integer,integer)"
> and that's because we now invoke the function access hook for grouping
> equal, which we previously didn't.
> 
> I personally think the new behaviour makes more sense, but if somebody
> wants to argue differently? The only argument against I can see is that
> there are some other cases where we also don't yet invoke it, but that seems
> weak.
> 
> I never fully grasped what the exact use-case for the function execute hook
> is, so maybe Kaigai and/or Robert could comment?

Fixed by adjusting the test output:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2018-02-21%2002%3A45%3A01

Greetings,

Andres Freund



Re: pgsql: Avoid valgrind complaint about write() of uninitialized bytes.

2018-02-20 Thread Andres Freund
On 2018-02-06 19:25:04 +, Robert Haas wrote:
> Avoid valgrind complaint about write() of uninitialized bytes.
> 
> LogicalTapeFreeze() may write out its first block when it is dirty but
> not full, and then immediately read the first block back in from its
> BufFile as a BLCKSZ-width block.  This can only occur in rare cases
> where very few tuples were written out, which is currently only
> possible with parallel external tuplesorts.  To avoid valgrind
> complaints, tell it to treat the tail of logtape.c's buffer as
> defined.
> 
> Commit 9da0cc35284bdbe8d442d732963303ff0e0a40bc exposed this problem
> but did not create it.  LogicalTapeFreeze() has always tended to write
> out some amount of garbage bytes, but previously never wrote less than
> one block of data in total, so the problem was masked.
> 
> Per buildfarm members lousyjack and skink.
> 
> Peter Geoghegan, based on a suggestion from Tom Lane and me.  Some
> comment revisions by me.

Doesn't appear to have fixed the problem entirely:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2018-02-20%2017%3A10%3A01

relevant excerpt:
==12452== Syscall param write(buf) points to uninitialised byte(s)
==12452==at 0x4E49C64: write (write.c:26)
==12452==by 0x4BF8BF: FileWrite (fd.c:2017)
==12452==by 0x4C1B69: BufFileDumpBuffer (buffile.c:513)
==12452==by 0x4C1C61: BufFileFlush (buffile.c:657)
==12452==by 0x4C21D6: BufFileRead (buffile.c:561)
==12452==by 0x63ADA8: ltsReadBlock (logtape.c:274)
==12452==by 0x63AEF6: ltsReadFillBuffer (logtape.c:304)
==12452==by 0x63B560: LogicalTapeRewindForRead (logtape.c:771)
==12452==by 0x640567: mergeruns (tuplesort.c:2671)
==12452==by 0x645122: tuplesort_performsort (tuplesort.c:1866)
==12452==by 0x23357D: _bt_parallel_scan_and_sort (nbtsort.c:1626)
==12452==by 0x234F71: _bt_parallel_build_main (nbtsort.c:1527)
==12452==  Address 0xc3cd368 is 744 bytes inside a block of size 8,272 client-defined
==12452==at 0x6362D1: palloc (mcxt.c:858)
==12452==by 0x4C1DA6: BufFileCreateShared (buffile.c:249)
==12452==by 0x63B062: LogicalTapeSetCreate (logtape.c:571)
==12452==by 0x64489A: inittapes (tuplesort.c:2419)
==12452==by 0x644B1C: puttuple_common (tuplesort.c:1695)
==12452==by 0x644E01: tuplesort_putindextuplevalues (tuplesort.c:1545)
==12452==by 0x233391: _bt_spool (nbtsort.c:514)
==12452==by 0x2333CA: _bt_build_callback (nbtsort.c:574)
==12452==by 0x286004: IndexBuildHeapRangeScan (index.c:2879)
==12452==by 0x286366: IndexBuildHeapScan (index.c:2419)
==12452==by 0x23356F: _bt_parallel_scan_and_sort (nbtsort.c:1615)
==12452==by 0x234F71: _bt_parallel_build_main (nbtsort.c:1527)
==12452==  Uninitialised value was created by a heap allocation
==12452==at 0x6362D1: palloc (mcxt.c:858)
==12452==by 0x63B20E: LogicalTapeWrite (logtape.c:634)
==12452==by 0x63EA46: writetup_index (tuplesort.c:4206)
==12452==by 0x643769: dumptuples (tuplesort.c:2994)
==12452==by 0x64511A: tuplesort_performsort (tuplesort.c:1865)
==12452==by 0x23357D: _bt_parallel_scan_and_sort (nbtsort.c:1626)
==12452==by 0x234F71: _bt_parallel_build_main (nbtsort.c:1527)
==12452==by 0x252E2A: ParallelWorkerMain (parallel.c:1397)
==12452==by 0x4614DF: StartBackgroundWorker (bgworker.c:841)
==12452==by 0x46F400: do_start_bgworker (postmaster.c:5741)
==12452==by 0x46F541: maybe_start_bgworkers (postmaster.c:5954)
==12452==by 0x4700A6: sigusr1_handler (postmaster.c:5134)
==12452== 
==12452== VALGRINDERROR-END

Note that the above path doesn't appear to go through
LogicalTapeFreeze(), therefore not hitting the VALGRIND_MAKE_MEM_DEFINED
added in the above commit.
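
For reference, a hedged sketch of the suppression technique the original commit applied in LogicalTapeFreeze() (the helper name, BLCKSZ_SKETCH, and the raw write() call are assumptions for the example; the real code goes through BufFileWrite()): the unwritten tail of a dirty but not-full buffer is declared defined before the whole block is written out, which is exactly the treatment this other code path never receives.

    /*
     * Sketch of the suppression technique only; PostgreSQL wraps the
     * Valgrind client macros in utils/memdebug.h.
     */
    #ifdef USE_VALGRIND
    #include <valgrind/memcheck.h>
    #else
    #define VALGRIND_MAKE_MEM_DEFINED(addr, len) ((void) 0)
    #endif

    #include <stddef.h>
    #include <unistd.h>

    #define BLCKSZ_SKETCH 8192      /* stand-in for PostgreSQL's BLCKSZ */

    static void
    flush_partial_block(int fd, char *buffer, size_t used)
    {
        /*
         * The bytes past "used" were never assigned; writing them out as
         * block padding is deliberate and harmless, so tell Valgrind they
         * are "defined" to silence the write(buf) warning.
         */
        VALGRIND_MAKE_MEM_DEFINED(buffer + used, BLCKSZ_SKETCH - used);
        (void) write(fd, buffer, BLCKSZ_SKETCH);
    }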

Greetings,

Andres Freund