Re: pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Tom Lane
Peter Geoghegan  writes:
>>> TRAP: FailedAssertion("!(metad->btm_version == 3)", File:
>>> "/home/pg/postgresql/root/build/../source/src/backend/access/nbtree/nbtpage.c",
>>> Line: 619)

>> Hm, buildfarm's not complaining --- what's the test case?

> This was discovered while testing/reviewing the latest version of the
> INCLUDE covering indexes patch. It now seems to be unrelated.

Oh, wait ... I wonder if you saw that because you were running a new
backend without having re-initdb'd?  Once you had re-initdb'd, then
of course there would be no old-format btree indexes anywhere.  But
if you hadn't, then anyplace that was not prepared to cope with the
old header format would complain about pre-existing indexes.

In short, this sounds like a place that did not get the memo about
how to cope with un-upgraded indexes.
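
For context: the commit bumped the B-tree meta-page version from 2 to 3 and only upgrades meta-pages on the fly during VACUUM, so any other reader of the meta-page has to tolerate both versions.  A minimal sketch of the kind of guard such a code path needs, assuming the macro and field names from src/include/access/nbtree.h; the fallback behaviour shown is an assumption for illustration, not the committed fix:

    BTMetaPageData *metad = BTPageGetMeta(metapg);

    /* Accept both the pre-existing (2) and the new (3) meta-page versions. */
    if (metad->btm_version != 2 && metad->btm_version != BTREE_VERSION)
        elog(ERROR, "unexpected B-tree meta-page version %u",
             metad->btm_version);

    if (metad->btm_version < BTREE_VERSION)
    {
        /*
         * Old-format meta-page: the fields added in version 3 (the oldest
         * btpo_xact and the heap-tuple count used by cleanup) are not
         * present, so fall back to behaviour that does not rely on them;
         * here we assume the caller can treat the value as unknown.
         */
        return 0;
    }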

regards, tom lane



pgsql: doc: Improve indentation of SQL examples

2018-04-04 Thread Peter Eisentraut
doc: Improve indentation of SQL examples

Some of these were indented using 8 spaces whereas the rest uses 4
spaces.  Probably originally some difference in tab size.

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/a56e26784d7f418015a5be471eb500614a2f24ee

Modified Files
--
doc/src/sgml/queries.sgml | 72 +++
1 file changed, 36 insertions(+), 36 deletions(-)



Re: pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Tom Lane
Peter Geoghegan  writes:
> I also see an assertion failure within _bt_getrootheight():

> TRAP: FailedAssertion("!(metad->btm_version == 3)", File:
> "/home/pg/postgresql/root/build/../source/src/backend/access/nbtree/nbtpage.c",
> Line: 619)

Hm, buildfarm's not complaining --- what's the test case?

regards, tom lane



Re: pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Michael Paquier
On Wed, Apr 04, 2018 at 08:58:14PM -0400, Tom Lane wrote:
> Peter Geoghegan  writes:
> > I also see an assertion failure within _bt_getrootheight():
> 
> > TRAP: FailedAssertion("!(metad->btm_version == 3)", File:
> > "/home/pg/postgresql/root/build/../source/src/backend/access/nbtree/nbtpage.c",
> > Line: 619)
> 
> Hm, buildfarm's not complaining --- what's the test case?

Hm.  No problems here either with a56e267 and gcc 7.3.  The warnings are
here for sure, and any compiler would complain about those.
--
Michael




Re: pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Peter Geoghegan
On Wed, Apr 4, 2018 at 8:28 PM, Tom Lane  wrote:
>> This was discovered while testing/reviewing the latest version of the
>> INCLUDE covering indexes patch. It now seems to be unrelated.
>
> Oh, wait ... I wonder if you saw that because you were running a new
> backend without having re-initdb'd?

Yes. That's what happened.

> Once you had re-initdb'd, then
> of course there would be no old-format btree indexes anywhere.  But
> if you hadn't, then anyplace that was not prepared to cope with the
> old header format would complain about pre-existing indexes.
>
> In short, this sounds like a place that did not get the memo about
> how to cope with un-upgraded indexes.

Sounds plausible.

-- 
Peter Geoghegan



Re: pgsql: New files for MERGE

2018-04-04 Thread Michael Paquier
On Wed, Apr 04, 2018 at 10:10:46AM -0700, Andres Freund wrote:
> This needs at the very least a response to the issues pointed out in the
> referenced email that you chose to ignore without any sort of comment.

That's definitely not cool.
--
Michael




pgsql: Install errcodes.txt for use by extensions.

2018-04-04 Thread Andrew Gierth
Install errcodes.txt for use by extensions.

Maintainers of out-of-tree PLs typically need access to the set of
error codes. To avoid the need to duplicate that information in some
form in PL source trees, provide errcodes.txt as part of a server
installation.

Thomas Munro, based on a suggestion from Andrew Gierth
Discussion: https://postgr.es/m/87woykk7mu.fsf%40news-spur.riddles.org.uk

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/1fd8690668635bab9dfa16b2885e6e474f8451ba

Modified Files
--
src/backend/Makefile   |  2 ++
src/backend/utils/Makefile | 10 ++
src/tools/msvc/Install.pm  |  3 +++
3 files changed, 15 insertions(+)



Re: pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Peter Geoghegan
On Wed, Apr 4, 2018 at 5:58 PM, Tom Lane  wrote:
>> TRAP: FailedAssertion("!(metad->btm_version == 3)", File:
>> "/home/pg/postgresql/root/build/../source/src/backend/access/nbtree/nbtpage.c",
>> Line: 619)
>
> Hm, buildfarm's not complaining --- what's the test case?

This was discovered while testing/reviewing the latest version of the
INCLUDE covering indexes patch. It now seems to be unrelated.

Sorry for the noise.

-- 
Peter Geoghegan



Re: pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Peter Geoghegan
On Wed, Apr 4, 2018 at 3:32 PM, Alexander Korotkov
 wrote:
> Hi!
>
> On Wed, Apr 4, 2018 at 7:29 PM, Teodor Sigaev  wrote:
>>
>> Skip full index scan during cleanup of B-tree indexes when possible
>
>
> Thank you for committing this.
>
> It appears that the patch contains some redundant variables.  See warnings
> produced
> by gcc-7.

I also see an assertion failure within _bt_getrootheight():

TRAP: FailedAssertion("!(metad->btm_version == 3)", File:
"/home/pg/postgresql/root/build/../source/src/backend/access/nbtree/nbtpage.c",
Line: 619)

-- 
Peter Geoghegan



pgsql: Fix the new ARMv8 CRC code for short and unaligned input.

2018-04-04 Thread Heikki Linnakangas
Fix the new ARMv8 CRC code for short and unaligned input.

The code before the main loop, which handles the possible 1-7 unaligned bytes
at the beginning of the input, was broken and read past the end of the input
if the input was very short.
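
A self-contained sketch of the safe way to consume that unaligned prefix; this is illustrative only, not the committed hunk, and the __crc32cb intrinsic comes from <arm_acle.h>:

    #include <arm_acle.h>
    #include <stddef.h>
    #include <stdint.h>

    /*
     * Consume leading bytes until the pointer is 8-byte aligned, but never
     * read beyond the end of the input, even when len < 8.
     */
    static uint32_t
    crc32c_unaligned_prefix(uint32_t crc, const unsigned char **p, size_t *len)
    {
        while ((((uintptr_t) *p) & 7) != 0 && *len > 0)
        {
            crc = __crc32cb(crc, **p);
            (*p)++;
            (*len)--;
        }
        return crc;
    }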

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/3a5e0a91bb324ad2b2b1a0623a3f2e37772b43fc

Modified Files
--
src/port/pg_crc32c_armv8.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)



Re: pgsql: Transforms for jsonb to PL/Perl

2018-04-04 Thread Anthony Bykov
On Tue, 03 Apr 2018 17:37:04 -0400
Tom Lane  wrote:

> I wrote:
> > Hm, it fails on my own machine too (RHEL6, perl 5.10.1), with the
> > same "cannot transform this Perl type to jsonb" symptoms.  A bit
> > of tracing shows that SvTYPE(in) is returning SVt_PVIV in some
> > of the failing cases, and SVt_PVNV in others.  
> 
> I tried to fix this by reducing the amount of knowledge that function
> embeds about the possible SvTYPEs.  After the special cases for AV,
> HV, and NULL, the attached just tests SvIOK, SvNOK, and SvPOK, and
> does the right thing for each case.
> 
> This results in one change in the module's test results: the example
> that thinks it's returning a regexp match result no longer fails,
> but just returns the scalar result (0).  I'm inclined to think that
> this is correct/desirable and the existing behavior is an accidental
> artifact of not coping with Perl's various augmented representations
> of scalar values.
> 
> Thoughts?
> 
>   regards, tom lane
> 


Hello.
I think that there is a mistake in test:
CREATE FUNCTION testRegexpToJsonb() RETURNS jsonb
LANGUAGE plperl
TRANSFORM FOR TYPE jsonb
AS $$
return ('1' =~ m(0\t2));
$$;

=~ is the operator testing a regular expression match.
Hence, the testRegexpToJsonb function returns a true/false value
(when used in scalar context, the return value
generally indicates the success of the operation).

I guess the right test will look a little bit different:
CREATE FUNCTION testRegexpToJsonb() RETURNS jsonb
LANGUAGE plperl
TRANSFORM FOR TYPE jsonb
AS $$
$a = qr//;
return ($a);
$$;

So, this may be the reason why the original testRegexpToJsonb returns
the scalar result.

--
Anthony Bykov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: pgsql: Transforms for jsonb to PL/Perl

2018-04-04 Thread Tom Lane
Anthony Bykov  writes:
> Tom Lane  wrote:
>> This results in one change in the module's test results: the example
>> that thinks it's returning a regexp match result no longer fails,
>> but just returns the scalar result (0).  I'm inclined to think that
>> this is correct/desirable and the existing behavior is an accidental
>> artifact of not coping with Perl's various augmented representations
>> of scalar values.

> I think that there is a mistake in test:
> CREATE FUNCTION testRegexpToJsonb() RETURNS jsonb
> LANGUAGE plperl
> TRANSFORM FOR TYPE jsonb
> AS $$
> return ('1' =~ m(0\t2));
> $$;

> =~ is the operator testing a regular expression match. 
> Hence, testRegexpToJsonb function returns true/false values
> (when used in scalar context, the return value
> generally indicates the success of the operation).

Right, that was my point: this is returning a scalar result that
just happens to have been derived from a regexp match.  So the
output ought to be 1 or 0, and the fact that (on some platforms?)
it isn't represents a bug.

> I guess the right test will look a little bit different:
> CREATE FUNCTION testRegexpToJsonb() RETURNS jsonb
> LANGUAGE plperl
> TRANSFORM FOR TYPE jsonb
> AS $$
> $a = qr//;
> return ($a);
> $$;

This is testing something else.  I don't object to adding it,
but we should keep the existing test in some form to verify
that the bug stays fixed.

regards, tom lane



Re: pgsql: Optimize btree insertions for common case of increasing values

2018-04-04 Thread Pavan Deolasee
On Thu, Mar 29, 2018 at 4:39 AM, Peter Geoghegan  wrote:

>
>
> Suggested next steps to deal with this:
>
> * A minor point: I don't think you should call
> RelationSetTargetBlock() when the page P_ISROOT(), which, as I
> mentioned, is a condition that can coexist with P_ISLEAF() with very
> small B-Trees. There can be no point in doing so. No need to add
> P_ISROOT() to the main "Is cached page stale?" test within
> _bt_doinsert(), though.
>

Ok. Adding a check for tree height and setting the target block only if the
height is >= 2, as suggested by you and Simon later. Rahila also helped me run
another round of tests, and this does not lead to any performance regression
(we were worried about whether calling _bt_getrootheight would be expensive).

Also moved RelationSetTargetBlock() to a point where we are not holding any
locks and are outside the critical section.
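
For readers of the digest, a rough sketch of what those two points amount to in _bt_doinsert(); this is illustrative, not the exact committed hunk, though _bt_getrootheight(), RelationSetTargetBlock(), BlockNumberIsValid() and BufferGetBlockNumber() are existing routines:

    /* Remember the block number before the buffer lock is released. */
    cachedBlock = BufferGetBlockNumber(buf);

    /* ... tuple inserted, locks released, critical section exited ... */

    /*
     * Cache the rightmost leaf block for the next insert, but only for
     * trees of height >= 2, so the cached page cannot also be the root.
     */
    if (BlockNumberIsValid(cachedBlock) && _bt_getrootheight(rel) >= 2)
        RelationSetTargetBlock(rel, cachedBlock);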


>
> * An assertion would make me feel a lot better about depending on not
> having a page split from a significant distance.
>

Ok. I assume you mean an assertion to check that the target page doesn't
have an incomplete split. Added that, though I'm not sure it's useful since
we concluded that the rightmost page can't have an incomplete split.

Let me know if you mean something else.


> Your optimization should continue to not be used when it would result
> in a page split, but only because that would be slow. The comments
> should say so, IMV.


Added.


> Also, _bt_insertonpg() should have an assertion
> against a page split actually occurring when the optimization was
> used, just in case. When !BufferIsValid(cbuf), we know that we're
> being called from _bt_doinsert() (see existing assertion at the top of
> _bt_insertonpg() as an example of this), so at the point where it's
> clear a page split is needed, we should assert that there is no target
> block that we must have been passed as the target page.
>
>
You mean passing "fastpath" to _bt_insertonpg and then checking that it's
false if a page split is needed? But isn't a page split only needed if the
page doesn't have enough free space? If so, we have already checked for that
before setting "fastpath".


> * The indentation of the main _bt_doinsert() test does not follow
> project guidelines. Please tweak that, too.
>

Ok. Fixed.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


pg_btree_target_block_v4_delta3.patch
Description: Binary data


Re: pgsql: Validate page level checksums in base backups

2018-04-04 Thread Michael Banck
Hi,

On Wed, Apr 04, 2018 at 11:38:35AM +0200, Magnus Hagander wrote:
> On Tue, Apr 3, 2018 at 10:48 PM, Michael Banck 
> wrote:
> 
> > Hi,
> >
> > On Tue, Apr 03, 2018 at 08:48:08PM +0200, Magnus Hagander wrote:
> > > On Tue, Apr 3, 2018 at 8:29 PM, Tom Lane  wrote:
> > > I'd bet a good lunch that nondefault BLCKSZ would break it, as well,
> > > > since the way in which the corruption is induced is just guessing
> > > > as to where page boundaries are.
> > >
> > > Yeah, that might be a problem. Those should be calculated from the block
> > > size.
> > >
> > > Also, scribbling on tables as sensitive as pg_class is just asking for
> > > > trouble IMO.  I don't see anything in this test, for example, that
> > > > prevents autovacuum from running and causing a PANIC before the test
> > > > can complete.  Even with AV off, there's a good chance that clobber-
> > > > cache-always animals will fall over because they do so many more
> > > > physical accesses to the system catalogs.  I'd suggest inducing the
> > > > corruption in some user table(s) that we can more tightly constrain
> > > > the source server's accesses to.
> > >
> > > Yeah, that seems like a good idea. And probably also shut the server down
> > > while writing the corruption, just in case.
> > >
> > > Will stick looking into that on my todo for when I'm back, unless beaten
> > to
> > > it. Michael, you want a stab at it?
> >
> > Attached is a patch which does that hopefully:
> >
> > 1. creates two user tables, one large enough for at least 6 blocks
> > (around 360kb), the other just one block.
> >
> > 2. stops the cluster before scribbling over its data and starts it
> > afterwards.
> >
> > 3. uses the blocksize (and the page header size) to determine offsets
> > for scribbling.
> >
> > I've tested it with blocksizes 8 and 32 now, the latter should make sure
> > that the first table is indeed large enough, but maybe something less
> > arbitrary than "10,000 integers" should be used?
> >
> > Anyway, sorry for the hassle.
> >
> 
> Applied, with the addition that I explicitly disabled autovacuum on those
> tables as well.
 
Thanks! It looks like there were no further buildfarm failures so far,
let's see how this goes.

> We might want to enhance it further by calculating the figure 10,000 based
> on blocksize perhaps?

10,000 was roughly twice the size needed for 32k block sizes. If there
are concerns that this might not be enough, I am happy to invest some
more time here (next week probably). However, the pg_basebackup
testsuite takes up 800+ MB to run, so I don't see an urgent need to
optimize away 50-100 KB (which clearly everybody else thought as well)
if we are talking about disk space overhead.


Michael

-- 
Michael Banck
Projektleiter / Senior Berater
Tel.: +49 2166 9901-171
Fax:  +49 2166 9901-100
Email: michael.ba...@credativ.de

credativ GmbH, HRB Mönchengladbach 12080
USt-ID-Nummer: DE204566209
Trompeterallee 108, 41189 Mönchengladbach
Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer



pgsql: Fix platform and Perl-version dependencies in new jsonb_plperl code

2018-04-04 Thread Tom Lane
Fix platform and Perl-version dependencies in new jsonb_plperl code.

Testing SvTYPE() directly is more fraught with problems than one might
think, because depending on context Perl might be storing a scalar value
in one of several forms, eg both numeric and string values.  This resulted
in Perl-version-dependent buildfarm test failures.  Instead use the SvTYPE
test only to distinguish non-scalar cases (AV, HV, NULL).  Disambiguate
scalars by testing SvIOK, SvNOK, then SvPOK.  This creates a preference
order for how we will resolve cases where the value is available in more
than one form, which seems fine to me.

Furthermore, because we're now dealing directly with a "double" value
in the SvNOK case, we can get rid of an inadequate and unportable
string-comparison test for infinities, and use isinf() instead.
(We do need some additional #include and "-lm" infrastructure to use
that in a contrib module, per prior experiences.)
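
A sketch of the scalar handling this describes, in the order IOK, NOK, POK; it is illustrative only: the jb_from_* helpers are placeholders rather than functions from the patch, while the Sv* macros and isinf() are the standard APIs being referred to:

    static JsonbValue *
    sv_scalar_to_jsonb(SV *in)
    {
        if (SvIOK(in))
            return jb_from_int64((int64) SvIV(in));

        if (SvNOK(in))
        {
            double  nval = SvNV(in);

            /* jsonb has no representation for infinities */
            if (isinf(nval))
                elog(ERROR, "cannot convert infinity to jsonb");
            return jb_from_double(nval);
        }

        if (SvPOK(in))
            return jb_from_cstring(SvPV_nolen(in));

        elog(ERROR, "cannot transform this Perl type to jsonb");
        return NULL;            /* keep compiler quiet */
    }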

In passing, prevent the regression test results from depending on DROP
CASCADE order; I've not seen that malfunction, but it's trouble waiting
to happen.

Discussion: https://postgr.es/m/e1f3mmj-0006bf...@gemulon.postgresql.org

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/331b2369c0ad1e51d5e50bf5dd75232e0160553a

Modified Files
--
contrib/jsonb_plperl/Makefile   |  2 +
contrib/jsonb_plperl/expected/jsonb_plperl.out  | 26 ++---
contrib/jsonb_plperl/expected/jsonb_plperlu.out | 26 ++---
contrib/jsonb_plperl/jsonb_plperl.c | 76 ++---
contrib/jsonb_plperl/sql/jsonb_plperl.sql   | 16 +-
contrib/jsonb_plperl/sql/jsonb_plperlu.sql  | 16 +-
6 files changed, 110 insertions(+), 52 deletions(-)



Re: pgsql: Transforms for jsonb to PL/Perl

2018-04-04 Thread Tom Lane
I wrote:
> Anthony Bykov  writes:
>> I guess the right test will look a little bit different:
>> CREATE FUNCTION testRegexpToJsonb() RETURNS jsonb
>> LANGUAGE plperl
>> TRANSFORM FOR TYPE jsonb
>> AS $$
>> $a = qr//;
>> return ($a);
>> $$;

> This is testing something else.  I don't object to adding it,
> but we should keep the existing test in some form to verify
> that the bug stays fixed.

Huh.  I put that in, and it turns out that on some Perl versions
we get a string out instead of "don't know what that is".

***
*** 48,55 
  return ($a);
  $$;
  SELECT testRegexpToJsonb();
! ERROR:  cannot transform this Perl type to jsonb
! CONTEXT:  PL/Perl function "testregexptojsonb"
  -- this revealed a bug in the original implementation
  CREATE FUNCTION testRegexpResultToJsonb() RETURNS jsonb
  LANGUAGE plperl
--- 48,58 
  return ($a);
  $$;
  SELECT testRegexpToJsonb();
!  testregexptojsonb 
! ---
!  "(?^:foo)"
! (1 row)
! 
  -- this revealed a bug in the original implementation
  CREATE FUNCTION testRegexpResultToJsonb() RETURNS jsonb
  LANGUAGE plperl

So that's probably useful for the people it works for,
but I don't think we want a Perl-version-dependent
regression test for this.  I'm inclined to just take
this test case out again.

regards, tom lane



pgsql: Remove less-portable-than-believed test case.

2018-04-04 Thread Tom Lane
Remove less-portable-than-believed test case.

In commit 331b2369c I added a test to see what jsonb_plperl would do
with a qr{} result.  Turns out the answer is Perl version dependent.
That fact doesn't bother me particularly, but coping with multiple
result possibilities is way more work than this test seems worth.
So remove it again.

Discussion: https://postgr.es/m/e1f3mmj-0006bf...@gemulon.postgresql.org

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/eac93e20afe434a79e81558c17a7a1408cf9d74a

Modified Files
--
contrib/jsonb_plperl/expected/jsonb_plperl.out  | 13 +
contrib/jsonb_plperl/expected/jsonb_plperlu.out | 13 +
contrib/jsonb_plperl/sql/jsonb_plperl.sql   | 12 
contrib/jsonb_plperl/sql/jsonb_plperlu.sql  | 12 
4 files changed, 2 insertions(+), 48 deletions(-)



Re: pgsql: New files for MERGE

2018-04-04 Thread Andres Freund
Hi,

On 2018-04-03 08:32:45 -0700, Andres Freund wrote:
> Hi,
> 
> On 2018-04-03 09:24:12 +, Simon Riggs wrote:
> > New files for MERGE
> > src/backend/executor/nodeMerge.c   |  575 +++
> > src/backend/parser/parse_merge.c   |  660 
> > src/include/executor/nodeMerge.h   |   22 +
> > src/include/parser/parse_merge.h   |   19 +
> 
> Getting a bit grumpy here.  So you pushed this, without responding in
> any way to the objections I made in
> http://archives.postgresql.org/message-id/20180403021800.b5nsgiclzanobiup%40alap3.anarazel.de
> and did it in a manner that doesn't even compile?

This needs at the very least a response to the issues pointed out in the
referenced email that you chose to ignore without any sort of comment.

Greetings,

Andres Freund



Re: pgsql: Optimize btree insertions for common case of increasing values

2018-04-04 Thread Peter Geoghegan
On Wed, Apr 4, 2018 at 5:33 AM, Pavan Deolasee  wrote:
> Ok. Adding a check for tree height and setting target block only if it's >=
> 2, as suggested by you and Simon later. Rahila helped me also ran another
> round of tests and this does not lead to any performance regression (we were
> worried about whether calling _bt_getrootheight will be expensive).

Right.

> Also moved RelationSetTargetBlock() to a point where we are not holding any
> locks and are outside the critical section.

Right.

>> * An assertion would make me feel a lot better about depending on not
>> having a page split from a significant distance.
>
>
> Ok. I assume you mean an assertion to check that the target page doesn't
> have an in-complete split. Added that though not sure if it's useful since
> we concluded that right-most page can't have in-complete split.
>
> Let me know if you mean something else.

I meant something else. I was talking about the assertion discussed
below. I don't see too much point in the !P_INCOMPLETE_SPLIT(lpageop)
assertion, though.

>> Your optimization should continue to not be used when it would result
>> in a page split, but only because that would be slow. The comments
>> should say so, IMV.
>
>
> Added.

I think the wording here could use some tweaking:

> /*
> -* Check if the page is still the rightmost leaf page, has enough
> -* free space to accommodate the new tuple, no split is in 
> progress
> -* and the scankey is greater than or equal to the first key on 
> the
> -* page.
> +* Check if the page is still the rightmost valid leaf page, has
> +* enough free space to accommodate the new tuple and the scankey
> +* is strictly greater than the first key on the page.
> +*
> +* NB: We could take the fastpath even when the target block
> +* doesn't have enough free space (but it's the right-most block)
> +* since _bt_insertonpg() is capable of working with a NULL stack
> +* and that's the only additional thing the slow path sets up. But
> +* we don't optimise for that case because while spliting and
> +* inserting into the parent without the stack is relatively slow
> +* operation.
>  */

I would cut this down, and just say "We only insert if it definitely
won't result in a pagesplit" -- no need for the second paragraph in
this high-level routine. The full details can go on top of the new
_bt_insertonpg() assertion I talk about later.

> You mean passing "fastpath" to _bt_insertonpg and then checking it's false
> if page split is needed? But isn't page split is only needed if the page
> doesn't have enough free space? If so, we have checked for that before
> setting "fastpath".

That's not exactly what I meant. I meant that if:

1. This is an insertion to the leaf level in _bt_insertonpg().

and

2. We don't have space on the page, and so must do a split (or at
least free LP_DEAD items).

and

3. RelationGetTargetBlock(rel) != InvalidBlockNumber

There should be an assertion failure. This new assertion within
_bt_insertonpg() makes it much less likely that the assumption breaks
later.

This is where you could point out the low-level details that I
suggested be omitted from _bt_doinsert() at the beginning of this
e-mail. You can mention here that it would actually work without a
pagesplit, but that is only intended for crash recovery, and is a much
slower path that would make the optimization totally
counter-productive. We add an assertion because without one it would
be easy to miss a regression where there is a page split with an empty
stack.
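
A sketch of the kind of assertion being asked for, at the point in _bt_insertonpg() where a split has become necessary; placement and exact form are assumptions rather than the committed code, though P_ISLEAF, BlockNumberIsValid and RelationGetTargetBlock are existing macros:

    /*
     * We should never need to split if the fastpath was taken: the
     * fastpath is only used when the cached rightmost page has enough
     * free space, and callers that fail the fastpath checks clear the
     * cached block before falling through to the normal path.
     */
    Assert(!(P_ISLEAF(lpageop) &&
             BlockNumberIsValid(RelationGetTargetBlock(rel))));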

Finally, I'd like to see a small paragraph in the nbtree README, about
the high level theory behind this optimization and page recycling. The
assumption is that there can only be one non-ignorable leaf rightmost
page, and so even a RecentGlobalXmin style interlock isn't required.
We cannot fail to detect that our hint was invalidated, because there
can only be one such page in the B-Tree at any time. It's possible
that the page will be deleted and recycled without a backend's cached
page also being detected as invalidated, but only when we happen to
recycle a page that once again becomes the rightmost leaf page.

Once those changes are made, this should be fine to commit.

-- 
Peter Geoghegan



pgsql: Foreign keys on partitioned tables

2018-04-04 Thread Alvaro Herrera
Foreign keys on partitioned tables

Author: Álvaro Herrera
Discussion: https://postgr.es/m/20171231194359.cvojcour423ulha4@alvherre.pgsql
Reviewed-by: Peter Eisentraut

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/3de241dba86f3dd000434f70aebba725fb928032

Modified Files
--
doc/src/sgml/ref/alter_table.sgml  |   3 +-
doc/src/sgml/ref/create_table.sgml |  13 +-
src/backend/catalog/pg_constraint.c| 237 +
src/backend/commands/tablecmds.c   | 193 ++-
src/backend/parser/parse_utilcmd.c |  12 --
src/backend/utils/adt/ri_triggers.c|  59 ---
src/bin/pg_dump/pg_dump.c  |  42 +++--
src/include/catalog/pg_constraint_fn.h |  16 ++
src/include/commands/tablecmds.h   |   4 +
src/test/regress/expected/alter_table.out  |   4 -
src/test/regress/expected/create_table.out |  10 --
src/test/regress/expected/foreign_key.out  | 211 +
src/test/regress/expected/inherit.out  |  25 +++
src/test/regress/sql/alter_table.sql   |   1 -
src/test/regress/sql/create_table.sql  |   8 -
src/test/regress/sql/foreign_key.sql   | 154 +++
src/test/regress/sql/inherit.sql   |  12 ++
17 files changed, 895 insertions(+), 109 deletions(-)



pgsql: Skip full index scan during cleanup of B-tree indexes when possible

2018-04-04 Thread Teodor Sigaev
Skip full index scan during cleanup of B-tree indexes when possible

Vacuum of an index consists of two stages: multiple (zero or more)
ambulkdelete calls and one amvacuumcleanup call. When the workload on a
particular table is append-only, autovacuum isn't intended to touch that
table. However, the user may run vacuum manually in order to fill the
visibility map and get the benefits of index-only scans. Then ambulkdelete
wouldn't be called for the indexes of such a table (because no heap tuples
were deleted), only amvacuumcleanup would be called. In this case,
amvacuumcleanup would perform a full index scan for two objectives: put
recyclable pages into the free space map and update index statistics.

This patch allows btvacuumcleanup to skip the full index scan when two
conditions are satisfied: no pages are going to be put into the free space
map, and the index statistics aren't stale. In order to check the first
condition, we store the oldest btpo_xact in the meta-page. When it precedes
RecentGlobalXmin, there are some recyclable pages. In order to check the
second condition, we store the number of heap tuples observed during the
previous full index scan by cleanup. If the fraction of newly inserted
tuples is less than vacuum_cleanup_index_scale_factor, the statistics aren't
considered stale. vacuum_cleanup_index_scale_factor can be set both as a
reloption and as a GUC (which provides the default).
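
A compact sketch of that decision with the two conditions spelled out; it is illustrative only, not the committed cleanup routine, and the argument names are made up, while TransactionIdIsValid, TransactionIdPrecedes and RecentGlobalXmin are the real primitives:

    static bool
    cleanup_needs_full_scan(TransactionId oldest_btpo_xact,
                            double prev_heap_tuples,   /* from meta-page */
                            double cur_heap_tuples,    /* from this VACUUM */
                            double scale_factor)       /* reloption or GUC */
    {
        /* condition 1: recyclable pages waiting to go into the FSM */
        if (TransactionIdIsValid(oldest_btpo_xact) &&
            TransactionIdPrecedes(oldest_btpo_xact, RecentGlobalXmin))
            return true;

        /* condition 2: statistics have become stale */
        if (prev_heap_tuples < 0 ||        /* never scanned before */
            cur_heap_tuples > (1.0 + scale_factor) * prev_heap_tuples)
            return true;

        return false;
    }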

This patch bumps the B-tree meta-page version. The meta-page is upgraded
"on the fly": during VACUUM it is rewritten with the new version. No special
handling in pg_upgrade is required.

Author: Masahiko Sawada, Alexander Korotkov
Review by: Peter Geoghegan, Kyotaro Horiguchi, Alexander Korotkov, Yura Sokolov
Discussion: 
https://www.postgresql.org/message-id/flat/CAD21AoAX+d2oD_nrd9O2YkpzHaFr=uqegr9s1rkc3o4enc5...@mail.gmail.com

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/857f9c36cda520030381bd8c2af20adf0ce0e1d4

Modified Files
--
contrib/amcheck/verify_nbtree.c   |   8 +-
contrib/pageinspect/Makefile  |   3 +-
contrib/pageinspect/btreefuncs.c  |   4 +-
contrib/pageinspect/expected/btree.out|  16 +--
contrib/pageinspect/pageinspect--1.6--1.7.sql |  26 +
contrib/pageinspect/pageinspect.control   |   2 +-
contrib/pgstattuple/expected/pgstattuple.out  |  10 +-
doc/src/sgml/config.sgml  |  25 +
doc/src/sgml/pageinspect.sgml |  16 +--
doc/src/sgml/ref/create_index.sgml|  15 +++
src/backend/access/common/reloptions.c|  13 ++-
src/backend/access/nbtree/nbtinsert.c |  12 +++
src/backend/access/nbtree/nbtpage.c   | 150 --
src/backend/access/nbtree/nbtree.c| 118 ++--
src/backend/access/nbtree/nbtxlog.c   |   6 +-
src/backend/utils/init/globals.c  |   2 +
src/backend/utils/misc/guc.c  |  10 ++
src/include/access/nbtree.h   |  11 +-
src/include/access/nbtxlog.h  |   4 +
src/include/miscadmin.h   |   2 +
src/include/utils/rel.h   |   2 +
src/test/regress/expected/btree_index.out |  29 +
src/test/regress/sql/btree_index.sql  |  19 
23 files changed, 458 insertions(+), 45 deletions(-)



pgsql: Improve FSM management for BRIN indexes.

2018-04-04 Thread Tom Lane
Improve FSM management for BRIN indexes.

BRIN indexes like to propagate additions of free space into the upper pages
of their free space maps as soon as the new space is known, even when it's
just on one individual index page.  Previously this required calling
FreeSpaceMapVacuum, which is quite an expensive thing if the map is large.
Use the FreeSpaceMapVacuumRange function recently added by commit c79f6df75
to reduce the amount of work done for this purpose.
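
A sketch of the resulting pattern; it is illustrative only, though RecordPageWithFreeSpace and FreeSpaceMapVacuumRange are the real FSM functions, and the variable names are placeholders:

    /* record the space freed on one index page ... */
    RecordPageWithFreeSpace(idxrel, blkno, freespace);

    /*
     * ... and propagate it upward for just that block range, rather than
     * vacuuming the whole free space map with FreeSpaceMapVacuum().
     */
    FreeSpaceMapVacuumRange(idxrel, blkno, blkno + 1);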

Fix a couple of places that neglected to do the upper-page vacuuming at all
after recording new free space.  If the policy is to be that BRIN should do
that, it should do it everywhere.

Do RecordPageWithFreeSpace unconditionally in brin_page_cleanup, and do
FreeSpaceMapVacuum unconditionally in brin_vacuum_scan.  Because of the
FSM's imprecise storage of free space, the old complications here seldom
bought anything, they just slowed things down.  This approach also
provides a predictable path for FSM corruption to be repaired.

Remove premature RecordPageWithFreeSpace call in brin_getinsertbuffer
where it's about to return an extended page to the caller.  The caller
should do that, instead, after it's inserted its new tuple.  Fix the
one caller that forgot to do so.

Simplify logic in brin_doupdate's same-page-update case by postponing
brin_initialize_empty_new_buffer to after the critical section; I see
little point in doing it before.

Avoid repeat calls of RelationGetNumberOfBlocks in brin_vacuum_scan.
Avoid duplicate BufferGetBlockNumber and BufferGetPage calls in
a couple of places where we already had the right values.

Move a BRIN_elog debug logging call out of a critical section; that's
pretty unsafe and I don't think it buys us anything to not wait till
after the critical section.

Move the "*extended = false" step in brin_getinsertbuffer into the
routine's main loop.  There's no actual bug there, since the loop can't
iterate with *extended still true, but it doesn't seem very future-proof
as coded; and it's certainly not documented as a loop invariant.

This is all from follow-on investigation inspired by commit c79f6df75.

Discussion: https://postgr.es/m/5801.1522429...@sss.pgh.pa.us

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/1383e2a1a937116e1367c9584984f0730f9ef4d5

Modified Files
--
src/backend/access/brin/brin.c |  29 +++---
src/backend/access/brin/brin_pageops.c | 155 ++---
src/include/access/brin_pageops.h  |   2 +-
3 files changed, 103 insertions(+), 83 deletions(-)



Re: pgsql: New files for MERGE

2018-04-04 Thread Pavan Deolasee
On Thu, Apr 5, 2018 at 12:16 AM, Andres Freund  wrote:

> Hi,
>
> On 2018-04-05 00:02:06 +0530, Pavan Deolasee wrote:
> > Apologies from my end. Simon checked with me regarding your referenced
> > email. I was in the middle of responding to it (with a add-on patch to
> take
> > care of your review comments), but got side tracked by some high priority
> > customer escalation. I shall respond soon.
>
> Hows that an explanation for just going ahead and committing? Without
> even commenting on why one thinks the pointed out issues are something
> that can be resolved later or somesuch?  This has an incredibly rushed
> feel to it.
>

While I don't want to answer that on Simon's behalf, my feeling is that he
may not have seen your email since it came pretty late. He had probably
planned to commit the patch again first thing in the morning with the fixes
I'd sent.

Anyway, I think your review comments are useful and I've incorporated most
of them. Obviously, certain things like creating completely new executor
machinery are not practical given where we are in the release cycle, and I
am not sure that would have any significant advantages over what we have
today.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


pgsql: Rewrite pg_dump TAP tests

2018-04-04 Thread Stephen Frost
Rewrite pg_dump TAP tests

This reworks how the tests to run are defined.  Instead of having to
define all runs for all tests, we define those tests which should pass
(generally using one of the defined broad hashes), add in any which
should be specific for this test, and exclude any specific runs that
shouldn't pass for this test.  This ends up removing some 4k+ lines
(more than half the file) but, more importantly, greatly simplifies the
way runs-to-be-tested are defined.

As discussed in the updated comments, for example, take the test which
does CREATE TABLE test_table.  That CREATE TABLE should show up in all
'full' runs of pg_dump, except those cases where 'test_table' is
excluded, of course, and that's exactly how the test gets defined now
(modulo a few other related cases, like where we dump only that table,
or we dump the schema it's in, or we exclude the schema it's in):

    like => {
        %full_runs,
        %dump_test_schema_runs,
        only_dump_test_table => 1,
        section_pre_data => 1, },
    unlike => {
        exclude_dump_test_schema => 1,
        exclude_test_table => 1, }, },

Next, we no longer expect every run to be listed for every test.  If a
run is listed in 'like' (directly or through a hash) then it's a 'like',
unless it's listed in 'unlike' in which case it's an 'unlike'.  If it
isn't listed in either, then it's considered an 'unlike' automatically.

Lastly, this changes the code to no longer use like/unlike but rather to
use 'ok()' with 'diag()' which allows much more control over what gets
spit out to the screen.  Gone are the days of the entire dump being sent
to the console, now you'll just get a couple of lines for each failing
test which say the test that failed and the run that it failed on.

This covers both the pg_dump TAP tests in src/bin/pg_dump and those in
src/test/modules/test_pg_dump.

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/446f7f5d789fe9ecfacd998407b5bee70aaa64f7

Modified Files
--
src/bin/pg_dump/t/002_pg_dump.pl| 5037 ---
src/test/modules/test_pg_dump/t/001_base.pl |  443 +--
2 files changed, 780 insertions(+), 4700 deletions(-)



Re: pgsql: New files for MERGE

2018-04-04 Thread Pavan Deolasee
On Wed, Apr 4, 2018 at 10:40 PM, Andres Freund  wrote:

> Hi,
>
> On 2018-04-03 08:32:45 -0700, Andres Freund wrote:
> > Hi,
> >
> > On 2018-04-03 09:24:12 +, Simon Riggs wrote:
> > > New files for MERGE
> > > src/backend/executor/nodeMerge.c   |  575 +++
> > > src/backend/parser/parse_merge.c   |  660 
> > > src/include/executor/nodeMerge.h   |   22 +
> > > src/include/parser/parse_merge.h   |   19 +
> >
> > Getting a bit grumpy here.  So you pushed this, without responding in
> > any way to the objections I made in
> > http://archives.postgresql.org/message-id/20180403021800.
> b5nsgiclzanobiup%40alap3.anarazel.de
> > and did it in a manner that doesn't even compile?
>
> This needs at the very least a response to the issues pointed out in the
> referenced email that you chose to ignore without any sort of comment.
>
>
Apologies from my end. Simon checked with me regarding your referenced
email. I was in the middle of responding to it (with an add-on patch to take
care of your review comments), but got sidetracked by a high-priority
customer escalation. I shall respond soon.

Thanks,
Pavan

-- 
 Pavan Deolasee   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: pgsql: New files for MERGE

2018-04-04 Thread Andres Freund
Hi,

On 2018-04-05 00:02:06 +0530, Pavan Deolasee wrote:
> Apologies from my end. Simon checked with me regarding your referenced
> email. I was in the middle of responding to it (with a add-on patch to take
> care of your review comments), but got side tracked by some high priority
> customer escalation. I shall respond soon.

How's that an explanation for just going ahead and committing? Without
even commenting on why one thinks the pointed-out issues are something
that can be resolved later or some such?  This has an incredibly rushed
feel to it.

Greetings,

Andres Freund



pgsql: docs: update ltree URL for the DMOZ catalog

2018-04-04 Thread Bruce Momjian
docs:  update ltree URL for the DMOZ catalog

Reported-by: bbrin...@gmail.com

Discussion: 
https://postgr.es/m/152283596377.1441.11672249301622760...@wrigleys.postgresql.org

Author: Oleg Bartunov

Backpatch-through: 9.3

Branch
--
REL9_5_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/15188cb5d5454fdfb58a33156d4ed8fc24a8608a

Modified Files
--
doc/src/sgml/ltree.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: docs: update ltree URL for the DMOZ catalog

2018-04-04 Thread Bruce Momjian
docs:  update ltree URL for the DMOZ catalog

Reported-by: bbrin...@gmail.com

Discussion: 
https://postgr.es/m/152283596377.1441.11672249301622760...@wrigleys.postgresql.org

Author: Oleg Bartunov

Backpatch-through: 9.3

Branch
--
REL9_6_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/374204ce816d63a8d0a22696322042c015123f36

Modified Files
--
doc/src/sgml/ltree.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: docs: update ltree URL for the DMOZ catalog

2018-04-04 Thread Bruce Momjian
docs:  update ltree URL for the DMOZ catalog

Reported-by: bbrin...@gmail.com

Discussion: 
https://postgr.es/m/152283596377.1441.11672249301622760...@wrigleys.postgresql.org

Author: Oleg Bartunov

Backpatch-through: 9.3

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/63f997931c5fc2974df33d77613e236434fba047

Modified Files
--
doc/src/sgml/ltree.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: docs: update ltree URL for the DMOZ catalog

2018-04-04 Thread Bruce Momjian
docs:  update ltree URL for the DMOZ catalog

Reported-by: bbrin...@gmail.com

Discussion: 
https://postgr.es/m/152283596377.1441.11672249301622760...@wrigleys.postgresql.org

Author: Oleg Bartunov

Backpatch-through: 9.3

Branch
--
REL9_4_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/9305257baffad0c212489b33616cdfd385d195b0

Modified Files
--
doc/src/sgml/ltree.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: docs: update ltree URL for the DMOZ catalog

2018-04-04 Thread Bruce Momjian
docs:  update ltree URL for the DMOZ catalog

Reported-by: bbrin...@gmail.com

Discussion: 
https://postgr.es/m/152283596377.1441.11672249301622760...@wrigleys.postgresql.org

Author: Oleg Bartunov

Backpatch-through: 9.3

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/cd1661bbcc7c20a5c4d00dd114263ea9afe36063

Modified Files
--
doc/src/sgml/ltree.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: docs: update ltree URL for the DMOZ catalog

2018-04-04 Thread Bruce Momjian
docs:  update ltree URL for the DMOZ catalog

Reported-by: bbrin...@gmail.com

Discussion: 
https://postgr.es/m/152283596377.1441.11672249301622760...@wrigleys.postgresql.org

Author: Oleg Bartunov

Backpatch-through: 9.3

Branch
--
REL9_3_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/4b9b3f5c437ecd544a2b8eac2835546039e2aa38

Modified Files
--
doc/src/sgml/ltree.sgml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)



pgsql: Restore erroneously removed ONLY from PK check

2018-04-04 Thread Alvaro Herrera
Restore erroneously removed ONLY from PK check

This is a blind fix, since I don't have SE-Linux to verify it.

Per unwanted change in rhinoceros, running sepgsql tests.  Noted by Tom
Lane.

Discussion: https://postgr.es/m/32347.1522865...@sss.pgh.pa.us

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/7d7c99790b2a7e6f4e5287a3fb29f73cedbb2105

Modified Files
--
src/backend/utils/adt/ri_triggers.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)



Re: pgsql: Restore erroneously removed ONLY from PK check

2018-04-04 Thread Tom Lane
Andres Freund  writes:
> Shouldn't the difference due to the ONLY be visible in cases with
> inheritance?  As in, spuriously succeeding or such?  Seems like
> something that a normal regression test would be good for?

Yeah, if it actually matters (which I think it does), it shouldn't be hard
to devise a regression test that shows a behavioral difference.

regards, tom lane



pgsql: Also fix the descriptions in pg_config.h.win32.

2018-04-04 Thread Heikki Linnakangas
Also fix the descriptions in pg_config.h.win32.

I missed pg_config.h.win32 in the previous commit that fixed these in
pg_config.h.in.

Branch
--
REL9_6_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/0d2012c9f04afe375200d16d539a9ec5c0093c07

Modified Files
--
src/include/pg_config.h.win32 | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)



pgsql: Also fix the descriptions in pg_config.h.win32.

2018-04-04 Thread Heikki Linnakangas
Also fix the descriptions in pg_config.h.win32.

I missed pg_config.h.win32 in the previous commit that fixed these in
pg_config.h.in.

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/8ed5249afff499d79bb9414a0340c495fccf53b1

Modified Files
--
src/include/pg_config.h.win32 | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)



pgsql: Also fix the descriptions in pg_config.h.win32.

2018-04-04 Thread Heikki Linnakangas
Also fix the descriptions in pg_config.h.win32.

I missed pg_config.h.win32 in the previous commit that fixed these in
pg_config.h.in.

Branch
--
REL9_5_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/0b650285269d6055d0f1c8b10e96a49b118a6d0d

Modified Files
--
src/include/pg_config.h.win32 | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)



pgsql: Also fix the descriptions in pg_config.h.win32.

2018-04-04 Thread Heikki Linnakangas
Also fix the descriptions in pg_config.h.win32.

I missed pg_config.h.win32 in the previous commit that fixed these in
pg_config.h.in.

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/638a199fa9459dac42b588ccfcf7003539f37416

Modified Files
--
src/include/pg_config.h.win32 | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)



pgsql: Fix pg_basebackup checksum tests

2018-04-04 Thread Magnus Hagander
Fix pg_basebackup checksum tests

Hopefully fix the fact that these checks are unstable, by introducing
the corruption in a separate table from pg_class, and also explicitly
disable autovacuum on those tables. Also make sure PostgreSQL is
stopped while the corruption is introduced to avoid possible caching
effects.

Author: Michael Banck

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/ee9e1455310ec57774ca67158571bec5d3288cdf

Modified Files
--
src/bin/pg_basebackup/t/010_pg_basebackup.pl | 35 +++-
1 file changed, 24 insertions(+), 11 deletions(-)



pgsql: Use ARMv8 CRC instructions where available.

2018-04-04 Thread Heikki Linnakangas
Use ARMv8 CRC instructions where available.

ARMv8 introduced special CPU instructions for calculating CRC-32C. Use
them, when available, for speed.

Like with the similar Intel CRC instructions, several factors affect
whether the instructions can be used. The compiler intrinsics for them must
be supported by the compiler, and the instructions must be supported by the
target architecture. If the compilation target architecture does not
support the instructions, but adding "-march=armv8-a+crc" makes them
available, then we compile the code with a runtime check to determine if
the host we're running on supports them or not.

For the runtime check, use glibc getauxval() function. Unfortunately,
that's not very portable, but I couldn't find any more portable way to do
it. If getauxval() is not available, the CRC instructions will still be
used if the target architecture supports them without any additional
compiler flags, but the runtime check will not be available.
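
A self-contained sketch of the glibc runtime check described above; it is illustrative only, the committed logic lives in src/port/pg_crc32c_armv8_choose.c, and HWCAP_CRC32 comes from <asm/hwcap.h> on AArch64 Linux:

    #include <stdbool.h>
    #include <sys/auxv.h>       /* getauxval, AT_HWCAP */
    #include <asm/hwcap.h>      /* HWCAP_CRC32 */

    /* Does the CPU we are running on support the ARMv8 CRC32 instructions? */
    static bool
    armv8_crc32c_available(void)
    {
    #ifdef HWCAP_CRC32
        return (getauxval(AT_HWCAP) & HWCAP_CRC32) != 0;
    #else
        return false;
    #endif
    }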

Original patch by Yuqi Gu, heavily modified by me. Reviewed by Andres
Freund, Thomas Munro.

Discussion: 
https://www.postgresql.org/message-id/HE1PR0801MB1323D171938EABC04FFE7FA9E3110%40HE1PR0801MB1323.eurprd08.prod.outlook.com

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/f044d71e331d77a0039cec0a11859b5a3c72bc95

Modified Files
--
config/c-compiler.m4   |  34 
configure  | 194 +++--
configure.in   |  87 +++--
src/Makefile.global.in |   1 +
src/include/pg_config.h.in |   9 +
src/include/port/pg_crc32c.h   |  26 ++-
src/port/Makefile  |   4 +
src/port/pg_crc32c_armv8.c |  72 
src/port/pg_crc32c_armv8_choose.c  |  55 ++
...pg_crc32c_choose.c => pg_crc32c_sse42_choose.c} |  13 +-
src/tools/msvc/Mkvcbuild.pm|   2 +-
11 files changed, 456 insertions(+), 41 deletions(-)



Re: pgsql: Transforms for jsonb to PL/Perl

2018-04-04 Thread Anthony Bykov
On Tue, 03 Apr 2018 17:37:04 -0400
Tom Lane  wrote:

> I wrote:
> > Hm, it fails on my own machine too (RHEL6, perl 5.10.1), with the
> > same "cannot transform this Perl type to jsonb" symptoms.  A bit
> > of tracing shows that SvTYPE(in) is returning SVt_PVIV in some
> > of the failing cases, and SVt_PVNV in others.  
> 
> I tried to fix this by reducing the amount of knowledge that function
> embeds about the possible SvTYPEs.  After the special cases for AV,
> HV, and NULL, the attached just tests SvIOK, SvNOK, and SvPOK, and
> does the right thing for each case.
> 
> This results in one change in the module's test results: the example
> that thinks it's returning a regexp match result no longer fails,
> but just returns the scalar result (0).  I'm inclined to think that
> this is correct/desirable and the existing behavior is an accidental
> artifact of not coping with Perl's various augmented representations
> of scalar values.
> 
> Thoughts?
> 
>   regards, tom lane
> 

Hello,
I don't think users expect to get 0 in jsonb when they return a regexp:
it should be possible to convert the resulting jsonb back to Perl
with exactly the same type and data.


--
Anthony Bykov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



pgsql: Fix incorrect description of USE_SLICING_BY_8_CRC32C.

2018-04-04 Thread Heikki Linnakangas
Fix incorrect description of USE_SLICING_BY_8_CRC32C.

And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK,
spotted by Daniel Gustafsson.

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/8989f52b1b0636969545e6c8f6c813bc563ebcf5

Modified Files
--
configure.in   | 4 ++--
src/include/pg_config.h.in | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)



pgsql: Fix incorrect description of USE_SLICING_BY_8_CRC32C.

2018-04-04 Thread Heikki Linnakangas
Fix incorrect description of USE_SLICING_BY_8_CRC32C.

And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK,
spotted by Daniel Gustafsson.

Branch
--
REL9_5_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/127f489c2c7e9ff0e6cd5f990dab497ea3cf7e87

Modified Files
--
configure.in   | 4 ++--
src/include/pg_config.h.in | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)



pgsql: Fix incorrect description of USE_SLICING_BY_8_CRC32C.

2018-04-04 Thread Heikki Linnakangas
Fix incorrect description of USE_SLICING_BY_8_CRC32C.

And a typo in the description of USE_SSE42_CRC32C_WITH_RUNTIME_CHECK,
spotted by Daniel Gustafsson.

Branch
--
REL_10_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/a3c64ed6ce0da2d18e56179cac8bd752cf79f4b7

Modified Files
--
configure.in   | 4 ++--
src/include/pg_config.h.in | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)



Re: pgsql: Validate page level checksums in base backups

2018-04-04 Thread Magnus Hagander
On Tue, Apr 3, 2018 at 10:48 PM, Michael Banck 
wrote:

> Hi,
>
> On Tue, Apr 03, 2018 at 08:48:08PM +0200, Magnus Hagander wrote:
> > On Tue, Apr 3, 2018 at 8:29 PM, Tom Lane  wrote:
> > I'd bet a good lunch that nondefault BLCKSZ would break it, as well,
> > > since the way in which the corruption is induced is just guessing
> > > as to where page boundaries are.
> >
> > Yeah, that might be a problem. Those should be calculated from the block
> > size.
> >
> > Also, scribbling on tables as sensitive as pg_class is just asking for
> > > trouble IMO.  I don't see anything in this test, for example, that
> > > prevents autovacuum from running and causing a PANIC before the test
> > > can complete.  Even with AV off, there's a good chance that clobber-
> > > cache-always animals will fall over because they do so many more
> > > physical accesses to the system catalogs.  I'd suggest inducing the
> > > corruption in some user table(s) that we can more tightly constrain
> > > the source server's accesses to.
> >
> > Yeah, that seems like a good idea. And probably also shut the server down
> > while writing the corruption, just in case.
> >
> > Will stick looking into that on my todo for when I'm back, unless beaten
> to
> > it. Michael, you want a stab at it?
>
> Attached is a patch which does that hopefully:
>
> 1. creates two user tables, one large enough for at least 6 blocks
> (around 360kb), the other just one block.
>
> 2. stops the cluster before scribbling over its data and starts it
> afterwards.
>
> 3. uses the blocksize (and the page header size) to determine offsets
> for scribbling.
>
> I've tested it with blocksizes 8 and 32 now, the latter should make sure
> that the first table is indeed large enough, but maybe something less
> arbitrary than "10,000 integers" should be used?
>
> Anyway, sorry for the hassle.
>

Applied, with the addition that I explicitly disabled autovacuum on those
tables as well.

We might want to enhance it further by calculating the figure 10,000 based
on blocksize perhaps?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/