Re: [HACKERS] Transaction control in procedures

2017-11-08 Thread Simon Riggs
> A) There needs to be a different entry point into SPI, because SPI_exec cannot
> handle statements ending transactions.  But so far my assessment is that
> this can be added in a mostly independent way later on.

Sounds like a separate commit, but perhaps it influences the design?

> B) There needs to be some kind of call stack for procedure and function
> invocations, so that we can track when we are allowed to make
> transaction controlling calls.  The key term in the SQL standard is
> "non-atomic execution context", which seems specifically devised to
> cover this scenario.  So for example, if you have CALL -> CALL -> CALL,
> the third call can issue a transaction statement.  But if you have CALL
> -> SELECT -> CALL, then the last call cannot, because the SELECT
> introduces an atomic execution context.  I don't know if we have such a
> thing yet or something that we could easily latch on to.
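
For illustration, here is how the quoted rule might look once procedures
and transaction control exist - a hedged sketch only, with hypothetical
procedure names, not the patch's actual test cases:

CREATE PROCEDURE p3() LANGUAGE plpgsql AS $$
BEGIN
    COMMIT;    -- allowed: every frame above is a CALL
END $$;

CREATE PROCEDURE p2() LANGUAGE plpgsql AS $$
BEGIN
    CALL p3(); -- CALL -> CALL -> CALL keeps the context non-atomic
END $$;

CALL p2();

-- A SELECT anywhere in the chain (CALL -> SELECT -> CALL) makes the
-- execution context atomic, so the COMMIT above would have to fail.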

Yeh. The internal_xact flag is only set at EoXact, so it's not really
set as described in the .h file.
It would certainly be useful to have some state that allows sanity
checking on weird state transitions.

What we would benefit from is a README that gives the theory of
operation, so everyone can read and agree.

Presumably we would need to contact the authors of the main PL
languages to get them to comment on the API requirements for their languages.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-11-07 Thread Simon Riggs
On 31 October 2017 at 12:01, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Tue, Oct 31, 2017 at 2:00 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> On 30 October 2017 at 18:58, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> On 30 October 2017 at 15:22, Simon Riggs <si...@2ndquadrant.com> wrote:
>>>
>>>>> You forgot to update this formula in xlog.c:
> distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;
>>>>> /* add 10% for good measure. */
>>>>> distance *= 1.10;
>>>>> You can guess easily where the mistake is.
>>>>
>>>> Doh - fixed that before posting, so I must have sent the wrong patch.
>>>>
>>>> It's the 10%, right? ;-)
>>>
>>> So, there are 2 locations that mention 2.0 in xlog.c
>>>
>>> I had already fixed one, which is why I remembered editing it.
>>>
>>> The other one you mention has a detailed comment above it explaining
>>> why it should be 2.0 rather than 1.0, so I left it.
>>>
>>> Let me know if you still think it should be removed? I can see the
>>> argument both ways.
>>> Or maybe we need another patch to account for manual checkpoints.
>>
>> Revised patch to implement this
>
> Here is the comment and the formula:
>  * The reason this calculation is done from the prior checkpoint, not the
>  * one that just finished, is that this behaves better if some checkpoint
>  * cycles are abnormally short, like if you perform a manual checkpoint
>  * right after a timed one. The manual checkpoint will make almost a full
>  * cycle's worth of WAL segments available for recycling, because the
>  * segments from the prior's prior, fully-sized checkpoint cycle are no
>  * longer needed. However, the next checkpoint will make only few segments
>  * available for recycling, the ones generated between the timed
>  * checkpoint and the manual one right after that. If at the manual
>  * checkpoint we only retained enough segments to get us to the next timed
>  * one, and removed the rest, then at the next checkpoint we would not
>  * have enough segments around for recycling, to get us to the checkpoint
>  * after that. Basing the calculations on the distance from the prior redo
>  * pointer largely fixes that problem.
>  */
> distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;
>
> While the mention about a manual checkpoint happening after a timed
> one will cause a full range of WAL segments to be recycled, it is not
> actually true that segments of the prior's prior checkpoint are not
> needed, because with your patch the segments of the prior checkpoint
> are getting recycled. So it seems to me that based on that the formula
> ought to use 1.0 instead of 2.0...

I think the argument in the comment is right, in that
CheckPointDistanceEstimate is better if we use multiple checkpoint
cycles.

But the implementation of that is bogus and multiplying by 2.0
wouldn't make it better if CheckPointDistanceEstimate is wrong.

So I will change to 1.0 as you say and rewrite the comment. Thanks for
your review.
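
To make the change concrete - illustrative numbers only, assuming
checkpoint_completion_target = 0.5 and a distance estimate of 1024MB:

SELECT (2.0 + 0.5) * 1024 * 1.10 AS distance_mb_old,
       (1.0 + 0.5) * 1024 * 1.10 AS distance_mb_new;
-- 2816.0 vs 1689.6: with only one checkpoint retained, the server
-- keeps roughly 40% less WAL around for recycling.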

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Exclude pg_internal.init from base backup

2017-11-07 Thread Simon Riggs
On 5 November 2017 at 11:55, Magnus Hagander <mag...@hagander.net> wrote:
> On Sat, Nov 4, 2017 at 4:04 AM, Michael Paquier <michael.paqu...@gmail.com>
> wrote:
>>
>> On Fri, Nov 3, 2017 at 4:04 PM, Petr Jelinek
>> <petr.jeli...@2ndquadrant.com> wrote:
>> > Not specific problem to this patch, but I wonder if it should be made
>> > more clear that those files (there are couple above of what you added)
>> > are skipped no matter which directory they reside in.
>>
>> Agreed, it is a good idea to tell in the docs how this behaves. We
>> could always change things so as the comparison is based on the full
>> path like what is done for pg_control, but that does not seem worth
>> complicating the code.
>
>
> pg_internal.init can, and does, appear in multiple different directories.
> pg_control is always in the same place. So they're not the same thing.
>
> So +1 for documenting the difference in how these are handled, as this is
> important to know for somebody writing an external tool for it.

Changes made, moving to commit the attached patch.
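
For anyone reading along, a sketch of why basename matching matters here
(the database OID is just an example):

base/16384/pg_internal.init    -- one relcache init file per database
global/pg_internal.init        -- plus one for the shared catalogs

pg_control, by contrast, only ever exists as global/pg_control, so it
can be matched by full path.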

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


pg_basebackup-exclusion-v2.patch
Description: Binary data



Re: [HACKERS] SQL procedures

2017-11-06 Thread Simon Riggs
On 31 October 2017 at 17:23, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)

Looks good

> Everything that follows is intended to align with the SQL standard, at
> least in spirit.

+1

> This first patch does a bunch of preparation work.  It adds the
> CREATE/ALTER/DROP PROCEDURE commands and the CALL statement to call a
> procedure.

I guess it would be really useful to have a cut-down language to use
as an example, but it's probably easier to just wait for PL/pgSQL.
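
For reference, the surface syntax in the patch looks roughly like this -
a sketch only, with a hypothetical table:

CREATE PROCEDURE insert_pair(a integer, b integer)
LANGUAGE SQL
AS $$
INSERT INTO tbl VALUES (a);
INSERT INTO tbl VALUES (b);
$$;

CALL insert_pair(1, 2);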

You mention PARALLEL SAFE is not used for procedures. Isn't it an
architectural restriction that procedures would not be able to execute
in parallel? (At least this year)

> It also adds ROUTINE syntax which can refer to a function or
> procedure.

I think we need an explanatory section of the docs, but there doesn't
seem to be one for Functions, so there is no place to add some text
that says the above.

I found it confusing that ALTER and DROP ROUTINE exist but not CREATE
ROUTINE. At the very least we should say somewhere "there is no CREATE
ROUTINE", so its absence is clearly intentional. I did wonder whether
we should have it as well, but it's just one less thing to review, so
good.

I was surprised that pg_dump didn't use DROP ROUTINE where appropriate.
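
For concreteness, the ROUTINE forms under discussion, reusing the
hypothetical procedure above:

ALTER ROUTINE insert_pair(integer, integer) RENAME TO insert_two;
DROP ROUTINE insert_two(integer, integer);
-- and, intentionally, there is no CREATE ROUTINE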

> I have extended that to include aggregates.  And then there
> is a bunch of leg work, such as psql and pg_dump support.  The
> documentation is a lot of copy-and-paste right now; that can be
> revisited sometime.  The provided procedural languages (an ever more
> confusing term) each needed a small touch-up to handle pg_proc entries
> with prorettype == 0.
>
> Right now, there is no support for returning values from procedures via
> OUT parameters.  That will need some definitional pondering; and see
> also below for a possible alternative.
>
> With this, you can write procedures that are somewhat compatible with
> DB2, MySQL, and to a lesser extent Oracle.
>
> Separately, I will send patches that implement (the beginnings of) two
> separate features on top of this:
>
> - Transaction control in procedure bodies
>
> - Returning multiple result sets

Both of those would be good, though my suggested priority would be
transaction control first and then multiple result sets, if we cannot
have both this release.
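
As a sketch of what transaction control in a procedure body could
enable, assuming the feature lands roughly as proposed (table name
hypothetical):

CREATE PROCEDURE batch_load() LANGUAGE plpgsql AS $$
BEGIN
    FOR i IN 1..10 LOOP
        INSERT INTO big_table VALUES (i);
        COMMIT;  -- end one transaction and begin the next, mid-procedure
    END LOOP;
END $$;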

> (In various previous discussions on "real stored procedures" or
> something like that, most people seemed to have one of these two
> features in mind.  I think that depends on what other SQL systems one
> has worked with previously.)

Almost all of the meat happens in later patches, so no other review comments.

That seems strange in a patch of this size, but it's true.
Procedures are just a new type of object with very little interaction
with replication, persistence or optimization.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-06 Thread Simon Riggs
On 6 November 2017 at 18:35, Peter Geoghegan <p...@bowt.ie> wrote:

>> APPROACH2 (modified from my original proposal slightly)
>
>
> This write-up actually begins to confront the issues that I've raised.
> I'm glad to see this.
>
>> 1. Join...
>> 2. Apply results for UPDATE, if present not visible via the snapshot
>> taken at 1, do EPQ to ensure we locate current live tuple
>> 3. If still not visible, do speculative insertion if we have a unique
>> index available, otherwise ERROR. If spec insertion fails, go to 2
>>
>> The loop created above can live-lock, meaning that an infinite loop
>> could be created.
>
>
> The loop is *guaranteed* to live-lock once you "goto 2". So you might as
> well just throw an error at that point, which is the behavior that I've
> been arguing for all along!
>
> If this isn't guaranteed to live-lock at "goto 2", then it's not clear
> why. The outcome of step 2 is clearly going to be identical if you don't
> acquire a new MVCC snapshot, but you don't address that.
>
> You might have meant "apply an equivalent ON CONFLICT DO UPDATE", or
> something like that, despite the fact that the use of ON CONFLICT DO
> NOTHING was clearly implied by the "goto 2". I also see problems with
> that, but I'll wait for you to clarify what you meant before going into
> what they are.

In step 3 we discover that an entry exists in the index for a committed row.

Since we have a unique index we use it to locate the row we know
exists and UPDATE that.

We don't use a new MVCC snapshot; we do what EPQ does. EPQ already
violates MVCC for UPDATEs, so why does it matter if we do it for
INSERTs also?


Where does the problem hide?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-06 Thread Simon Riggs
On 3 November 2017 at 16:35, Peter Geoghegan <p...@bowt.ie> wrote:
> Simon Riggs <si...@2ndquadrant.com> wrote:
>>>>
>>>> The *only* behavioural difference I have proposed would be the *lack*
>>>> of an ERROR in (some) concurrent cases.
>>>
>>>
>>> I think that's a big difference.  Error vs. non-error is a big deal by
>>> itself;
>>
>>
>> Are you saying avoiding an ERROR is a bad or good thing?
>
>
> Are you really asking Robert to repeat what has already been said about
> a dozen different ways?

I'm asking for clarity of explanation rather than assertions.


> That's *not* the only difference. You need to see a couple of steps
> ahead to see further differences, as the real dilemma comes when you
> have to reconcile having provided the UPSERT-guarantees with cases that
> that doesn't map on to (which can happen in a number of different ways).
>
> I don't understand why you'll talk about just about anything but that.
> This is a high-level concern about the overarching design. Do you really
> not understand the concern at this point?

You're either referring to what is in the docs - which is that INSERT ... ON
CONFLICT violates MVCC in a particular way - or to something as yet
unstated. If it is the former, then I still don't see the problem (see
later). If it is the latter, I need more. So either way I need more.

> Robert Haas said
>In the past, there have been objections to implementations of MERGE
>which would give rise to such serialization anomalies, but I'm not
>sure we should feel bound by those discussions.  One thing that's
>different is that the common and actually-useful case can now be made
>to work in a fairly satisfying way using INSERT .. ON CONFLICT UPDATE;
>if less useful cases are vulnerable to some weirdness, maybe it's OK
>to just document the problems.

I agreed with that, and still do.


We need a clear, explicit description of this situation, so I will
attempt that in detail here.


The basic concurrency problem we have is this

APPROACH1
1. Join to produce results based upon snapshot at start of query
2. Apply results for INSERT, UPDATE or DELETE

Given that there is a time delay between 1 and 2, there is a race
condition: if another user concurrently inserts the same value into a
unique index, an INSERT will fail with a uniqueness violation.

Such failures are of great concern in practice because the time
between 1 and 2 could be very long for large statements, or for
smaller statements we might have sufficiently high concurrency to
allow us to see regular failures.

APPROACH2 (modified from my original proposal slightly)
1. Join...
2. Apply results for UPDATE, if present not visible via the snapshot
taken at 1, do EPQ to ensure we locate current live tuple
3. If still not visible, do speculative insertion if we have a unique
index available, otherwise ERROR. If spec insertion fails, go to 2

The loop created above can live-lock, meaning that an infinite loop
could be created.

In practice, such live-locks are rare and we could detect them by
falling out of the loop after a few tries. Approach2's purpose is to
alleviate errors in Approach1, so falling out of the loop merely takes
us back to the error we would have got if we hadn't tried, so Approach2
has considerable benefit over Approach1. This only applies if we do an
INSERT: if there is a WHEN NOT MATCHED ... AND clause with probability
W that makes the INSERT rare, the probability of error in Approach2
approaches the probability of error in Approach1 as W drops to zero,
but with W high we may avoid many errors. Approach2 never generates
more errors than Approach1.
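
To ground the two approaches - a hedged sketch with hypothetical tables:

-- Session 1:
MERGE INTO target t
USING source s ON s.id = t.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.val);

-- Session 2 commits this between steps 1 and 2 of session 1:
INSERT INTO target VALUES (42, 'concurrent');

-- Approach1: session 1 raises a unique violation on id = 42.
-- Approach2: session 1's speculative insertion fails, so it locates
-- the committed row via the unique index and applies the UPDATE.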

I read that step 3 in Approach2 poses some kind of problem for MVCC
semantics. My understanding is that the SQL Standard allows us to
define what the semantics of the statement are in relation to
concurrency, so any semantic issue can be handled by defining it to
work the way we want. The semantics are:
a) when a unique index is available we avoid errors by using semantics
of INSERT .. ON CONFLICT UPDATE.
b) when a unique index is not available we use other semantics.
To me this is the same as INSERTs failing in the presence of unique
indexes, but not failing when no index is present. The presence of a
unique constraint alters the semantics of the query.
We can choose Approach2 - as Robert says, "[we should not] feel bound
by those [earlier] discussions".

Please explain what is wrong with the above without merely asserting
there is a problem.

As you point out, whichever we choose, we will be bound by those
semantics. So if we take Approach1, as has been indicated currently,
what is the written explanation for that, so we can show that to the
people who ask in the future about our decisions?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-03 Thread Simon Riggs
On 3 November 2017 at 08:26, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Nov 3, 2017 at 1:05 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> Therefore, if MERGE eventually uses INSERT .. ON CONFLICT
>>> UPDATE when a relevant unique index exists and does something else,
>>> such as your proposal of taking a strong lock, or Peter's proposal of
>>> doing this in a concurrency-oblivious manner, in other cases, then
>>> those two cases will behave very differently.
>>
>> The *only* behavioural difference I have proposed would be the *lack*
>> of an ERROR in (some) concurrent cases.
>
> I think that's a big difference.  Error vs. non-error is a big deal by
> itself;

Are you saying avoiding an ERROR is a bad or good thing?

> also, the non-error case involves departing from MVCC
> semantics just as INSERT .. ON CONFLICT UPDATE does.

Meaning what exactly? What situation occurs that a user would be concerned with?

Please describe exactly what you mean so we get it clear.

The concurrent behaviour for MERGE is allowed to be
implementation-specific, so we can define it any way we want.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-03 Thread Simon Riggs
On 3 November 2017 at 07:46, Thomas Kellerer <spam_ea...@gmx.net> wrote:
> PMFJI
>
>> We seem to have a few options for PG11
>>
>> 1. Do nothing, we reject MERGE
>>
>> 2. Implement MERGE for unique index situations only, attempting to
>> avoid errors (Simon OP)
>>
>> 3. Implement MERGE, but without attempting to avoid concurrent ERRORs
>> (Peter)
>>
>> 4. Implement MERGE, while attempting to avoid concurrent ERRORs in
>> cases where that is possible.
>
> From an end-user's point of view I would prefer 3 (or 4 if that won't prevent
> this from going into 11)

Sounds like a reasonable approach.

> INSERT ... ON CONFLICT is great, but there are situations where the
> restrictions can get in the way and it would be nice to have an alternative
> - albeit with some (documented) drawbacks. As far as I know Oracle also
> doesn't guarantee that MERGE is safe for concurrent use - you can still wind
> up with a unique key violation.

Yes, Oracle allows some unique key violations. It's clear that we
would need to allow some also. So we clearly can't infer whether they
avoid some errors just because they allow some.

My approach will be to reduce the errors in the best way, not to try
to copy errors Oracle makes, if any. But that error avoidance can
easily be a later add-on if we prefer it that way.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-03 Thread Simon Riggs
On 2 November 2017 at 22:59, Nico Williams <n...@cryptonector.com> wrote:
> On Thu, Nov 02, 2017 at 03:25:48PM -0700, Peter Geoghegan wrote:
>> Nico Williams <n...@cryptonector.com> wrote:
>> >A MERGE mapped to a DML like this:
>>
>> This is a bad idea. An implementation like this is not at all
>> maintainable.
>
> Assuming the DELETE issue can be addressed, why would this not be
> maintainable?

It would only take one change to make this approach infeasible, and
when that happened we would need to revert to the full-executor
version.

One difference that comes to mind is that MERGE doesn't behave the
same way as an UPDATE-join, according to SQL:2011, in that it must
throw an error if duplicate changes are requested. That would be hard
to emulate using a parser-only version.
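
The duplicate-changes rule can be shown with a source that matches the
same target row twice - a sketch with hypothetical data:

MERGE INTO target t
USING (VALUES (1, 'a'), (1, 'b')) AS s(id, val) ON s.id = t.id
WHEN MATCHED THEN UPDATE SET val = s.val;

-- Two source rows request changes to the same target row, so per
-- SQL:2011 this must raise an error rather than apply either update.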

I would call it impressively clever but likely fragile, in this case,
though I encourage more ideas like that in the future.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-03 Thread Simon Riggs
On 2 November 2017 at 17:06, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Oct 31, 2017 at 5:14 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> I've proposed a SQL Standard compliant implementation that would do
>> much more than be new syntax over what we already have.
>>
>> So these two claims aren't accurate: "radical difference" and "syntax
>> sugar over a capability we have".
>
> I think those claims are pretty accurate.

No, because there is a misunderstanding on some technical points.

The key point is at the end - what do we do next?

> The design of INSERT .. ON CONFLICT UPDATE and the supporting
> machinery only work if there is a unique index.  It provides radically
> different behavior than what you get with any other DML statement,
> with completely novel concurrency behavior, and there is no reasonable
> way to emulate that behavior in cases where no relevant unique index
> exists.

Agreed

> Therefore, if MERGE eventually uses INSERT .. ON CONFLICT
> UPDATE when a relevant unique index exists and does something else,
> such as your proposal of taking a strong lock, or Peter's proposal of
> doing this in a concurrency-oblivious manner, in other cases, then
> those two cases will behave very differently.

The *only* behavioural difference I have proposed would be the *lack*
of an ERROR in (some) concurrent cases.

We have a way to avoid some concurrency errors during MERGE, while
still maintaining exact SQL Standard behaviour. Peter has pointed out
that we could not catch errors in certain cases; that does nothing to
the cases where it will work well. It would be fallacious to imagine
it is an all or nothing situation.

> And if, in the meantime, MERGE can only handle the cases where there
> is a unique index,

I haven't proposed that MERGE would be forever limited to cases with a
unique index, only that MERGE-for-unique-indexes would be the version
in PG11, for good reason.

> then it can only handle the cases INSERT .. ON
> CONFLICT UPDATE can cover, which makes it, as far as I can see,
> syntactic sugar over what we already have.

SQL:2011 is greatly enhanced and this is no longer true; it was true
in earlier versions of the standard but is not now.

MERGE allows you to DELETE, as well as conditionally UPDATE and INSERT.
It is very clearly much more than UPSERT.
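
A sketch, with hypothetical tables, of the kind of statement UPSERT
cannot express:

MERGE INTO stock t
USING changes s ON s.item = t.item
WHEN MATCHED AND t.qty + s.delta = 0 THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET qty = t.qty + s.delta
WHEN NOT MATCHED THEN
  INSERT VALUES (s.item, s.delta);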

> Maybe it's not entirely -
> you might be planning to make some minor functional enhancements - but
> it's not clear what those are, and I feel like whatever it is could be
> done with less work and more elegance by just extending the INSERT ..
> ON CONFLICT UPDATE syntax.

The differences are clearly apparent in the Standard and in my submitted docs.

But I am interested in submitting a SQL Standard compliant feature.

> And it does seem to be your intention to only handle the cases which
> the INSERT .. ON CONFLICT UPDATE infrastructure can cover, because
> upthread you wrote this: "I didn't say it but my intention was to just
> throw an ERROR if no single unique index can be identified."  I don't
> think anybody's putting words into your mouth here.  We're just
> reading what you wrote.

Happy to have written it because that covers the common use case I was
hoping to deliver in PG11. Incremental development.

My proposal was to implement the common case, while avoiding
concurrency errors. Peter proposes that I cover both the common case
and more general cases, without avoiding concurrency errors.


All I have at the moment is that a few people disagree, but that
doesn't help determine the next action.

We seem to have a few options for PG11

1. Do nothing, we reject MERGE

2. Implement MERGE for unique index situations only, attempting to
avoid errors (Simon OP)

3. Implement MERGE, but without attempting to avoid concurrent ERRORs (Peter)

4. Implement MERGE, while attempting to avoid concurrent ERRORs in
cases where that is possible.

Stephen, Robert, please say which option you now believe we should pick.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-02 Thread Simon Riggs
On 2 November 2017 at 19:16, Peter Geoghegan <p...@bowt.ie> wrote:
> Simon Riggs <si...@2ndquadrant.com> wrote:
>>
>> So if I understand you correctly, in your view MERGE should just fail
>> with an ERROR if it runs concurrently with other DML?
>
>
> That's certainly my opinion on the matter. It seems like that might be
> the consensus, too.

Given that I only just found out what you've been talking about, I
don't believe that anybody else did either.

I think people imagined you had worked out how to make MERGE run
concurrently - I certainly did - but in fact you're just saying you
don't believe it ever should.

That is strange since the SQL Standard specifically allows the
implementation to decide upon concurrent behaviour.


> Without meaning to sound glib: we already did make it work for a
> special, restricted case that is important enough to justify introducing
> a couple of kludges -- ON CONFLICT DO UPDATE/upsert.
>
> I do agree that what I propose for MERGE will probably cause confusion;
> just look into Oracle's MERGE implementation for examples of this.  We
> ought to go out of our way to make it clear that MERGE doesn't provide
> these guarantees.

So in your view we should make no attempt to avoid concurrent errors,
even when we have the capability to do so (in some cases) and doing so
would be perfectly compliant with the SQL Standard.

Yes, that certainly will make an easier patch for MERGE.

Or are you arguing against allowing any patch for MERGE?

Now we have more clarity, who else agrees with this?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-02 Thread Simon Riggs
On 1 November 2017 at 18:20, Peter Geoghegan <p...@bowt.ie> wrote:

> In Postgres, you can avoid duplicate violations with MERGE by using a
> higher isolation level (these days, those are turned into a
> serialization error at higher isolation levels when no duplicate is
> visible to the xact's snapshot).

So if I understand you correctly, in your view MERGE should just fail
with an ERROR if it runs concurrently with other DML?

i.e. if there is a race condition because the query runs concurrently
with another session's INSERT

We have no interest in making that work?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Statement-level rollback

2017-11-02 Thread Simon Riggs
On 2 November 2017 at 01:33, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:

> The proposed statement-level rollback feature works in a slightly
> different context.  It does not change when or how a transaction or
> transaction block begins and ends.  It only changes what happens inside
> explicit transaction blocks.

Yes, this is not the same thing as autocommit. There should be no
concerns there.

> The difference is how error recovery works.

Yes

> So this will necessarily be
> tied to how the client code or other surrounding code is structured or
> what the driver or framework is doing in the background to manage
> transactions.  It would also be bad if client code was not prepared for
> this new behavior, and reported the transaction as complete while some
> commands in the middle were omitted.

This new feature allows a simplified development style because earlier
statements don't need to be re-executed, nor do we have to manually
wrap everything in savepoints.

It changes the assumptions of error recovery, so this will break code
already written for PostgreSQL. The purpose is to allow new code to be
written using the easier style.

Compare this with SERIALIZABLE mode - no need for time-consuming
additional coding.

> Drivers can already achieve this behavior and do do that by issuing
> savepoint commands internally.  The point raised in this thread was that
> that creates too much network overhead, so a backend-based solution
> would be preferable.  We haven't seen any numbers or other evidence to
> quantify that claim, so maybe it's worth looking into that some more.
>
> In principle, a backend-based solution that drivers just have to opt
> into would save a lot of duplication.  But the drivers that care or
> require it according to their standards presumably already implement
> this behavior in some other way, so it comes back to whether there is a
> performance or other efficiency gain here.
>
> Another argument was that other SQL implementations have this behavior.
> This appears to be the case.  But as far as I can tell, it is also tied
> to their particular interfaces and the structure and flow control they
> provide.  So a client-side solution like psql already provides or
> something in the various drivers would work just fine here.
>
> So my summary for the moment is that a GUC or similar run-time setting
> might be fine, with appropriate explanation and warnings.  But it's not
> clear whether it's worth it given the existing alternatives.

This is about simplicity for the developer, not so much about performance.

A backend-based solution is required for PL procedures and functions.

We could put this as an option into PL/pgSQL, but it seems like it is
a function of the transaction manager rather than the driver.
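
For comparison, this is the per-statement dance a driver has to emulate
today - a sketch with a hypothetical table:

BEGIN;
SAVEPOINT s1;
INSERT INTO t VALUES (1);
RELEASE SAVEPOINT s1;        -- statement succeeded
SAVEPOINT s2;
INSERT INTO t VALUES (1/0);  -- statement fails ...
ROLLBACK TO SAVEPOINT s2;    -- ... but only that statement is undone
COMMIT;                      -- the first INSERT still commits

The proposed setting would have the server do the equivalent wrapping
implicitly, saving two round trips per statement.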

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-11-01 Thread Simon Riggs
On 31 October 2017 at 18:55, Peter Geoghegan <p...@bowt.ie> wrote:
> On Tue, Oct 31, 2017 at 2:25 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> If there are challenges ahead, it's reasonable to ask for test cases
>> for that now, especially if you think you know what they already are.
>> Imagine we go forwards 2 months - if you dislike my patch when it
>> exists you will submit a test case showing the fault. Why not save us
>> all the trouble and describe that now? Test Driven Development.
>
> I already have, on several occasions now. But if you're absolutely
> insistent on my constructing the test case in terms of a real SQL
> statement, then that's what I'll do.
>
> Consider this MERGE statement, from your mock documentation:
>
> MERGE INTO wines w
> USING wine_stock_changes s
> ON s.winename = w.winename
> WHEN NOT MATCHED AND s.stock_delta > 0 THEN
>   INSERT VALUES(s.winename, s.stock_delta)
> WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
>   UPDATE SET stock = w.stock + s.stock_delta
> ELSE
>   DELETE;
>
> Suppose we remove the WHEN NOT MATCHED case, leaving us with:
>
> MERGE INTO wines w
> USING wine_stock_changes s
> ON s.winename = w.winename
> WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
>   UPDATE SET stock = w.stock + s.stock_delta
> ELSE
>   DELETE;
>
> We now have a MERGE that will not INSERT, but will continue to UPDATE
> and DELETE.

Agreed

> (It's implied that your syntax cannot do this at all,
> because you propose use the ON CONFLICT infrastructure, but I think we
> have to imagine a world in which that restriction either never existed
> or has subsequently been lifted.)
>
> The problem here is: Iff the first statement uses ON CONFLICT
> infrastructure, doesn't the absence of WHEN NOT MATCHED imply
> different semantics for the remaining updates and deletes in the
> second version of the query?

Not according to the SQL Standard, no. I have no plans for such
differences to exist.

The spec says: if we hit HeapTupleSelfUpdated, we throw an ERROR.

> You've removed what seems like a neat
> adjunct to the MERGE, but it actually changes everything else too when
> using READ COMMITTED. Isn't that pretty surprising?

I think you're presuming things I haven't said and don't mean, so
we're both surprised.

> If you're not
> clear on what I mean, see my previous remarks on EPQ, live lock, and
> what a CONFLICT could be in READ COMMITTED mode. Concurrent activity
> at READ COMMITTED mode can be expected to significantly alter the
> outcome here.

And I still have questions about what exactly you mean, but at least
this post is going in the right direction and I'm encouraged. Thank
you.

I think we need some way of expressing the problems clearly.

> That is rather frustrating.

Guess so.

>> You've said its possible another way. Show that assertion is actually
>> true. We're all listening, me especially, for the technical details.
>
> My proposal, if you want to call it that, has the merit of actually
> being how MERGE works in every other system. Both Robert and Stephen
> seem to be almost sold on what I suggest, so I think that I've
> probably already explained my position quite well.

The only info I have is

"a general purpose solution is one that more or
less works like an UPDATE FROM, with an outer join, whose ModifyTable
node is capable of insert, update, or delete (and accepts quals for
MATCHED and NOT matched cases, etc). You could still get duplicate
violations due to concurrent activity in READ COMMITTED mode".

Surely the whole point of this is to avoid duplicate violations due to
concurrent activity?

I'm not seeing how either design sketch rigorously avoids live locks,
but those are fairly unlikely and easy to detect and abort.

Thank you for a constructive email; we are on the way to somewhere good.

I have more to add, but wanted to get back to you soonish.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] SQL procedures

2017-10-31 Thread Simon Riggs
On 31 October 2017 at 18:23, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:

> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)

I guess that the DO command might have a variant to allow you to
execute a procedure that isn't stored?

Not suggesting you implement that, just thinking about why/when the
"stored" word would be appropriate.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-10-31 Thread Simon Riggs
On 30 October 2017 at 18:58, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 30 October 2017 at 15:22, Simon Riggs <si...@2ndquadrant.com> wrote:
>
>>> You forgot to update this formula in xlog.c:
>>> distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;
>>> /* add 10% for good measure. */
>>> distance *= 1.10;
>>> You can guess easily where the mistake is.
>>
>> Doh - fixed that before posting, so I must have sent the wrong patch.
>>
>> It's the 10%, right? ;-)
>
> So, there are 2 locations that mention 2.0 in xlog.c
>
> I had already fixed one, which is why I remembered editing it.
>
> The other one you mention has a detailed comment above it explaining
> why it should be 2.0 rather than 1.0, so I left it.
>
> Let me know if you still think it should be removed? I can see the
> argument both ways.
>
> Or maybe we need another patch to account for manual checkpoints.

Revised patch to implement this

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


remove_secondary_checkpoint.v5.patch
Description: Binary data



Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-31 Thread Simon Riggs
On 31 October 2017 at 12:56, Stephen Frost <sfr...@snowman.net> wrote:
> Simon,
>
> * Simon Riggs (si...@2ndquadrant.com) wrote:
>> On 30 October 2017 at 19:55, Stephen Frost <sfr...@snowman.net> wrote:
>> > I don't think MERGE should be radically different from other database
>> > systems and just syntax sugar over a capability we have.
>>
>> I've proposed a SQL Standard compliant implementation that would do
>> much more than be new syntax over what we already have.
>>
>> So these two claims aren't accurate: "radical difference" and "syntax
>> sugar over a capability we have".
>
> Based on the discussion so far, those are the conclusions I've come to.
> Saying they aren't accurate without providing anything further isn't
> likely to be helpful.

I'm trying to be clear, accurate and non-confrontational.

I appreciate those are your conclusions, which is why I need to be
clear that the proposal has been misunderstood on those stated points.

What else would you like me to add to be helpful? I will try...

I've spent weeks looking at other implementations and going through
the SQL Standard with a fine-tooth comb to see what is possible and
what is not. My proposal shows a new way forward and yet follows the
standard in horrible detail. It has taken me nearly 2 months to write
the doc page submitted here. It is not simply new syntax stretched over
existing capability. The full syntax of MERGE is quite powerful and
will take some months to make work, even with my proposal.

I'm not sure what else to say, but I am happy to answer specific
technical questions that have full context to allow a rational
discussion.

>> > Time changes
>> > many things, but I don't think anything's changed in this from the prior
>> > discussions about it.
>>
>> My proposal is new, that is what has changed.
>
> I'm happy to admit that I've missed something in the discussion, but
> from what I read the difference appeared to be primairly that you're
> proposing it, and doing so a couple years later.
>
>> At this stage, general opinions can be misleading.
>
> I'd certainly love to see a MERGE capability that meets the standard and
> works in much the same way from a user's perspective as the other RDBMS'
> which already implement it.

The standard explains how it should work, with the proviso that the
standard allows us to define concurrent behavior. Serge has explained
that he sees that my proposal covers the main use cases. I don't yet
see how to cover all of them, but I'm looking to do the most important
use cases first and other things later.

> From prior discussions with Peter on
> exactly that subject, I'm also convinced that having that would be a
> largely independent piece of work from the INSERT .. ON CONFLICT
> capability that we have (just as MERGE is distinct from similar UPSERT
> capabilities in other RDBMSs).
>
> The goal here is really just to avoid time wasted developing MERGE based
> on top of the INSERT .. ON CONFLICT system only to have it be rejected
> later because multiple other committers and major contributors have said
> that they don't agree with that approach.  If, given all of this
> discussion, you don't feel that's a high risk with your approach then by
> all means continue on with what you're thinking and we can review the
> patch once it's posted.

It is certainly a risk and I don't want to waste time either. I am
happy to consider any complete proposal for how to proceed, including
details and/or code, but I will be prioritising things to ensure that
we have a candidate feature for PG11 from at least one author, rather
than a research project for PG12+.

The key point here is concurrency. I have said that there is an
intermediate step that is useful and achievable, but does not cover
every case. Serge has also stated this; it should be noted that he is
the only person in this discussion who has already implemented the
MERGE statement, for DB2, so I give more weight to his technical
opinion than I do to others at this time.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-31 Thread Simon Riggs
On 30 October 2017 at 19:55, Stephen Frost <sfr...@snowman.net> wrote:
> * Robert Haas (robertmh...@gmail.com) wrote:
>> On Sun, Oct 29, 2017 at 1:19 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> > Nothing I am proposing blocks later work.
>>
>> That's not really true.  Nobody's going to be happy if MERGE has one
>> behavior in one set of cases and an astonishingly different behavior
>> in another set of cases.  If you adopt a behavior for certain cases
>> that can't be extended to other cases, then you're blocking a
>> general-purpose MERGE.
>>
>> And, indeed, it seems that you're proposing an implementation that
>> adds no new functionality, just syntax compatibility.  Do we really
>> want or need two syntaxes for the same thing in core?  I kinda think
>> Peter might have the right idea here.  Under his proposal, we'd be
>> getting something that is, in a way, new.
>
> +1.
>
> I don't think MERGE should be radically different from other database
> systems and just syntax sugar over a capability we have.

I've proposed a SQL Standard compliant implementation that would do
much more than be new syntax over what we already have.

So these two claims aren't accurate: "radical difference" and "syntax
sugar over a capability we have".

> Time changes
> many things, but I don't think anything's changed in this from the prior
> discussions about it.

My proposal is new; that is what has changed.

At this stage, general opinions can be misleading.

Hi ho, hi ho.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-31 Thread Simon Riggs
On 30 October 2017 at 19:17, Peter Geoghegan <p...@bowt.ie> wrote:
> On Mon, Oct 30, 2017 at 11:07 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> Please explain in detail the MERGE SQL statements that you think will
>> be problematic and why.
>
> Your proposal is totally incomplete, so I can only surmise its
> behavior in certain cases, to make a guess at what the problems might
> be (see my remarks on EPQ, live lock, etc). This makes it impossible
> to do what you ask right now.

Impossible, huh. Henry Ford was right.

If there are challenges ahead, it's reasonable to ask for test cases
for that now, especially if you think you know what they already are.
Imagine we go forwards 2 months - if you dislike my patch when it
exists, you will submit a test case showing the fault. Why not save us
all the trouble and describe that now? Test Driven Development.

> Besides, you haven't answered the question from my last e-mail
> ("What's wrong with that [set of MERGE semantics]?"), so why should I
> go to further trouble? You're just not constructively engaging with me
> at this point.

It's difficult to discuss anything with someone who refuses to
believe that there are acceptable ways around things. I believe there
are.

If you can calm down the rhetoric we can work together, but if you
continue to grandstand it makes it more difficult.


> We're going around in circles.

Not really. You've said some things and I'm waiting for further
details about the problems you've raised.

You've said its possible another way. Show that assertion is actually
true. We're all listening, me especially, for the technical details.

I've got a fair amount of work to do to get the rest of the patch in
shape, so take your time and make it a complete explanation.

My only goal is the MERGE feature in PG11. For me this is a
collaborative engineering challenge, not a debate and definitely not an
argument. If you describe your plan of how to do this, I may be able
to follow it and include that design. If you don't, then it will be
difficult for me to include your thoughts. If you or others wish to
write something as well, I have already said that is OK too.

If anybody's goal is to block development or to wait for perfection to
exist at some unstated time in the future, then I disagree with those
thoughts.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-30 Thread Simon Riggs
On 30 October 2017 at 18:59, Peter Geoghegan <p...@bowt.ie> wrote:

> It is most emphatically *not* just a "small matter of programming".

Please explain in detail the MERGE SQL statements that you think will
be problematic and why.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-10-30 Thread Simon Riggs
On 30 October 2017 at 15:22, Simon Riggs <si...@2ndquadrant.com> wrote:

>> You forgot to update this formula in xlog.c:
>> distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;
>> /* add 10% for good measure. */
>> distance *= 1.10;
>> You can guess easily where the mistake is.
>
> Doh - fixed that before posting, so I must have sent the wrong patch.
>
> It's the 10%, right? ;-)

So, there are 2 locations that mention 2.0 in xlog.c

I had already fixed one, which is why I remembered editing it.

The other one you mention has a detailed comment above it explaining
why it should be 2.0 rather than 1.0, so I left it.

Let me know if you still think it should be removed? I can see the
argument both ways.

Or maybe we need another patch to account for manual checkpoints.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-10-30 Thread Simon Riggs
On 30 October 2017 at 15:31, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Mon, Oct 30, 2017 at 2:22 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> On 25 October 2017 at 00:17, Michael Paquier <michael.paqu...@gmail.com> 
>> wrote:
>>> -* Delete old log files (those no longer needed even for previous
>>> -* checkpoint or the standbys in XLOG streaming).
>>> +* Delete old log files and recycle them
>>>  */
>>> Here that's more "Delete or recycle old log files". Recycling of a WAL
>>> segment is its renaming into a newer segment.
>>
>> Sometimes files are deleted if there are too many.
>
> Sure, but one segment is never deleted and then recycled, which is
> what your new comment implies as I understand it.

I guess it depends how you read it.

The function performs both deletion AND recycling, yet any one file is
only ever deleted OR recycled.


>>> -   checkPointLoc = ControlFile->prevCheckPoint;
>>> +   /*
>>> +* It appears to be a bug that we used to use
>>> prevCheckpoint here
>>> +*/
>>> +   checkPointLoc = ControlFile->checkPoint;
>>> Er, no. This is correct because we expect the prior checkpoint to
>>> still be present in the event of a failure detecting the latest
>>> checkpoint.
>>
>> I'm removing "prevCheckPoint", so not sure what you mean.
>
> I mean that there is no actual bug here. And changing the code as you
> do is correct, but the comment is not.

Thanks

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-10-30 Thread Simon Riggs
On 25 October 2017 at 00:17, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Mon, Oct 23, 2017 at 11:20 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> Remove the code that maintained two checkpoint's WAL files and all
>> associated stuff.
>>
>> Try to avoid breaking anything else
>>
>> This
>> * reduces disk space requirements on master
>> * removes a minor bug in fast failover
>> * simplifies code
>
> In short, yeah! I had a close look at the patch and noticed a couple of 
> issues.

Thanks for the detailed review

> +else
>  /*
> - * The last valid checkpoint record required for a streaming
> - * recovery exists in neither standby nor the primary.
> + * We used to attempt to go back to a secondary checkpoint
> + * record here, but only when not in standby_mode. We now
> + * just fail if we can't read the last checkpoint because
> + * this allows us to simplify processing around checkpoints.
>   */
>  ereport(PANIC,
>  (errmsg("could not locate a valid checkpoint record")));
> -}
> Using brackets in this case is more elegant style IMO. OK, there is
> one line, but the comment is long so the block becomes
> confusingly-shaped.

OK

>  /* Version identifier for this pg_control format */
> -#define PG_CONTROL_VERSION    1003
> +#define PG_CONTROL_VERSION    1100
> Yes, this had to happen anyway in this release cycle.
>
> backup.sgml describes the following:
> to higher segment numbers.  It's assumed that segment files whose
> contents precede the checkpoint-before-last are no longer of
> interest and can be recycled.
> However this is not true anymore with this patch.

Thanks, will fix.

> The documentation of pg_control_checkpoint() is not updated. func.sgml
> lists the tuple fields returned for this function.

Ah, OK.

> You forgot to update pg_control_checkpoint in pg_proc.h. prior_lsn is
> still listed in the list of arguments returned. And you need to update
> as well the output list of types.
>
>   * whichChkpt identifies the checkpoint (merely for reporting purposes).
> - * 1 for "primary", 2 for "secondary", 0 for "other" (backup_label)
> + * 1 for "primary", 0 for "other" (backup_label)
> + * 2 for "secondary" is no longer supported
>   */
> I would suggest to just remove the reference to "secondary".

Yeh, that was there for review.

> -* Delete old log files (those no longer needed even for previous
> -* checkpoint or the standbys in XLOG streaming).
> +* Delete old log files and recycle them
>  */
> Here that's more "Delete or recycle old log files". Recycling of a WAL
> segment is its renaming into a newer segment.

Sometimes files are deleted if there are too many.

> You forgot to update this formula in xlog.c:
> distance = (2.0 + CheckPointCompletionTarget) * CheckPointDistanceEstimate;
> /* add 10% for good measure. */
> distance *= 1.10;
> You can guess easily where the mistake is.

Doh - fixed that before posting, so I must have sent the wrong patch.

It's the 10%, right? ;-)

> -   checkPointLoc = ControlFile->prevCheckPoint;
> +   /*
> +* It appears to be a bug that we used to use
> prevCheckpoint here
> +*/
> +   checkPointLoc = ControlFile->checkPoint;
> Er, no. This is correct because we expect the prior checkpoint to
> still be present in the event of a failure detecting the latest
> checkpoint.

I'm removing "prevCheckPoint", so not sure what you mean.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-30 Thread Simon Riggs
On 30 October 2017 at 09:44, Robert Haas <robertmh...@gmail.com> wrote:
> On Sun, Oct 29, 2017 at 1:19 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> Nothing I am proposing blocks later work.
>
> That's not really true.  Nobody's going to be happy if MERGE has one
> behavior in one set of cases and an astonishingly different behavior
> in another set of cases.  If you adopt a behavior for certain cases
> that can't be extended to other cases, then you're blocking a
> general-purpose MERGE.

If a general-purpose solution exists, please explain what it is. The
problem is that nobody has ever done so, so it's not like we are
discussing the difference between bad solution X and good solution Y;
we are comparing reasonable solution X with non-existent solution Y.

> And, indeed, it seems that you're proposing an implementation that
> adds no new functionality, just syntax compatibility.  Do we really
> want or need two syntaxes for the same thing in core?  I kinda think
> Peter might have the right idea here.  Under his proposal, we'd be
> getting something that is, in a way, new.

Partitioning looked like "just new syntax", but it has been a useful
new feature.

MERGE provides new capabilities that we do not have and is much more
powerful than INSERT/UPDATE, in a syntax that follows what other
databases use today. Just like partitioning.

Will what I propose do everything in the first release? No, just like
partitioning.

If other developers are able to do things in phases, then I claim that
right also.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-30 Thread Simon Riggs
On 29 October 2017 at 21:25, Peter Geoghegan <p...@bowt.ie> wrote:

> The semantics that I suggest (the SQL standard's semantics) will
> require less code, and will be far simpler. Right now, I simply don't
> understand why you're insisting on using ON CONFLICT without even
> saying why. I can only surmise that you think that doing so will
> simplify the implementation, but I can guarantee you that it won't.

If you see problems in my proposal, please show the specific MERGE SQL
statements that you think will cause problems, and explain what the
failures will be and how they arise.

We can then use those test cases to drive developments. If we end up
with code for multiple approaches we will be able to evaluate the
differences between proposals using the test cases produced.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-29 Thread Simon Riggs
On 28 October 2017 at 22:04, Peter Geoghegan <p...@bowt.ie> wrote:
> On Sat, Oct 28, 2017 at 12:49 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> Nothing I am proposing blocks later work.
>
> Actually, many things will block future work if you go down that road.
> You didn't respond to the specific points I raised, but that doesn't
> mean that they're not real.
>
>> Everything you say makes it clear that a fully generalized solution is
>> going to be many years in the making, assuming we agree.
>
> I think that it's formally impossible as long as you preserve the ON
> CONFLICT guarantees, unless you somehow define the problems out of
> existence. Those are guarantees which no other MERGE implementation
> has ever made, and which the SQL standard says nothing about. And for
> good reasons.
>
>> "The extent to which an SQL-implementation may disallow independent
>> changes that are not significant is implementation-defined”.
>>
>> So we get to choose. I recommend that we choose something practical.
>> We're approaching the 10 year anniversary of my first serious attempt
>> to do MERGE. I say that its time to move forwards with useful
>> solutions, rather than wait another 10 years for the perfect one, even
>> assuming it exists.
>
> As far as I'm concerned, you're the one arguing for an unobtainable
> solution over a good one, not me. I *don't* think you should solve the
> problems that I raise -- you should instead implement MERGE without
> any of the ON CONFLICT guarantees, just like everyone else has.
> Building MERGE on top of the ON CONFLICT guarantees, and ultimately
> arriving at something that is comparable to other implementations over
> many releases might be okay if anyone had the slightest idea of what
> that would look like. You haven't even _described the semantics_,
> which you could do by addressing the specific points that I raised.

I have no objection to you writing a better version than mine, and if
my work inspires you to complete that in a reasonable timescale then we
all win. I'm also very happy to work together on writing the version
described - you have already done much work in this area.

I don't see any major problems in points you've raised so far, though
obviously there is much detail.

All of this needs to be written and then committed, so I'll get on and
write my proposal. We can then see whether that is an 80% solution or
something less. There are more obvious barriers to completion at this
point, like time and getting on with it.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-28 Thread Simon Riggs
On 28 October 2017 at 20:39, Peter Geoghegan <p...@bowt.ie> wrote:
> On Sat, Oct 28, 2017 at 3:10 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> SQL:2011 specifically states "The extent to which an
>> SQL-implementation may disallow independent changes that are not
>> significant is implementation-defined”, so in my reading the above
>> behaviour would make us fully spec compliant. Thank you to Peter for
>> providing the infrastructure on which this is now possible for PG11.
>>
>> Serge puts this very nicely by identifying two different use cases for MERGE.
>
> MERGE benefits from having a join that is more or less implemented in
> the same way as any other join. It can be a merge join, hash join, or
> nestloop join. ON CONFLICT doesn't work using a join.
>
> Should I to take it that you won't be supporting any of these
> alternative join algorithms? If not, then you'll have something that
> really isn't comparable to MERGE as implemented in Oracle, SQL Server,
> or DB2. They *all* do this.
>
> Would the user be able to omit WHEN NOT MATCHED/INSERT, as is the case
> with every existing MERGE implementation? If so, what actually happens
> under the hood when WHEN NOT MATCHED is omitted? For example, would
> you actually use a regular "UPDATE FROM" style join, as opposed to the
> ON CONFLICT infrastructure? And, if that is so, can you justify the
> semantic difference for rows that are updated in each scenario
> (omitted vs. not omitted) in READ COMMITTED mode? Note that this could
> be the difference between updating a row when *no* version is visible
> to our MVCC snapshot, as opposed to doing the EPQ stuff and updating
> the latest row version if possible. That's a huge, surprising
> difference. On top of all this, you risk live-lock if INSERT isn't a
> possible outcome (this is also why ON CONFLICT can never accept a
> predicate on its INSERT portion -- again, quite unlike MERGE).
>
> Why not just follow what other systems do? It's actually easier to go
> that way, and you get a better outcome. ON CONFLICT involves what you
> could call a sleight of hand, and I fear that you don't appreciate
> just how specialized the internal infrastructure is.
>
>> Now, I accept that you might also want a MERGE statement that
>> continues to work even if there is no unique constraint, but it would
>> need to have different properties to the above. I do not in any way
>> argue against adding that.
>
> Maybe you *should* be arguing against it, though, and arguing against
> ever supporting anything but equijoins, because these things will
> *become* impossible if you go down that road. By starting with the ON
> CONFLICT infrastructure, while framing no-unique-index-support as work
> for some unspecified future release, you're leaving it up to someone
> else to resolve the problems. Someone else must square the circle of
> mixing ON CONFLICT semantics with fully generalized MERGE semantics.
> But who?

Nothing I am proposing blocks later work.

Everything you say makes it clear that a fully generalized solution is
going to be many years in the making, assuming we agree.

"The extent to which an SQL-implementation may disallow independent
changes that are not significant is implementation-defined”.

So we get to choose. I recommend that we choose something practical.
We're approaching the 10 year anniversary of my first serious attempt
to do MERGE. I say that its time to move forwards with useful
solutions, rather than wait another 10 years for the perfect one, even
assuming it exists.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-28 Thread Simon Riggs
On 28 October 2017 at 00:31, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Fri, Oct 27, 2017 at 9:00 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Fri, Oct 27, 2017 at 4:41 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> I didn't say it but my intention was to just throw an ERROR if no
>>> single unique index can be identified.
>>>
>>> It could be possible to still run MERGE in that situaton but we would
>>> need to take a full table lock at ShareRowExclusive. It's quite likely
>>> that such statements would throw duplicate update errors, so I
>>> wouldn't be aiming to do anything with that for PG11.
>>
>> Like Peter, I think taking such a strong lock for a DML statement
>> doesn't sound like a very desirable way forward.  It means, for
>> example, that you can only have one MERGE in progress on a table at
>> the same time, which is quite limiting.  It could easily be the case
>> that you have multiple MERGE statements running at once but they touch
>> disjoint groups of rows and therefore everything works.  I think the
>> code should be able to cope with concurrent changes, if nothing else
>> by throwing an ERROR, and then if the user wants to ensure that
>> doesn't happen by taking ShareRowExclusiveLock they can do that via an
>> explicit LOCK TABLE statement -- or else they can prevent concurrency
>> by any other means they see fit.
>
> +1, I would suspect users to run this query in parallel of the same
> table for multiple data sets.
>
> Peter has taken some time to explain me a bit his arguments today, and
> I agree that it does not sound much appealing to have constraint
> limitations for MERGE. Particularly using the existing ON CONFLICT
> structure gets the feeling of having twice a grammar for what's
> basically the same feature, with pretty much the same restrictions.
>
> By the way, this page sums up nicely the situation about many
> implementations of UPSERT taken for all systems:
> https://en.wikipedia.org/wiki/Merge_(SQL)

That Wikipedia article is badly out of date and regrettably no longer
sums up the current situation nicely, since MERGE's definition has
changed between its introduction in SQL:2003 and SQL:2011.

I'm proposing a MERGE statement for PG11 that
i) takes a RowExclusiveLock on rows, so it can be run concurrently, and
ii) uses the ON CONFLICT infrastructure to do that,
and so requires a unique constraint.

The above is useful behaviour that will be of great benefit to
PostgreSQL users. There are no anomalies remaining.
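
To make that concrete, here is a sketch of the kind of statement this
proposal targets; the table and column names are hypothetical, and it
assumes a unique constraint on customer_id in the target table:

    -- needs a unique constraint on customer_account.customer_id
    MERGE INTO customer_account ca
    USING recent_transactions t
    ON t.customer_id = ca.customer_id
    WHEN MATCHED THEN
      UPDATE SET balance = balance + t.transaction_value
    WHEN NOT MATCHED THEN
      INSERT (customer_id, balance)
      VALUES (t.customer_id, t.transaction_value);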

SQL:2011 specifically states "The extent to which an
SQL-implementation may disallow independent changes that are not
significant is implementation-defined", so in my reading the above
behaviour would make us fully spec compliant. Thank you to Peter for
providing the infrastructure on which this is now possible for PG11.

Serge puts this very nicely by identifying two different use cases for MERGE.

Now, I accept that you might also want a MERGE statement that
continues to work even if there is no unique constraint, but it would
need to have different properties to the above. I do not in any way
argue against adding that. I also agree that adding RETURNING at a
later stage would be fine as well. I am proposing that those and any
other additional properties people come up with can be added in later
releases once we have the main functionality in core in PG11.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MERGE SQL Statement for PG11

2017-10-27 Thread Simon Riggs
On 27 October 2017 at 15:24, Robert Haas <robertmh...@gmail.com> wrote:
> On Fri, Oct 27, 2017 at 10:55 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> Questions?
>
> I think one of the reasons why Peter Geoghegan decided to pursue
> INSERT .. ON CONFLICT UPDATE was that, because it is non-standard SQL
> syntax, he felt free to mandate a non-standard SQL requirement, namely
> the presence of a unique index on the arbiter columns.  If MERGE's
> join clause happens to involve equality conditions on precisely the
> same set of columns as some unique index on the target table, then I
> think you can reuse the INSERT .. ON CONFLICT UPDATE infrastructure
> and I suspect there won't be too many problems.

Agreed

> However, if it
> doesn't, then what?  You could decree that such cases will fail, but
> that might not meet your use case for developing the feature.  Or you
> could try to soldier on without the INSERT .. ON CONFLICT UPDATE
> machinery, but that means, I think, that sometimes you will get
> serialization anomalies - at least, I think, you will sometimes obtain
> results that couldn't have been obtained under any serial order of
> execution, and maybe it would end up being possible to fail with
> serialization errors or unique index violations.
>
> In the past, there have been objections to implementations of MERGE
> which would give rise to such serialization anomalies, but I'm not
> sure we should feel bound by those discussions.  One thing that's
> different is that the common and actually-useful case can now be made
> to work in a fairly satisfying way using INSERT .. ON CONFLICT UPDATE;
> if less useful cases are vulnerable to some weirdness, maybe it's OK
> to just document the problems.

Good points.

I didn't say it, but my intention was to just throw an ERROR if no
single unique index can be identified.

It could be possible to still run MERGE in that situation, but we would
need to take a full table lock at ShareRowExclusive. It's quite likely
that such statements would throw duplicate update errors, so I
wouldn't be aiming to do anything with that for PG11.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] MERGE SQL Statement for PG11

2017-10-27 Thread Simon Riggs
I'm working on re-submitting MERGE for PG11

Earlier thoughts on how this could/could not be done were sometimes
imprecise or inaccurate, so I have gone through the command per
SQL:2011 spec and produced a definitive spec in the form of an SGML
ref page. This is what I intend to deliver for PG11.

MERGE will use the same mechanisms as INSERT ON CONFLICT, so
concurrent behavior does not require further infrastructure changes,
just detailed work on the statement itself.

I'm building up the code from scratch based upon the spec, rather than
trying to thwack earlier attempts into shape. This looks more likely
to yield a committable patch.

Major spanners or objections, please throw them in now cos I don't see any.

Questions?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Title: MERGE

MERGE — insert, update, or delete rows of a table based upon source data

Synopsis

MERGE INTO target_table_name [ [ AS ] target_alias ]
USING data_source
ON join_condition
when_clause [...]

where data_source is

{ source_table_name |
  ( source_query )
}
[ [ AS ] source_alias ]

where when_clause is

{ WHEN MATCHED [ AND condition ] THEN { merge_update | merge_delete } |
  WHEN NOT MATCHED [ AND condition ] THEN { merge_insert | DO NOTHING }
}

where merge_update is

UPDATE SET { column_name = { expression | DEFAULT } |
             ( column_name [, ...] ) = ( { expression | DEFAULT } [, ...] )
           } [, ...]

and merge_insert is

INSERT [( column_name [, ...] )]
[ OVERRIDING { SYSTEM | USER } VALUE ]
{ VALUES ( { expression | DEFAULT } [, ...] ) | DEFAULT VALUES }

and merge_delete is

DELETE

Description

MERGE performs actions that modify rows in the target_table_name, using
the data_source. MERGE provides a single SQL statement that can
conditionally INSERT, UPDATE or DELETE rows, a task that would otherwise
require multiple procedural language statements.

First, the MERGE command performs a left outer join from data_source to
target_table_name producing zero or more candidate change rows. For each
candidate change row, WHEN clauses are evaluated in the order specified;
if one of them is activated, the specified action occurs. No more than
one WHEN clause can be activated for any candidate change row.

MERGE actions have the same effect as regular UPDATE, INSERT, or DELETE
commands of the same names. The syntax of those commands is different,
notably that there is no WHERE clause and no table name is specified.
All actions refer to the target_table_name, though modifications to
other tables may be made using triggers.

The ON clause must join on all of the column(s) of the primary key, or
if other columns are specified then another unique index on the
target_table_name. Each candidate change row is locked via speculative
insertion into the chosen unique index, ensuring that the status of
MATCHED or NOT MATCHED is decided just once for each candidate change
row, preventing interference from concurrent transactions. As a result
of these requirements MERGE cannot yet work against partitioned tables.

There is no MERGE privilege. You must have the UPDATE privilege on the
column(s) of the target_table_name referred to in the SET clause if you
specify an update action, the INSERT privilege on the
target_table_name if you specify an insert action, and/or the DELETE
privilege on the target_table_name if you specify a delete action.
Privileges are tested once at statement start and are checked whether or
not particular WHEN clauses are activated during the subsequent
execution. You will require the SELECT privilege on the data_source and
any column(s) of the target_table_name referred to in a condition.
Parameters

target_table_name

    The name (optionally schema-qualified) of the target table to merge into.

target_alias

    A substitute name for the target table. When an alias is provided, it
    completely hides the actual name of the table. For example, given
    MERGE foo AS f, the remainder of the MERGE statement must refer to
    this table as f not foo.

source_table_name

    The name (optionally schema-qualified) of the source table, view or
    transition table.

source_query

    A query (SELECT statement or VALUES statement) that supplies the rows
    to be merged into the target_table_name. Refer to the SELECT statement
    or VALUES statement for a description of the syntax.

source_alias

    A substitute name for the data source. When an alias is provided, it
    completely hides whether table or query was specified.

join_condition

    join_condition is an expression resulting in a value of type boolean
    (similar to a WHERE clause) that specifies which 

Re: [HACKERS] WIP: BRIN bloom indexes

2017-10-27 Thread Simon Riggs
On 27 October 2017 at 07:20, Robert Haas <robertmh...@gmail.com> wrote:
> On Thu, Oct 19, 2017 at 10:15 PM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>> Let's see a query like this:
>>
>> select * from bloom_test
>>  where id = '8db1d4a6-31a6-e9a2-4e2c-0e842e1f1772';
>>
>> The minmax index produces this plan
>>
>>Heap Blocks: lossy=2061856
>>  Execution time: 22707.891 ms
>>
>> Now, the bloom index:
>>
>>Heap Blocks: lossy=25984
>>  Execution time: 338.946 ms
>
> It's neat to see BRIN being extended like this.  Possibly we could
> consider making it a contrib module rather than including it in core,
> although I don't have strong feelings about it.

I see that SP-GIST includes two operator classes in core, one default.

Makes sense to do the same thing with BRIN and add this new op class
as a non-default option in core.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove secondary checkpoint

2017-10-24 Thread Simon Riggs
On 24 October 2017 at 09:50, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> Remove the code that maintained two checkpoint's WAL files and all
>> associated stuff.
>
>> Try to avoid breaking anything else
>
>> This
>> * reduces disk space requirements on master
>> * removes a minor bug in fast failover
>> * simplifies code
>
> Doesn't it also make crash recovery less robust?  The whole point
> of that mechanism is to be able to cope if the latest checkpoint
> record is unreadable.  If you want to toss that overboard, I think
> you need to make the case why we don't need it, not just post a
> patch removing it.  *Of course* the code is simpler without it.
> That's utterly irrelevant.  The code would be even simpler with
> no crash recovery at all ... but we're not going there.

Well, the mechanism has already been partially removed since we don't
maintain two checkpoints on a standby. So all I'm proposing is we
remove the other half.

I've not seen a case myself, nor can I find an example online, where
the primary checkpoint failed yet the secondary did not also fail from
the same cause.

If it really is a possibility, then now that we have pg_waldump we can
easily search for a different checkpoint and start from there instead,
which is a more flexible approach. If you didn't save your WAL and
don't have any other form of backup, relying on the secondary
checkpoint is not exactly a safe bet.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Remove secondary checkpoint

2017-10-24 Thread Simon Riggs
Remove the code that maintained two checkpoints' WAL files and all
associated stuff.

Try to avoid breaking anything else.

This:
* reduces disk space requirements on master
* removes a minor bug in fast failover
* simplifies code

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


remove_secondary_checkpoint.v4.patch
Description: Binary data



Re: [HACKERS] Fix a typo in execReplication.c

2017-10-13 Thread Simon Riggs
On 13 October 2017 at 09:13, Masahiko Sawada <sawada.m...@gmail.com> wrote:
> On Thu, Oct 12, 2017 at 11:30 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Thu, Oct 12, 2017 at 6:55 AM, Petr Jelinek
>> <petr.jeli...@2ndquadrant.com> wrote:
>>> Thanks for the patch, looks correct to me.
>>
>> Committed and back-patched to v10.

Well spotted, both of you!

Shows that reading code and correcting comments is a useful activity.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Predicate Locks for writes?

2017-10-11 Thread Simon Riggs
On 11 October 2017 at 15:33, Robert Haas <robertmh...@gmail.com> wrote:
> On Sat, Oct 7, 2017 at 7:26 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> PredicateLockTuple() specifically avoids adding an SIRead lock if the
>> tuple already has a write lock on it, so surely it must also be
>> correct to skip the SIRead lock if we are just about to update the
>> row?
>
> I wonder if there's a race condition.  Can someone else read/update
> the tuple between the time when we would've taken the SIRead lock and
> the time when we would have gotten the actual tuple lock?

On 9 October 2017 at 13:23, Alexander Korotkov
<a.korot...@postgrespro.ru> wrote:

>> PredicateLockTuple() specifically avoids adding an SIRead lock if the
>> tuple already has a write lock on it, so surely it must also be
>> correct to skip the SIRead lock if we are just about to update the
>> row?
>>
>> I am suggesting that we ignore PredicateLockTuple() for cases where we
>> are about to update or delete the found tuple.
>
>
> If my thoughts above are correct, than it would be a reasonable
> optimization.  We would skip cycle of getting SIReadLock on tuple and then
> releasing it, without any change of behavior.

I'm inclined to believe Robert's hypothesis that there is some race
condition there.

The question is whether that still constitutes a serializability
problem since we haven't done anything with the data, just passed it
upwards to be modified.

If not, we can just skip the SIRead lock.

If it is an issue, that still leaves the optimization for the case where
we choose to locate the row using an exclusive lock and just skip the
SIRead lock altogether.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Predicate Locks for writes?

2017-10-07 Thread Simon Riggs
SERIALIZABLE looks for chains of rw-conflicts.

When we perform UPDATEs and DELETEs we search for rows and then modify
them. The current implementation views that as a read followed by a
write because we issue PredicateLockTuple() during the index fetch.

Is it correct that a statement that only changes data will add
predicate locks for the rows that it modifies?

PredicateLockTuple() specifically avoids adding an SIRead lock if the
tuple already has a write lock on it, so surely it must also be
correct to skip the SIRead lock if we are just about to update the
row?

I am suggesting that we ignore PredicateLockTuple() for cases where we
are about to update or delete the found tuple.
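
As a minimal sketch of the pattern in question (table and values are
hypothetical), the index fetch inside the UPDATE below is where
PredicateLockTuple() currently fires, before the tuple is write-locked:

    CREATE TABLE account (id int PRIMARY KEY, balance numeric);

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- the fetch via the primary key index takes an SIRead lock today,
    -- even though the row is immediately locked for update anyway
    UPDATE account SET balance = balance - 100 WHERE id = 1;
    COMMIT;

The suggestion is that the SIRead lock in this path may be redundant,
given the write lock that immediately follows.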

ISTM that a Before Row Trigger would need to add an SIRead lock since
that is clearly a read.

Thoughts?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] postgres_fdw super user checks

2017-10-05 Thread Simon Riggs
On 4 October 2017 at 18:13, Jeff Janes <jeff.ja...@gmail.com> wrote:
> On Thu, Sep 14, 2017 at 1:08 PM, Robert Haas <robertmh...@gmail.com> wrote:
>>
>> On Thu, Sep 14, 2017 at 2:33 PM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>> > I think that foreign tables ought to behave as views do, where they run
>> > as
>> > the owner rather than the invoker.  No one has talked me out of it, but
>> > no
>> > one has supported me on it either.  But I think it is too late to change
>> > that now.
>>
>> That's an interesting point.  I think that you can imagine use cases
>> for either method.  Obviously, if what you want to do is drill a hole
>> through the Internet to another server and then expose it to some of
>> your fellow users, having the FDW run with the owner's permissions
>> (and credentials) is exactly right.  But there's another use case too,
>> which is where you have something that looks like a multi-user
>> sharding cluster.  You want each person's own credentials to carry
>> over to everything they do remotely.
>
>
> OK.  And if you want the first one, you can wrap it in a view currently, but
> if it were changed I don't know what you would do if you want the 2nd one
> (other than having every user create their own set of foreign tables).  So I
> guess the current situation is more flexible.

Sounds like it would be a useful option on a Foreign Server to allow
it to run queries as either the invoker or the owner. We have that
choice for functions, so we already have the concept and syntax
available. We could also have a default at the FDW level that specifies
the behaviour for that type of FDW; if that is not specified, we keep
the current behaviour.
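
For example (a hypothetical syntax sketch only; no such option exists
today, and the option name is invented for illustration):

    CREATE SERVER shard1
      FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'shard1.example.com', dbname 'app',
               run_as 'invoker');   -- or 'owner'

With 'owner' a foreign table would behave like a view, using the owner's
user mapping; with 'invoker' each user's own credentials would carry
over, as postgres_fdw does now.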

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Surjective functional indexes

2017-09-27 Thread Simon Riggs
On 15 September 2017 at 16:34, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:

> Attached please find yet another version of the patch.

Thanks. I'm reviewing it.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] The case for removing replacement selection sort

2017-09-27 Thread Simon Riggs
On 14 July 2017 at 23:20, Peter Geoghegan <p...@bowt.ie> wrote:

> I think we should remove the replacement_sort_tuples GUC, and kill
> replacement selection entirely. There is no need to do this for
> Postgres 10. I don't feel very strongly about it. It just doesn't make
> sense to continue to support replacement selection.

Forgive me if I missed the explanation, but how will we handle bounded sorts?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH : Generational memory allocator (was PATCH: two slab-like memory allocators)

2017-09-24 Thread Simon Riggs
On 24 September 2017 at 21:32, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:

> Attached is an updated version of the patch, tweaking the comments.

That looks good, thanks. Marking Ready for Committer to give notice
before commit.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Built-in plugin for logical decoding output

2017-09-24 Thread Simon Riggs
On 24 September 2017 at 15:15, Craig Ringer <cr...@2ndquadrant.com> wrote:
> On 23 September 2017 at 06:28, Gregory Brail <gregbr...@google.com> wrote:
>
>>
>> Would the community support the development of another plugin that is
>> distributed as part of "contrib" that addresses these issues?
>
>
> Petr Jelinek and I tried just that with pglogical. Our submission was
> knocked back with the complaint that there was no in-core user of the code,
> and it couldn't be evaluated usefully without an in-core consumer/receiver.
>
> It's possible we'd make more progress if we tried again now, since we could
> probably write a test suite using the TAP test framework and a small
> src/test/modules consumer. But now we'd probably instead get blocked with
> the complaint that the output plugin used for logical replication should be
> sufficient for any reasonable need. I anticipate that we'd have some
> disagreements about what a reasonable need is, but ... *shrug*.
>
> I personally think we _should_ have such a thing, and that it should be
> separate to the logical replication plugin to allow us to evolve that
> without worrying about out of core dependencies etc.

We plan to submit the next evolution of the code in 2018, in time for
the early cycle of PG12.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module

2017-09-19 Thread Simon Riggs
On 19 September 2017 at 15:22, Andrew Dunstan
<andrew.duns...@2ndquadrant.com> wrote:
>
>
> On 09/19/2017 08:35 AM, Andrew Dunstan wrote:
>> Add citext_pattern_ops for citext contrib module
>>
>> This is similar to text_pattern_ops.
>>
>
> This seems to have upset a number or animals in the buildfarm.
>
> I could create a third output file, but I am seriously questioning the
> point of all this, if we are prepared to accept any result from these
> functions and operators, depending on locale. Maybe it would be better
> simply to test that the result is not null and have done with it. Thoughts?

It makes sense to have fully detailed output for at least one
parameter setting.

How about using the new psql \if feature to skip tests if the locale
is different from the one for which we have detailed test results?
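
Something along these lines, perhaps (a sketch, assuming psql 10's \if
support, a C-locale-only detailed output file, and the new citext
pattern operators from the commit under discussion):

    SELECT current_setting('lc_collate') = 'C' AS locale_is_c \gset
    \if :locale_is_c
        -- locale-sensitive comparisons with fully detailed expected output
        SELECT 'B'::citext ~<~ 'a'::citext;
    \endif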

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.

2017-09-18 Thread Simon Riggs
On 16 September 2017 at 21:27, Andres Freund <and...@anarazel.de> wrote:
> On 2017-09-16 15:59:01 -0400, Tom Lane wrote:

>> This does not seem like a problem that justifies a system-wide change
>> that's much more delicate than you thought.

+1

The change/reason ratio is too high.


-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg_control_recovery() return value when not in recovery

2017-09-18 Thread Simon Riggs
On 18 September 2017 at 05:50, Andres Freund <and...@anarazel.de> wrote:
> Hi,
>
> Just noticed that we're returning the underlying values for
> pg_control_recovery() without any checks:
> postgres[14388][1]=# SELECT * FROM pg_control_recovery();
> ┌──────────────────────┬───────────────────────────┬──────────────────┬────────────────┬───────────────────────────────┐
> │ min_recovery_end_lsn │ min_recovery_end_timeline │ backup_start_lsn │ backup_end_lsn │ end_of_backup_record_required │
> ├──────────────────────┼───────────────────────────┼──────────────────┼────────────────┼───────────────────────────────┤
> │ 0/0                  │                         0 │ 0/0              │ 0/0            │ f                             │
> └──────────────────────┴───────────────────────────┴──────────────────┴────────────────┴───────────────────────────────┘
> (1 row)

Yes, it would have made sense for these to be NULL.


> postgres[14388][1]=# SELECT pg_is_in_recovery();
> ┌───────────────────┐
> │ pg_is_in_recovery │
> ├───────────────────┤
> │ f                 │
> └───────────────────┘
> (1 row)

But not this, since it is a boolean and the answer is known.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Process startup infrastructure is a mess

2017-09-14 Thread Simon Riggs
On 14 September 2017 at 22:44, Andres Freund <and...@anarazel.de> wrote:

> The way we currently start and initialize individual postgres (sub-)
> processes is pretty complicated and duplicative.  I've a couple
> complaints:
...
> I think we should seriously consider doing a larger refactoring of this
> soon.  I've some ideas about what to do, but I'd welcome some thoughts
> on whether others consider this a serious problem or not, and what they
> think we should do about this, first.

Refactoring without a purpose is a negative for me. It takes time,
introduces bugs, and the greater code churn over time introduces
more bugs because fewer people have seen the code. That is arguable,
but when we compare the priority of that against things people want
and need there is little contest in my mind.

If we add something to an area then it's a good time to refactor it,
since we were going to get bugs anyway.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Surjective functional indexes

2017-09-14 Thread Simon Riggs
On 14 September 2017 at 16:37, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:
>
>
> On 14.09.2017 13:19, Simon Riggs wrote:

>> This works by looking at overall stats, and only looks at the overall
>> HOT %, so its too heavyweight and coarse.
>>
>> I suggested storing stat info on the relcache and was expecting you
>> would look at how often the expression evaluates to new == old. If we
>> evaluate new against old many times, then if the success rate is low
>> we should stop attempting the comparison. (<10%?)
>>
>> Another idea:
>> If we don't make a check when we should have done then we will get a
>> non-HOT update, so we waste time extra time difference between a HOT
>> and non-HOT update. If we check and fail we waste time take to perform
>> check. So the question is how expensive the check is against how
>> expensive a non-HOT update is. Could we simply say we don't bother to
>> check functions that have a cost higher than 1? So if the user
>> doesn't want to perform the check they can just increase the cost of
>> the function above the check threshold?
>>
> Attached pleased find one more patch which calculates hot update check hit
> rate more precisely: I have to extended PgStat_StatTabEntry with two new
> fields:
> hot_update_hits and hot_update_misses.

It's not going to work, as already mentioned above. Those stats are at
table level and have very little to do with this particular index.

But you've not commented on the design I mentioned that can work: stats
on the index's relcache entry.

> Concerning your idea to check cost of index function: it certainly makes
> sense.
> The only problems: I do not understand now how to calculate this cost.
> It can be easily calculated by optimizer when it is building query execution
> plan.
> But inside BuildIndexInfo I have just reference to Relation and have no idea
> how
> I can propagate here information about index expression cost from optimizer.

We could copy it at CREATE INDEX time, if we took that route. Or we
could look up the cost for the index expression and cache it.


Anyway, this is just jumping around because we still have a parameter
and the idea was to remove the parameter entirely by autotuning, which
I think is both useful and possible, just as HOT itself is autotuned.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH : Generational memory allocator (was PATCH: two slab-like memory allocators)

2017-09-14 Thread Simon Riggs
On 14 August 2017 at 01:35, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
> Hi,
>
> Attached is a rebased version of the Generational context, originally
> submitted with SlabContext (which was already committed into Pg 10).
>
> The main change is that I've abandoned the pattern of defining a Data
> structure and then a pointer typedef, i.e.
>
> typedef struct GenerationContextData { ... } GenerationContextData;
> typedef struct GenerationContextData *GenerationContext;
>
> Now it's just
>
> typedef struct GenerationContext { ... } GenerationContext;
>
> mostly because SlabContext was committed like that, and because Andres was
> complaining about this code pattern ;-)
>
> Otherwise the design is the same as repeatedly discussed before.
>
> To show that this is still valuable change (even after SlabContext and
> adding doubly-linked list to AllocSet), I've repeated the test done by
> Andres in [1] using the test case described in [2], that is
>
>   -- generate data
>   SELECT COUNT(*) FROM (SELECT test1()
>   FROM generate_series(1, 5)) foo;
>
>   -- benchmark (measure time and VmPeak)
>   SELECT COUNT(*) FROM (SELECT *
>   FROM pg_logical_slot_get_changes('test', NULL,
> NULL, 'include-xids', '0')) foo;
>
> with different values passed to the first step (instead of the 5). The
> VmPeak numbers look like this:
>
>  N   masterpatched
> --
> 10   1155220 kB  361604 kB
> 20   2020668 kB  434060 kB
> 30   2890236 kB  502452 kB
> 40   3751592 kB  570816 kB
> 50   4621124 kB  639168 kB
>
> and the timing (on assert-enabled build):
>
>  N   masterpatched
> --
> 10  1103.182 ms 412.734 ms
> 20  2216.711 ms 820.438 ms
> 30  3320.095 ms1223.576 ms
> 40  4584.919 ms1621.261 ms
> 50  5590.444 ms2113.820 ms
>
> So it seems it's still a significant improvement, both in terms of memory
> usage and timing. Admittedly, this is a single test, so ideas of other
> useful test cases are welcome.

This all looks good.

What I think this needs is changes to
   src/backend/utils/mmgr/README
which describe the various options that now exist (normal?, slab) and
will exist (generational).

Don't really care about the name, as long as it's clear when to use it
and when not to use it.

This description of generational seems wrong: "When the allocated
chunks have similar lifespan, this works very well and is extremely
cheap."
They don't need the same lifespan; they just need to be freed in groups,
in the order they were allocated.

For this patch specifically, we need additional comments in
reorderbuffer.c to describe the memory allocation pattern in that
module so that it is clear that the choice of allocator is useful and
appropriate, possibly with details of how that testing was performed,
so it can be re-tested later or tested on a variety of platforms.

Particularly in reorderbuffer, surely we will almost immediately reuse
chunks again, so is it worth issuing free() and then malloc() again so
soon after? Does that cause additional overhead we could also avoid?
Could we possibly keep the last few freed chunks around rather than
re-malloc?

Seems like we should commit this soon.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Surjective functional indexes

2017-09-14 Thread Simon Riggs
On 14 September 2017 at 10:42, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:
>
>
> On 13.09.2017 14:00, Simon Riggs wrote:
>>
>> On 13 September 2017 at 11:30, Konstantin Knizhnik
>> <k.knizh...@postgrespro.ru> wrote:
>>
>>> The only reason of all this discussion about terms is that I need to
>>> choose
>>> name for correspondent index option.
>>> Simon think that we do not need this option at all. In this case we
>>> should
>>> not worry about right term.
>>>  From my point of view, "projection" is quite clear notion and not only
>>> for
>>> mathematics. It is also widely used in IT and especially in DBMSes.
>>
>> If we do have an option it won't be using fancy mathematical
>> terminology at all, it would be described in terms of its function,
>> e.g. recheck_on_update
>>
>> Yes, I'd rather not have an option at all, just some simple code with
>> useful effect, like we have in many other places.
>>
> Attached please find new version of projection functional index optimization
> patch.
> I have implemented very simple autotune strategy: now I use table statistic
> to compare total number of updates with number of hot updates.
> If fraction of hot updates is relatively small, then there is no sense to
> spend time performing extra evaluation of index expression and comparing its
> old and new values.
> Right now the formula is the following:
>
> #define MIN_UPDATES_THRESHOLD 10
> #define HOT_RATIO_THRESHOLD   2
>
> if (stat->tuples_updated > MIN_UPDATES_THRESHOLD
> && stat->tuples_updated >
> stat->tuples_hot_updated*HOT_RATIO_THRESHOLD)
> {
> /* If percent of hot updates is small, then disable projection
> index function
>  * optimization to eliminate overhead of extra index expression
> evaluations.
>  */
> ii->ii_Projection = false;
> }
>
> This threshold values are pulled out of a hat: I am not sure if this
> heuristic is right.
> I will be please to get feedback if such approach to autotune is promising.

Hmm, not really, but thanks for trying.

This works by looking at overall stats, and only looks at the overall
HOT %, so it's too heavyweight and coarse.

I suggested storing stat info on the relcache and was expecting you
would look at how often the expression evaluates to new == old. If we
evaluate new against old many times, then if the success rate is low
we should stop attempting the comparison. (<10%?)

Another idea:
If we don't make a check when we should have done, then we get a
non-HOT update, so we waste the extra time difference between a HOT
and a non-HOT update. If we check and fail, we waste the time taken to
perform the check. So the question is how expensive the check is
against how expensive a non-HOT update is. Could we simply say we
don't bother to check functions that have a cost higher than 1? So
if the user doesn't want to perform the check they can just increase
the cost of the function above the check threshold?
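
Under that scheme a user could opt out per expression simply by raising
the function's declared cost (the function name here is hypothetical):

    -- disable the new == old recheck for this index expression
    ALTER FUNCTION expensive_normalize(text) COST 10000;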

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Surjective functional indexes

2017-09-13 Thread Simon Riggs
On 13 September 2017 at 11:30, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:

> The only reason of all this discussion about terms is that I need to choose
> name for correspondent index option.
> Simon think that we do not need this option at all. In this case we should
> not worry about right term.
> From my point of view, "projection" is quite clear notion and not only for
> mathematics. It is also widely used in IT and especially in DBMSes.

If we do have an option it won't be using fancy mathematical
terminology at all; it would be described in terms of its function,
e.g. recheck_on_update

Yes, I'd rather not have an option at all, just some simple code with
useful effect, like we have in many other places.
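
If an option does survive, presumably it would be an index reloption
along these lines (a sketch only; the option name and the example
expression are illustrative, not settled):

    CREATE INDEX ON doc ((payload->>'category'))
      WITH (recheck_on_update = on);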

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] generated columns

2017-09-13 Thread Simon Riggs
On 13 September 2017 at 09:09, Andreas Karlsson <andr...@proxel.se> wrote:
> On 09/13/2017 04:04 AM, Simon Riggs wrote:
>>
>> On 31 August 2017 at 05:16, Peter Eisentraut
>> <peter.eisentr...@2ndquadrant.com> wrote:
>>>
>>> - index support (and related constraint support)
>>
>>
>> Presumably you can't index a VIRTUAL column. Or at least I don't think
>> its worth spending time trying to make it work.
>
>
> I think end users would be surprised if one can index STORED columns and
> expressions but not VIRTUAL columns. So unless it is a huge project I would
> say it is worth it.

It must be stored in the index, certainly. I guess indexing a virtual
column is then similar to an expression index.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Race between SELECT and ALTER TABLE NO INHERIT

2017-09-12 Thread Simon Riggs
On 26 June 2017 at 10:16, Amit Langote <langote_amit...@lab.ntt.co.jp> wrote:

> BTW, in the partitioned table case, the parent is always locked first
> using an AccessExclusiveLock.  There are other considerations in that case
> such as needing to recreate the partition descriptor upon termination of
> inheritance (both the DETACH PARTITION and also DROP TABLE child cases).

Is this requirement documented or in comments anywhere?

I can't see anything about that, which is a fairly major usage point.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] generated columns

2017-09-12 Thread Simon Riggs
On 31 August 2017 at 05:16, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> Here is another attempt to implement generated columns.  This is a
> well-known SQL-standard feature, also available for instance in DB2,
> MySQL, Oracle.  A quick example:
>
>   CREATE TABLE t1 (
> ...,
> height_cm numeric,
> height_in numeric GENERATED ALWAYS AS (height_cm * 2.54)
>   );

Cool

> - pg_dump produces a warning about a dependency loop when dumping these.
>  Will need to be fixed at some point, but it doesn't prevent anything
> from working right now.
>
> Open design issues:
>
> - COPY behavior: Currently, generated columns are automatically omitted
> if there is no column list, and prohibited if specified explicitly.
> When stored generated columns are implemented, they could be copied out.
>  Some user options might be possible here.

If the values are generated immutably there would be no value in
including them in a dump. If you did dump them then they couldn't be
reloaded without error, so again, no point in dumping them.

COPY (SELECT...) already allows you options to include or exclude any
columns you wish, so I don't see the need for special handling here.
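
For instance, using the t1 example above, a dump that skips the
generated column is already expressible today (a sketch):

    COPY (SELECT height_cm FROM t1) TO STDOUT;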

IMHO, COPY TO would exclude generated columns of either kind, ensuring
that the reload would just work.

> - Catalog storage: I store the generation expression in pg_attrdef, like
> a default.  For the most part, this works well.  It is not clear,
> however, what pg_attribute.atthasdef should say.  Half the code thinks
> that atthasdef means "there is something in pg_attrdef", the other half
> thinks "column has a DEFAULT expression".  Currently, I'm going with the
> former interpretation, because that is wired in quite deeply and things
> start to crash if you violate it, but then code that wants to know
> whether a column has a traditional DEFAULT expression needs to check
> atthasdef && !attgenerated or something like that.
>
> Missing/future functionality:
>
> - STORED variant

For me, this option would be the main feature. Presumably if STORED
then we wouldn't need the functions to be immutable, making it easier
to have columns like last_update_timestamp or last_update_username
etc.

I think an option to decide whether the default is STORED or VIRTUAL
would be useful.

> - various ALTER TABLE variants

Adding a column with GENERATED STORED would always be a full table rewrite.
Hmm, I wonder if it's worth having a mixed mode: stored for new rows,
only virtual for existing rows; that way we could add GENERATED
columns easily.

> - index support (and related constraint support)

Presumably you can't index a VIRTUAL column. Or at least I don't think
it's worth spending time trying to make it work.

> These can be added later once the basics are nailed down.

I imagine that if a column is generated then it is not possible to
have column level INSERT | UPDATE | DELETE privs on it. The generation
happens automatically as part of the write action if stored, or not
until select for virtual. It should be possible to have column level
SELECT privs.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Surjective functional indexes

2017-09-12 Thread Simon Riggs
On 1 September 2017 at 09:47, Konstantin Knizhnik
<k.knizh...@postgrespro.ru> wrote:
>
> On 01.09.2017 09:25, Simon Riggs wrote:
>>
>> On 1 September 2017 at 05:40, Thomas Munro
>> <thomas.mu...@enterprisedb.com> wrote:
>>>
>>> On Fri, Jun 9, 2017 at 8:08 PM, Konstantin Knizhnik
>>> <k.knizh...@postgrespro.ru> wrote:
>>>>
>>>> Attached please find rebased version of the patch.
>>>> Now "projection" attribute is used instead of surjective/injective.
>>>
>>> Hi Konstantin,
>>>
>>> This still applies but it doesn't compile after commits 2cd70845 and
>>> c6293249.  You need to change this:
>>>
>>>Form_pg_attribute att = RelationGetDescr(indexDesc)->attrs[i];
>>>
>>> ... to this:
>>>
>>>Form_pg_attribute att = TupleDescAttr(RelationGetDescr(indexDesc),
>>> i);
>>>
>>> Thanks!
>>
>> Does the patch work fully with that change? If so, I will review.
>>
> Attached please find rebased version of the patch.
> Yes, I checked that it works after this fix.
> Thank you in advance for review.

Thanks for the patch. Overall it looks sound and I consider that we are
working towards committing this.

The idea is that we default "projection = on", and can turn it off in
case the test is expensive. Why bother to have the option? (No docs at
all then!) Why not just evaluate the test and autotune whether to make
the test again in the future? That way we can avoid having an option
completely. I am imagining collecting values on the relcache entry for
the index.

To implement autotuning we would need to instrument the execution. We
could then display the collected value via EXPLAIN, so we could just
then use EXPLAIN in your tests rather than implementing a special
debug mode just for testing. We could also pass that information thru
to stats as well.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] MAIN, Uncompressed?

2017-09-12 Thread Simon Riggs
On 29 August 2017 at 07:58, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 26 August 2017 at 05:40, Mark Kirkwood <mark.kirkw...@catalyst.net.nz> 
> wrote:
>> On 26/08/17 12:18, Simon Riggs wrote:
>>
>>> On 25 August 2017 at 20:53, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>>>
>>>> Greg Stark <st...@mit.edu> writes:
>>>>>
>>>>> I think this is a particularly old piece of code and we're lucky the
>>>>> default heuristics have served well for all this time because I doubt
>>>>> many people fiddle with these storage attributes. The time may have
>>>>> come to come up with a better UI for the storage attributes because
>>>>> people are doing new things (like json) and wanting more control over
>>>>> this heuristic.
>>>>
>>>> Yeah, I could get behind a basic rethinking of the tuptoaster control
>>>> knobs.  I'm just not in love with Simon's specific proposal, especially
>>>> not if we can't think of a better name for it than "MAINU".
>>>
>>> Extended/External would be just fine if you could set the toast
>>> target, so I think a better suggestion would be to make "toast_target"
>>> a per-attribute option .
>>>
>>
>> +1, have thought about this myself previouslythank you for bringing it
>> up!
>
> OK, so table-level option for "toast_tuple_target", not attribute-level option
>
> The attached patch and test shows this concept is useful and doesn't
> affect existing data.
>
> For 4x 4000 byte rows:
> * by default we use 1 heap block and 3 toast blocks
> * toast_tuple_target=4080 uses 2 heap blocks and 0 toast blocks


New patch, v2, since one line in the docs failed to apply because of
recent changes.
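
For reference, usage per the patch would presumably look like this (the
table definition is hypothetical):

    CREATE TABLE blobs (payload text)
      WITH (toast_tuple_target = 4080);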

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


toast_tuple_target.v2.patch
Description: Binary data



Re: [HACKERS] [PROPOSAL] Temporal query processing with range types

2017-09-12 Thread Simon Riggs
On 30 March 2017 at 13:11, Peter Moser <pitiz...@gmail.com> wrote:
> 2017-03-01 10:56 GMT+01:00 Peter Moser <pitiz...@gmail.com>:
>> A similar walkthrough for ALIGN will follow soon.
>>
>> We are thankful for any suggestion or ideas, to be used to write a
>> good SGML documentation.
>
> The attached README explains the ALIGN operation step-by-step with a
> TEMPORAL LEFT OUTER JOIN example. That is, we start from a query
> input, show how we rewrite it during parser stage, and show how the
> final execution generates result tuples.

Thanks for this contribution. I know what it takes to do significant
contributions and I know it can be frustrating when things languish
for months.

I am starting to look at temporal queries myself, so I will take an interest.

PostgreSQL tries really very hard to implement the SQL Standard and
just the standard. ISTM that the feedback you should have been given
is that this is very interesting but will not be committed in its
current form; I am surprised to see nobody has said that, though you
can see the truth of that since nobody is actively looking to review
or commit this. Obviously if the standard were changed to support
these things we'd suddenly be interested...

What I think I'm lacking is a clear statement of why we need to have
new syntax to solve the problem and why the problem itself is
important.

PostgreSQL supports the ability to produce Set Returning Functions and
various other extensions. Would it be possible to change this so that
we don't add new syntax to the parser but rather we do this as a set
of functions?

An alternative might be for us to implement a pluggable parser, so
that you can have an "alternate syntax" plugin. If we did that, you
can then maintain the plugin outside of core until the time when SQL
Standard is updated and we can implement directly. We already support
the ability to invent new plan nodes, so this could be considered as
part of the plugin.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan

2017-09-08 Thread Simon Riggs
On 6 September 2017 at 07:43, Robert Haas <robertmh...@gmail.com> wrote:

> LET custom_plan_tries = 0 IN SELECT ...

Tom has pointed me at this proposal, since on another thread I asked
for something very similar. (No need to reprise that discussion, but I
wanted prepared queries to be able to do SET work_mem = X; SELECT).
This idea looks a good way forward to me.

Since we're all in roughly the same place, I'd like to propose that we
proceed with the following syntax... whether or not this precisely
solves OP's issue on this thread.

1. Allow SET to set multiple parameters...
SET guc1 = x, guc2 = y
This looks fairly straightforward

2. Allow a SET to apply only for a single statement
SET guc1 = x, guc2 = y FOR stmt
e.g. SET max_parallel_workers = 4 FOR SELECT count(*) FROM bigtable
Internally, a mechanism for single-use GUC settings already exists, via
GUC_ACTION_SAVE, so we just need to invoke it.
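
Putting the two together, the result would look something like this
(proposed syntax only, nothing committed; bigtable is a placeholder):

  SET work_mem = '256MB', max_parallel_workers = 4
  FOR SELECT count(*) FROM bigtable;
  -- both settings revert automatically when the SELECT completes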

A quick prototype seems like it will deliver quite quickly. I couldn't
see a reason to use "LET" rather than just "SET", which would be the
POLA choice.

Thoughts?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-07 Thread Simon Riggs
On 7 September 2017 at 11:31, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> I would like to relax the restriction to allow this specific use case...
>>   SET work_mem = X; SET max_parallel_workers = 4; SELECT ...
>> so we still have only one command (the last select), yet we have
>> multiple GUC settings beforehand.
>
> On what basis do you claim that's only one command?  It would return
> multiple CommandCompletes, for starters, so that it breaks the protocol
> just as effectively as any other loosening.
>
> Moreover, I imagine the semantics you really want is that the SETs only
> apply for the duration of the command.  This wouldn't provide that
> result either.

> Haas' idea of some kind of syntactic extension, like "LET guc1 = x,
> guc2 = y FOR statement" seems more feasible to me.  I'm not necessarily
> wedded to that particular syntax, but I think it has to look like
> a single-statement construct of some kind.

Always happy to use a good idea... (any better way to re-locate that
discussion?)

1. Allow SET to set multiple parameters...
SET guc1 = x, guc2 = y
This looks fairly straightforward

2. Allow SET to work only for a single command...
SET guc1 = x, guc2 = y FOR query
Don't see anything too bad about that...
Requires a new GUC mode for "statement local" rather than "transaction local"
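
For comparison, the closest existing behaviour is transaction-local,
via SET LOCAL (real, current syntax; bigtable is a placeholder name):

  BEGIN;
  SET LOCAL work_mem = '256MB';
  SELECT count(*) FROM bigtable;
  COMMIT;

The new mode would scope the setting to a single statement instead of
the enclosing transaction.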

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-07 Thread Simon Riggs
On 7 September 2017 at 11:24, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> On 5 September 2017 at 10:22, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> Does anyone want to do further review on this patch?  If so, I'll
>>> set the CF entry back to "Needs Review".
>
>> OK, I'll review Michael's patch (and confirm my patch is dead)
>
> Not hearing anything, I already pushed my patch an hour or three ago.

Yes, I saw. Are you saying that the doc commit is all we need? ISTM we
still had an actual bug.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-07 Thread Simon Riggs
On 7 September 2017 at 11:07, Tom Lane <t...@sss.pgh.pa.us> wrote:
> I wrote:
>> Yeah, it seems like we have now made this behavior official enough that
>> it's time to document it better.  My thought is to create a new subsection
>> in the FE/BE Protocol chapter that explains how multi-statement Query
>> messages are handled, and then to link to that from appropriate places
>> elsewhere.  If anyone thinks the reference section would be better put
>> somewhere else than Protocol, please say where.
>
> I've pushed up an attempt at this:
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b976499480bdbab6d69a11e47991febe53865adc
>
> Feel free to suggest improvements.

Not so much an improvement as a follow-on thought:

All of this applies to simple queries.

At present we restrict multi-statement requests in the extended
protocol, saying that we don't allow them because of a protocol
restriction. The precise restriction is that we can't return more than
one reply. The restriction is implemented via this test:
  if (list_length(parsetree_list) > 1)
      ereport(ERROR,
              (errcode(ERRCODE_SYNTAX_ERROR),
               errmsg("cannot insert multiple commands into a prepared statement")));
at line 1277 of exec_parse_message()
which is actually more restrictive than it needs to be.

I would like to relax the restriction to allow this specific use case...
  SET work_mem = X; SET max_parallel_workers = 4; SELECT ...
so we still have only one command (the last select), yet we have
multiple GUC settings beforehand.

Any reason to disallow that?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-07 Thread Simon Riggs
On 5 September 2017 at 10:22, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Michael Paquier <michael.paqu...@gmail.com> writes:
>> On Mon, Sep 4, 2017 at 11:15 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> I don't want to go there, and was thinking we should expand the new
>>> comment in DefineSavepoint to explain why not.
>
>> Okay.
>
> Does anyone want to do further review on this patch?  If so, I'll
> set the CF entry back to "Needs Review".

OK, I'll review Michael's patch (and confirm my patch is dead)

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER INDEX .. SET STATISTICS ... behaviour

2017-09-06 Thread Simon Riggs
On 6 September 2017 at 10:27, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> Other than that, I'm good to commit.
>> Any last minute objections?
>
> The docs and comments could stand a bit closer copy-editing by a
> native English speaker.  Other than that, it seemed sane in a
> quick read-through.

Will do

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER INDEX .. SET STATISTICS ... behaviour

2017-09-06 Thread Simon Riggs
On 4 September 2017 at 10:30, Adrien Nayrat <adrien.nay...@dalibo.com> wrote:
> On 09/04/2017 06:16 PM, Alexander Korotkov wrote:
>> Looks good for me.  I've integrated those changes in the patch.
>> New revision is attached.
>
> Thanks, I changed status to "ready for commiter".

This looks useful and good to me.

I've changed this line of code to return NULL rather than return tuple:
  if (!HeapTupleIsValid(tuple))
      return tuple;

Other than that, I'm good to commit.

Any last minute objections?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Add --no-comments to skip COMMENTs with pg_dump

2017-09-06 Thread Simon Riggs
On 1 September 2017 at 22:08, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Sat, Sep 2, 2017 at 1:53 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Mon, Aug 21, 2017 at 5:30 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> Thinking ahead, are we going to add a new --no-objecttype switch every
>>> time someone wants it?
>>
>> I'd personally be fine with --no-whatever for any whatever that might
>> be a subsidiary property of database objects.  We've got
>> --no-security-labels, --no-tablespaces, --no-owner, and
>> --no-privileges already, so what's wrong with --no-comments?
>>
>> (We've also got --no-publications; I think it's arguable whether that
>> is the same kind of thing.)
>
> And --no-subscriptions in the same bucket.

Yes, it is. I was suggesting that we remove those as well.

But back to the main point, which is that --no-comments discards ALL
comments simply to exclude one pointless and annoying comment. That
runs counter to our stance that we do not allow silent data loss. I
want to solve the problem too. I accept that not everyone uses
comments, but if they do, spilling them all on the floor is a
user-visible slip-up that we should not be encouraging. Sorry y'all.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Add psql variables showing server version and psql version.

2017-09-06 Thread Simon Riggs
On 6 September 2017 at 06:38, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> Based upon input from Tom and Fabien, I propose this additional doc patch.
>
> I do not think any of this is appropriate, particularly not the reference
> to 7.0.3.

OK, no problem.

SERVER_VERSION_NUM is a great new feature. I think these points need
further changes:

* An example of the intended use of SERVER_VERSION_NUM in psql
* Clarification that this will work for current AND past server versions
* Clarification to avoid confusion between VERSION and SERVER_VERSION

Thanks

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Add psql variables showing server version and psql version.

2017-09-06 Thread Simon Riggs
On 5 September 2017 at 11:58, Fabien COELHO <coe...@cri.ensmp.fr> wrote:
>
> Hello Simon,
>
>> Does raise the further question of how psql behaves when we connect to
>> a pre-10 server, so we have SERVER_VERSION_NUM but yet it is not set.
>> How does this
>> \if SERVER_VERSION_NUM < x
>
>
> The if does not have expressions (yet), it just handles TRUE/ON/1 and
> FALSE/0/OFF.
>
>> Do we need some macro or suggested special handling?
>
>
> If "SERVER_VERSION_NUM" is available in 10, then:
>
>   -- exit if version < 10 (\if is ignored and \q is executed)
>   \if false \echo "prior 10" \q \endif
>
>   -- then test version through a server side expression, will work
>   SELECT :SERVER_VERSION_NUM < 11 AS prior_11 \gset
>   \if :prior_11
> -- version 10
>   \else
>     -- version 11 or more
>   \endif


Based upon input from Tom and Fabien, I propose this additional doc patch.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


clarify_psql_server_version.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] <> join selectivity estimate question

2017-09-06 Thread Simon Riggs
On 6 September 2017 at 04:14, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com> wrote:
> On Fri, Jul 21, 2017 at 4:10 AM, Thomas Munro
> <thomas.mu...@enterprisedb.com> wrote:
>>
>> Thanks.  Bearing all that in mind, I ran through a series of test
>> scenarios and discovered that my handling for JOIN_ANTI was wrong: I
>> thought that I had to deal with inverting the result, but I now see
>> that that's handled elsewhere (calc_joinrel_size_estimate() I think).
>> So neqjoinsel should just treat JOIN_SEMI and JOIN_ANTI exactly the
>> same way.
>
> I agree, esp. after looking at eqjoinsel_semi(), which is used for
> both semi and anti joins, it becomes more clear.
>
>>
>> That just leaves the question of whether we should try to handle the
>> empty RHS and single-value RHS cases using statistics.  My intuition
>> is that we shouldn't, but I'll be happy to change my intuition and
>> code that up if that is the feedback from planner gurus.
>
> Empty RHS can result from dummy relations also, which are produced by
> constraint exclusion, so may be that's an interesting case. Single
> value RHS may be interesting with partitioned table with all rows in a
> given partition end up with the same partition key value. But may be
> those are just different patches. I am not sure.
>
>>
>> Please find attached a new version, and a test script I used, which
>> shows a bunch of interesting cases.  I'll add this to the commitfest.
>
> I added some "stable" tests to your patch taking inspiration from the
> test SQL file. I think those will be stable across machines and runs.
> Please let me know if those look good to you.

Why isn't this an open item for PG10?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fix performance of generic atomics

2017-09-06 Thread Simon Riggs
On 5 September 2017 at 21:23, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Jeff Janes <jeff.ja...@gmail.com> writes:
>> What scale factor and client count? How many cores per socket?  It looks
>> like Sokolov was just starting to see gains at 200 clients on 72 cores,
>> using -N transaction.

...
> Moreover, it matters which primitive you're testing, on which platform,
> with which compiler, because we have a couple of layers of atomic ops
> implementations.
...

I think Sokolov was aiming at 4-socket servers specifically, rather
than at a general performance gain.

If there is no gain on 2-socket, at least there is no loss either.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Add psql variables showing server version and psql version.

2017-09-05 Thread Simon Riggs
On 5 September 2017 at 09:58, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> On Tue, Sep 5, 2017 at 12:16 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> Hm.  I think it would be a fine idea to push this change into v10,
>>> so that all psql versions that have \if would have these variables.
>>> However, I'm less enthused about adding them to the 9.x branches.
>>> Given the lack of \if, I'm not seeing a use-case that would justify
>>> taking any compatibility risk for.
>>>
>>> Opinions anyone?
>
>> As I see it, the risks of back-patching are:
>
>> 1. Different minor versions will have different behavior, and that
>> tends to create more problems than it solves.
>
> Yeah.  Whatever use-case you think might exist for these variables in,
> say, 9.6 is greatly weakened by the fact that releases through 9.6.5
> don't have them.

Makes sense

> However, since v10 is still in beta I don't think that argument
> applies to it.

OK


That does raise the further question of how psql behaves when we
connect to a pre-10 server, where we have SERVER_VERSION_NUM and yet it
is not set. How does
  \if SERVER_VERSION_NUM < x
behave if unset? Presumably it fails, even though the version *is* less than x.
Do we need some macro or suggested special handling?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal for changes to recovery.conf API

2017-09-05 Thread Simon Riggs
On 5 September 2017 at 06:47, Daniel Gustafsson <dan...@yesql.se> wrote:
>> On 27 Mar 2017, at 10:27, Simon Riggs <si...@2ndquadrant.com> wrote:
>>
>> On 7 March 2017 at 23:31, Josh Berkus <j...@berkus.org> wrote:
>>> On 03/02/2017 07:13 AM, David Steele wrote:
>>>> Hi Simon,
>>>>
>>>> On 2/25/17 2:43 PM, Simon Riggs wrote:
>>>>> On 25 February 2017 at 13:58, Michael Paquier <michael.paqu...@gmail.com> 
>>>>> wrote:
>>>>>
>>>>>> - trigger_file is removed.
>>>>>> FWIW, my only complain is about the removal of trigger_file, this is
>>>>>> useful to detect a trigger file on a different partition than PGDATA!
>>>>>> Keeping it costs also nothing..
>>>>>
>>>>> Sorry, that is just an error of implementation, not intention. I had
>>>>> it on my list to keep, at your request.
>>>>>
>>>>> New version coming up.
>>>>
>>>> Do you have an idea when the new version will be available?
>>>
>>> Please?  Having yet another PostgreSQL release go by without fixing
>>> recovery.conf would make me very sad.
>>
>> I share your pain, but there are various things about this patch that
>> make me uncomfortable. I believe we are looking for an improved design
>> not just a different design.
>>
>> I think the best time to commit such a patch is at the beginning of a
>> new cycle, so people have a chance to pick out pieces they don't like
>> and incrementally propose changes.
>>
>> So I am going to mark this MovedToNextCF, barring objections from
>> committers willing to make it happen in this release.
>
> Next CF has now become This CF, has there been any work on this highly sought
> after patch?  Would be good to get closure on this early in the v11 cycle.

I've not worked on this at all, so it isn't ready for re-review and commit.

I get the "lets do it early" thing and will try to extract a subset in October.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: long transactions on hot standby feedback replica / proof of concept

2017-09-05 Thread Simon Riggs
On 4 September 2017 at 09:06, Alexander Korotkov
<a.korot...@postgrespro.ru> wrote:

> Aborting read-only query on standby because of vacuum on master seems to be
> rather unexpected behaviour for user when hot standby feedback is on.
> I think we should work on this problem for v11.

Happy to help. My suggestion would be to discuss a potential theory of
operation and then code a patch.

As Alexander says, simply skipping truncation if standby is busy isn't
a great plan.

If we defer an action on standby replay, when and by whom will we apply
it? What happens if the standby is shut down or crashes while an action
is pending?

Perhaps altering the way truncation requires an AccessExclusiveLock
would help workloads on both master and standby? If a Relation knew it
had a pending truncation then scans wouldn't need to go past newsize.
Once older lockers have gone we could simply truncate without the
lock. It would need a few pushups but seems less scary than trying to
make pending standby actions work well enough to commit.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [Proposal] Allow users to specify multiple tables in VACUUM commands

2017-09-04 Thread Simon Riggs
On 5 September 2017 at 02:16, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Mon, Sep 4, 2017 at 11:47 PM, Bossart, Nathan <bossa...@amazon.com> wrote:
>> I've made this change in v14 of the main patch.
>>
>> In case others had opinions regarding the de-duplication patch, I've
>> attached that again as well.
>
> +   /*
> +* Create the relation list in a long-lived memory context so that it
> +* survives transaction boundaries.
> +*/
> +   old_cxt = MemoryContextSwitchTo(AutovacMemCxt);
> +   rangevar = makeRangeVar(tab->at_nspname, tab->at_relname, -1);
> +   rel = makeVacuumRelation(rangevar, NIL, tab->at_relid);
> +   rel_list = list_make1(rel);
> +   MemoryContextSwitchTo(old_cxt);
> That's way better, thanks for the new patch.
>
> So vacuum_multiple_tables_v14.patch is good for a committer in my
> opinion. With this patch, if the same relation is specified multiple
> times, then it gets vacuum'ed that many times. Using the same column
> multi-times results in an error as on HEAD, but that's not a new
> problem with this patch.
>
> So I would tend to think that the same column specified multiple times
> should cause an error, and that we could let VACUUM run work N times
> on a relation if it is specified this much. This feels more natural,
> at least to me, and it keeps the code simple.

ISTM there is no difference between
  VACUUM a, b
and
  VACUUM a; VACUUM b;

If we want to keep the code simple we must surely consider whether the
patch has any utility.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
On 4 September 2017 at 12:11, Robert Haas <robertmh...@gmail.com> wrote:
>
> It's not really true that the only alternatives to pglogical are
> proprietary.  Really, one could use any logical replication solution,
> and there have been multiple open source alternatives for a decade.

True, but it is by far the best solution out of the many choices.

Which is why next year, when upgrading from PG10 -> PG11, we will
mention it and at that point not mention the other, older solutions,
which were once our best.

>> Our docs mention pglogical already, so I don't see an issue with
>> mentioning it here.
>
> The existing reference is alongside a bunch of other third-party
> tools.  Mentioning it at the very top of our release notes would give
> it a pride of place with which I'm not too comfortable.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
On 4 September 2017 at 10:34, Andreas Joseph Krogh <andr...@visena.com> wrote:

> I'd like at big red warning "Logical decoding doesn't support Large Objects"
> in that case;
>
> "If upgrading from a 9.4 server or later, and you don't use Large Objects,
> external utilities using logical decoding, such as pglogical or
> proprietary alternatives, can also provide an alternate route,
> often with lower downtime."
>
> pg_upgrade or pg_dump is really the only option for us using LOs.

Sounds like we could change that, or at least add a WARNING that there are LOs.

Patches welcome.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
On 4 September 2017 at 12:39, Robert Haas <robertmh...@gmail.com> wrote:
> On Mon, Sep 4, 2017 at 7:14 AM, Magnus Hagander <mag...@hagander.net> wrote:
>> I agree that singling it out there is probably not the best idea. But a
>> sentence/paragraph saying that there are third party replication solutions
>> that can solve the problem, along with linking to the page with the list,
>> perhaps?
>
> Yeah, that seems fine.

A link to our docs page that covers those would be fine.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Release Note changes

2017-09-04 Thread Simon Riggs
Migration to Version 10

"A dump/restore using pg_dumpall, or use of pg_upgrade, is required
for those wishing to migrate data from any previous release."

This isn't true and is seriously misleading, since pglogical and
proprietary alternatives exist that were designed specifically to allow
this. Suggested additional wording would be...

"If upgrading from a 9.4 server or later, external utilities using
logical decoding, such as pglogical or proprietary alternatives, can
also provide an alternate route, often with lower downtime."

Our docs mention pglogical already, so I don't see an issue with
mentioning it here.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-01 Thread Simon Riggs
On 1 September 2017 at 15:19, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> I've added tests to the recent patch to show it works.
>
> I don't think those test cases prove anything (ie, they work fine
> on an unpatched server).  With a backslash maybe they would.
>
>> Any objection to me backpatching this, please say.
>
> This patch makes me itch.  Why is it correct for these three checks,
> and only these three checks out of the couple dozen uses of isTopLevel
> in standard_ProcessUtility, to instead do something else?

No problem, it was a quick fix, not a deep one.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More replication race conditions

2017-09-01 Thread Simon Riggs
On 31 August 2017 at 12:54, Simon Riggs <si...@2ndquadrant.com> wrote:

>> The above-described topic is currently a PostgreSQL 10 open item.  Simon,
>> since you committed the patch believed to have created it, you own this open
>> item.  If some other commit is more relevant or if this does not belong as a
>> v10 open item, please let us know.  Otherwise, please observe the policy on
>> open item ownership[1] and send a status update within three calendar days of
>> this message.  Include a date for your subsequent status update.  Testers may
>> discover new open items at any time, and I want to plan to get them all fixed
>> well in advance of shipping v10.  Consequently, I will appreciate your 
>> efforts
>> toward speedy resolution.  Thanks.
>>
>> [1] 
>> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
>
> I acknowledge receipt of the reminder and will fix by end of day tomorrow.

Applied. Thanks for the patch, Petr.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-01 Thread Simon Riggs
On 1 September 2017 at 08:09, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Fri, Sep 1, 2017 at 3:05 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> I'm not sure I see the use case for anyone using SAVEPOINTs in this
>> context, so simply throwing a good error message is enough.
>>
>> Clearly nobody is using this, so let's just lock the door. I don't
>> think fiddling with the transaction block state machine is anything
>> anybody wants to do in back branches, at least without a better reason
>> than this.
>
> I don't think you can say that, per se the following recent report:
> https://www.postgresql.org/message-id/cah2-v61vxnentfj2v-zd+ma-g6kqmjgd5svxou3jbvdzqh0...@mail.gmail.com

AIUI, nobody is saying this should work; we're just discussing how to
produce an error message. We should fix it, but not spend loads of
time on it.

I've added tests to the recent patch to show it works.

If there is any objection to me backpatching this, please say.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


prevent_multistatement_savepoints.v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Statement-level rollback

2017-09-01 Thread Simon Riggs
On 14 August 2017 at 23:58, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> On 2/28/17 02:39, Tsunakawa, Takayuki wrote:
>> The code for stored functions is not written yet, but I'd like your feedback 
>> for the specification and design based on the current patch.  I'll add this 
>> patch to CommitFest 2017-3.
>
> This patch needs to be rebased for the upcoming commit fest.

I'm willing to review this if the patch is going to be actively worked on.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Surjective functional indexes

2017-09-01 Thread Simon Riggs
On 1 September 2017 at 05:40, Thomas Munro
<thomas.mu...@enterprisedb.com> wrote:
> On Fri, Jun 9, 2017 at 8:08 PM, Konstantin Knizhnik
> <k.knizh...@postgrespro.ru> wrote:
>> Attached please find rebased version of the patch.
>> Now "projection" attribute is used instead of surjective/injective.
>
> Hi Konstantin,
>
> This still applies but it doesn't compile after commits 2cd70845 and
> c6293249.  You need to change this:
>
>   Form_pg_attribute att = RelationGetDescr(indexDesc)->attrs[i];
>
> ... to this:
>
>   Form_pg_attribute att = TupleDescAttr(RelationGetDescr(indexDesc), i);
>
> Thanks!

Does the patch work fully with that change? If so, I will review.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [bug fix] Savepoint-related statements terminates connection

2017-09-01 Thread Simon Riggs
On 17 May 2017 at 08:38, Tsunakawa, Takayuki
<tsunakawa.ta...@jp.fujitsu.com> wrote:
> From: Michael Paquier [mailto:michael.paqu...@gmail.com]
>> On Fri, Mar 31, 2017 at 9:58 PM, Ashutosh Bapat
>> <ashutosh.ba...@enterprisedb.com> wrote:
>> > Then the question is why not to allow savepoints as well? For that we
>> > have to fix transaction block state machine.
>>
>> I agree with this argument. I have been looking at the patch, and what it
>> does is definitely incorrect. Any query string including multiple queries
>> sent to the server is executed as a single transaction. So, while the current
>> behavior of the server is definitely incorrect for savepoints in this case,
>> the proposed patch does not fix anything but actually makes things worse.
>> I think that instead of failing, savepoints should be able to work properly.
>> As you say cursors are handled correctly, savepoints should fall under the
>> same rules.
>
> Yes, I'm in favor of your opinion.  I'll put more thought into whether it's 
> feasible with invasive code.

I'm not sure I see the use case for anyone using SAVEPOINTs in this
context, so simply throwing a good error message is enough.

Clearly nobody is using this, so let's just lock the door. I don't
think fiddling with the transaction block state machine is anything
anybody wants to do in back branches, at least without a better reason
than this.

Simpler version of original patch attached.
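
For reference, the failing shape is a single simple-protocol query
string containing several commands, hypothetically something like:

  BEGIN; SAVEPOINT sp1; SELECT 1; RELEASE SAVEPOINT sp1; COMMIT;

The intent of the patch is to reject such savepoint use with a proper
error rather than terminating the connection.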

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


prevent_multistatement_savepoints.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More replication race conditions

2017-08-31 Thread Simon Riggs
On 27 August 2017 at 03:32, Noah Misch <n...@leadboat.com> wrote:
> On Fri, Aug 25, 2017 at 12:09:00PM +0200, Petr Jelinek wrote:
>> On 24/08/17 19:54, Tom Lane wrote:
>> > sungazer just failed with
>> >
>> > pg_recvlogical exited with code '256', stdout '' and stderr 
>> > 'pg_recvlogical: could not send replication command "START_REPLICATION 
>> > SLOT "test_slot" LOGICAL 0/0 ("include-xids" '0', "skip-empty-xacts" 
>> > '1')": ERROR:  replication slot "test_slot" is active for PID 8913148
>> > pg_recvlogical: disconnected
>> > ' at 
>> > /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/recovery/../../../src/test/perl/PostgresNode.pm
>> >  line 1657.
>> >
>> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer=2017-08-24%2015%3A16%3A10
>> >
>> > Looks like we're still not there on preventing replication startup
>> > race conditions.
>>
>> Hmm, that looks like "by design" behavior. Slot acquiring will throw
>> error if the slot is already used by somebody else (slots use their own
>> locking mechanism that does not wait). In this case it seems the
>> walsender which was using slot for previous previous step didn't finish
>> releasing the slot by the time we start new command. We can work around
>> this by changing the test to wait perhaps.
>
> [Action required within three days.  This is a generic notification.]
>
> The above-described topic is currently a PostgreSQL 10 open item.  Simon,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> v10 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within three calendar days of
> this message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping v10.  Consequently, I will appreciate your efforts
> toward speedy resolution.  Thanks.
>
> [1] 
> https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com

I acknowledge receipt of the reminder and will fix by end of day tomorrow.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MAIN, Uncompressed?

2017-08-29 Thread Simon Riggs
On 26 August 2017 at 05:40, Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:
> On 26/08/17 12:18, Simon Riggs wrote:
>
>> On 25 August 2017 at 20:53, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>>
>>> Greg Stark <st...@mit.edu> writes:
>>>>
>>>> I think this is a particularly old piece of code and we're lucky the
>>>> default heuristics have served well for all this time because I doubt
>>>> many people fiddle with these storage attributes. The time may have
>>>> come to come up with a better UI for the storage attributes because
>>>> people are doing new things (like json) and wanting more control over
>>>> this heuristic.
>>>
>>> Yeah, I could get behind a basic rethinking of the tuptoaster control
>>> knobs.  I'm just not in love with Simon's specific proposal, especially
>>> not if we can't think of a better name for it than "MAINU".
>>
>> Extended/External would be just fine if you could set the toast
>> target, so I think a better suggestion would be to make "toast_target"
>> a per-attribute option.
>>
>
> +1, have thought about this myself previously ... thank you for bringing it
> up!

OK, so table-level option for "toast_tuple_target", not attribute-level option

The attached patch and test show this concept is useful and doesn't
affect existing data.

For 4x 4000 byte rows:
* by default we use 1 heap block and 3 toast blocks
* toast_tuple_target=4080 uses 2 heap blocks and 0 toast blocks

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


toast_tuple_target.v1.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MAIN, Uncompressed?

2017-08-25 Thread Simon Riggs
On 25 August 2017 at 20:53, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Greg Stark <st...@mit.edu> writes:
>> I think this is a particularly old piece of code and we're lucky the
>> default heuristics have served well for all this time because I doubt
>> many people fiddle with these storage attributes. The time may have
>> come to come up with a better UI for the storage attributes because
>> people are doing new things (like json) and wanting more control over
>> this heuristic.
>
> Yeah, I could get behind a basic rethinking of the tuptoaster control
> knobs.  I'm just not in love with Simon's specific proposal, especially
> not if we can't think of a better name for it than "MAINU".

Extended/External would be just fine if you could set the toast
target, so I think a better suggestion would be to make "toast_target"
a per-attribute option.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MAIN, Uncompressed?

2017-08-25 Thread Simon Riggs
On 25 August 2017 at 14:08, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> On 25 August 2017 at 13:21, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> If you know compression isn't useful, but you don't want to fail on
>>> wide values, then "external" should serve the purpose.
>
>> Well, almost. External toasts at 2048-ish bytes whereas Main toasts at
>> 8160 bytes.
>> The rows are typically near 4kB long, so if marked External they would
>> always be toasted.
>> It's desirable to have the full row in the heap block, rather than
>> have to access heap -> toast index -> toast blocks in all cases.
>> The data is also incompressible, so Main just wastes time on insert.
>> Hence, we have a missing option.
>
> Maybe, but the use case seems mighty narrow.

JSON blobs between 2kB and 8160 bytes are very common.

String length is maybe a Poisson distribution, definitely not uniform.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MAIN, Uncompressed?

2017-08-25 Thread Simon Riggs
On 25 August 2017 at 13:21, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> Main is roughly what is wanted, yet it always tries to compress. If
>> you already know that won't be useful it should be possible to turn
>> compression off.
>
> If you know compression isn't useful, but you don't want to fail on
> wide values, then "external" should serve the purpose.

Well, almost. External toasts at 2048-ish bytes whereas Main toasts at
8160 bytes.

The rows are typically near 4kB long, so if marked External they would
always be toasted.

It's desirable to have the full row in the heap block, rather than
have to access heap -> toast index -> toast blocks in all cases.

The data is also incompressible, so Main just wastes time on insert.

Hence, we have a missing option.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MAIN, Uncompressed?

2017-08-25 Thread Simon Riggs
On 25 August 2017 at 12:24, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Simon Riggs <si...@2ndquadrant.com> writes:
>> It looks like we need a new Column Storage option for MAIN, Uncompressed.
>> We have these Column Storage options
>> Plain - inline only, uncompressed
>> Main - inline until external as last resort, compressible
>> External - external, uncompressed
>> Extended - external, compressible
>
>> So there is no option for Main, but not compressible...
>
> Doesn't Plain serve the purpose?

No, because that just gives an error if you try to insert a large column value.

> In point of fact, though, "never inline and never compress" is not really
> a useful option, as it can be more easily read as "fail on wide values".

Agreed, but that is not what I am proposing.

Main is roughly what is wanted, yet it always tries to compress. If
you already know that won't be useful it should be possible to turn
compression off.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] MAIN, Uncompressed?

2017-08-25 Thread Simon Riggs
It looks like we need a new Column Storage option for MAIN, Uncompressed.

We have these Column Storage options
Plain - inline only, uncompressed
Main - inline until external as last resort, compressible
External - external, uncompressed
Extended - external, compressible

So there is no option for Main, but not compressible...
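
(For context, these are set per column with existing syntax, e.g.:

  ALTER TABLE t ALTER COLUMN payload SET STORAGE MAIN;      -- inline preferred, compressible
  ALTER TABLE t ALTER COLUMN payload SET STORAGE EXTERNAL;  -- external, uncompressed

where t and payload are placeholder names.)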

With reference to code... there seems to be no way to skip step 3

/* --
 * Compress and/or save external until data fits into target length
 *
 *  1: Inline compress attributes with attstorage 'x', and store very
 * large attributes with attstorage 'x' or 'e' external immediately
 *  2: Store attributes with attstorage 'x' or 'e' external
 *  3: Inline compress attributes with attstorage 'm'
 *  4: Store attributes with attstorage 'm' external
 * --
 */

Not sure what to call this new option? MAINU?

Objections?

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Explicit relation name in VACUUM VERBOSE log

2017-08-23 Thread Simon Riggs
On 23 August 2017 at 08:18, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Wed, Aug 23, 2017 at 10:59 AM, Masahiko Sawada <sawada.m...@gmail.com> 
> wrote:
>> On Tue, Aug 22, 2017 at 3:23 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>>> e.g.
>>> replace RelationGetRelationName() with
>>> RelationGetOptionallyQualifiedRelationName()
>>> and then control whether we include this new behaviour with
>>> log_qualified_object_names = on | off
>>
>> Is there any case where we don't want to get non-qualified object
>> names? If users want to get the same log message as what they got so
>> far, it would be better to have a GUC that allows us to switch between
>> the existing behavior and the forcibly logging qualified object names.
>
> I can imagine plenty of cases where providing more information is
> valuable, but not really any where it makes more sense to provide less
> information, so -1 for a GUC to control such behavior. I would imagine
> that people are not going to set it anyway. A RangeVar may not set the
> schema_name, so I would suggest to rely on that to decide if the error
> messages show the schema name or name.

We can put the GUC in later if there are objections about backwards
compatibility, so I am OK to leave it out.

> Still we are only talking about
> two messages in the vacuum code paths, which are the ones close to the
> checks where is assigned the OID of the relation with a RangeVar.

The proposal is to change all log messages so we have consistency, not
just VACUUM.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why not parallel seq scan for slow functions

2017-08-22 Thread Simon Riggs
On 21 August 2017 at 11:42, Amit Kapila <amit.kapil...@gmail.com> wrote:

>> or of 2)
>> treating cost = speed, so we actually reduce the cost of a parallel
>> plan rather than increasing it so it is more likely to be picked.
>>
>
> Yeah, this is what is being currently followed for costing of parallel
> plans and this patch also tries to follow the same.

OK, I understand this better now, thanks.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Explicit relation name in VACUUM VERBOSE log

2017-08-22 Thread Simon Riggs
On 15 August 2017 at 02:27, Masahiko Sawada <sawada.m...@gmail.com> wrote:

> Is there any reason why we don't
> write an explicit name in vacuum verbose logs?

None. Sounds like a good idea.

> If not, can we add
> schema names to be more clear?

Yes, we can. I'm not sure why you would do this only for VACUUM
though? I see many messages in various places that need the same treatment.

I would also be inclined to do this by changing only the string
presented, not the actual message string.
e.g.
replace RelationGetRelationName() with
RelationGetOptionallyQualifiedRelationName()

and then control whether we include this new behaviour with
log_qualified_object_names = on | off

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: Add --no-comments to skip COMMENTs with pg_dump

2017-08-21 Thread Simon Riggs
On 7 August 2017 at 16:14, Fabrízio de Royes Mello
<fabriziome...@gmail.com> wrote:
>
> On Mon, Aug 7, 2017 at 10:43 AM, Robins Tharakan <thara...@gmail.com> wrote:
>>
>> On 20 July 2017 at 05:14, Robins Tharakan <thara...@gmail.com> wrote:
>>>
>>> On 20 July 2017 at 05:08, Michael Paquier <michael.paqu...@gmail.com>
>>> wrote:
>>>>
>>>> On Wed, Jul 19, 2017 at 8:59 PM, Fabrízio de Royes Mello wrote:
>>>> > You should add the properly sgml docs for this pg_dumpall change also.
>>>>
>>>> Tests of pg_dump go to src/bin/pg_dump/t/ and tests for objects in
>>>> extensions are in src/test/modules/test_pg_dump, but you just care
>>>> about the former with this patch. And if you implement some new tests,
>>>> look at the other tests and base your work on that.
>>>
>>>
>>> Thanks Michael / Fabrízio.
>>>
>>> Updated patch (attached) additionally adds SGML changes for pg_dumpall.
>>> (I'll try to work on the tests, but sending this nonetheless.)
>>>
>>
>> Attached is an updated patch (v4) with basic tests for pg_dump /
>> pg_dumpall.
>> (Have zipped it since patch size jumped to ~40kb).
>>
>
> The patch applies cleanly to current master and all tests run without
> failures.
>
> I also test against all current supported versions (9.2 ... 9.6) and didn't
> find any issue.
>
> Changed status to "ready for commiter".

I get the problem, but not this solution. It's too specific and of
zero other value, yet not even exactly specific to the issue. We
definitely don't want --no-extension-comments, but --no-comments
removes ALL comments just to solve a weird problem. (Meta)Data loss,
surely?

Thinking ahead, are we going to add a new --no-objecttype switch every
time someone wants it?

It would make more sense to add something more general and extensible,
such as --exclude-objects=comment or a similar name.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why not parallel seq scan for slow functions

2017-08-21 Thread Simon Riggs
On 21 August 2017 at 10:08, Amit Kapila <amit.kapil...@gmail.com> wrote:

> Thoughts?

This seems like a very basic problem for parallel queries.

The problem seems to be that we are calculating the cost of the plan
rather than the speed of the plan.

Clearly, a parallel task has a higher overall cost but a lower time to
complete if resources are available.

We have the choice of 1) adding a new optimizable quantity, or of 2)
treating cost = speed, so we actually reduce the cost of a parallel
plan rather than increasing it so it is more likely to be picked.

-- 
Simon Riggshttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Account for catalog snapshot in PGXACT->xmin updates.

2017-08-21 Thread Simon Riggs
On 19 August 2017 at 20:54, Noah Misch <n...@leadboat.com> wrote:
> On Tue, Dec 06, 2016 at 01:59:18PM -0500, Robert Haas wrote:
>> On Tue, Dec 6, 2016 at 1:36 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> > Robert Haas <robertmh...@gmail.com> writes:
>> >> On Tue, Nov 15, 2016 at 3:55 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> >>> Account for catalog snapshot in PGXACT->xmin updates.
>> >
>> >> It seems to me that this makes it possible for
>> >> ThereAreNoPriorRegisteredSnapshots() to fail spuriously.  The only
>> >> core code that calls that function is in copy.c, and apparently we
>> >> never reach that point with the catalog snapshot set.  But that seems
>> >> fragile.
>
> I recently noticed this by way of the copy2 regression test failing, in 9.4
> and later, under REPEATABLE READ and SERIALIZABLE.  That failure started with
> $SUBJECT.  Minimal example:
>
> CREATE TABLE vistest (a text);
> BEGIN ISOLATION LEVEL REPEATABLE READ;
> TRUNCATE vistest;
> COPY vistest FROM stdin CSV FREEZE;
> x
> \.
>
> Before $SUBJECT, the registered snapshot count was one.  Since $SUBJECT, the
> catalog snapshot raises the count to two.
>
>> > Hm.  If there were some documentation explaining what
>> > ThereAreNoPriorRegisteredSnapshots is supposed to do, it would be possible
>> > for us to have a considered argument as to whether this patch broke it or
>> > not.  As things stand, it is not obvious whether it ought to be ignoring
>> > the catalog snapshot or not; one could construct plausible arguments
>
> The rows COPY FREEZE writes will be visible (until deleted) to every possible
> catalog snapshot, so whether CatalogSnapshot==NULL doesn't matter.  It may be
> useful to error out if a catalog scan is in-flight, but the registration for
> CatalogSnapshot doesn't confirm or refute that.
>
>> > either way.  It's not even very obvious why both 0 and 1 registered
>> > snapshots should be considered okay; that looks a lot more like a hack
>> > than reasoned-out behavior.  So I'm satisfied if COPY FREEZE does what
>
> Fair summary.  ThereAreNoPriorRegisteredSnapshots() has functioned that way
> since an early submission[1], and I don't see that we ever discussed it
> explicitly.  Adding Simon for his recollection.

The intention, IIRC, was to allow the current snapshot (i.e. 1) but no
earlier (i.e. prior) snapshots (so not >=2).

The name was chosen so it was (hopefully) clear what it did --
ThereAreNoPriorRegisteredSnapshots
...but that was before we had MVCC scans on catalogs, which changed
things in unforeseen ways.

The right fix is surely a more principled approach, renaming the
function so that it reflects the current snapshot types.

As Noah mentions, there are potential anomalies in other transactions,
but this was understood, which is why we have a specific command
keyword to acknowledge user acceptance of these behaviours.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: global index

2017-08-20 Thread Simon Riggs
On 18 August 2017 at 15:40, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> Ildar Musin wrote:
>
>> While we've been developing pg_pathman extension one of the most frequent
>> questions we got from our users was about global index support. We cannot
>> provide it within an extension. And I couldn't find any recent discussion
>> about someone implementing it. So I'm thinking about giving it a shot and
>> start working on a patch for postgres.
>>
>> One possible solution is to create an extended version of item pointer which
>> would store relation oid along with block number and position:
>
> I've been playing with the index code in order to allow indirect tuples,
> which are stored in a format different from IndexTupleData.
>
> I've been adding an "InMemoryIndexTuple" (maybe there's a better name)
> which internally has pointers to both IndexTupleData and
> IndirectIndexTupleData, which makes it easier to pass around the index
> tuple in either format.

> It's very easy to add an OID to that struct,
> which then allows to include the OID in either an indirect index tuple
> or a regular one.

If there is a unique index then there is no need for that OID.
Additional per-tuple data makes the index even bigger and even less
useful, so we need to count that as a further disadvantage of global
indexes.

I have a very clear statement from a customer recently that "We will
never use global indexes", based upon their absolute uselessness in
Oracle.

> Then, wherever we're using IndexTupleData in the index AM code, we would
> replace it with InMemoryIndexTuple.  This should satisfy both your use
> case and mine.

Global indexes are a subset of the indirect index use cases, but
luckily not the only one.
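
For concreteness, here is a hedged C sketch of the extended item
pointer Ildar describes above. Plain fixed-width integers stand in for
the real BlockIdData and OffsetNumber types, and the struct and field
names are illustrative, not a proposed on-disk format:

#include <stdint.h>

/*
 * Illustrative sketch only.  A regular heap TID is (block number, line
 * pointer offset) and is meaningful only within a single relation.  A
 * global index spanning partitions would also need to record which
 * relation the tuple lives in, e.g. by its OID.
 */
typedef struct GlobalItemPointerData
{
    uint32_t    gip_relid;      /* OID of the heap relation (partition) */
    uint32_t    gip_blkid;      /* block number within that relation */
    uint16_t    gip_posid;      /* line pointer offset within the block */
} GlobalItemPointerData;

Even this minimal form adds four bytes per index entry, plus alignment
padding, compared with a local TID, which is the size cost objected to
above.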

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] recovery_target_time = 'now' is not an error but still impractical setting

2017-08-18 Thread Simon Riggs
On 18 August 2017 at 07:30, Michael Paquier <michael.paqu...@gmail.com> wrote:
> On Thu, Aug 17, 2017 at 6:24 PM, Simon Riggs <si...@2ndquadrant.com> wrote:
>> On 15 August 2017 at 15:37, Piotr Stefaniak <postg...@piotr-stefaniak.me> 
>> wrote:
>>
>>> One thing I tried was a combination of recovery_target_action =
>>> 'shutdown' and recovery_target_time = 'now'. The result is surprising
>>
>> Indeed, special timestamp values were never considered in the design,
>> so I'm not surprised they don't work and have never been tested.
>
> We could always use a TRY/CATCH block and add an error in
> GetCurrentDateTime and GetCurrentTimeUsec if they are called outside a
> transaction context. Better safe than sorry.
>
>> Your suggestion of "furthest" is already the default behaviour.
>>
>> Why are you using 'now'? Why would you want to pick a randomly
>> selected end time?
>
> "now" is not much interesting, targets in the past are more, like
> 'yesterday'. This could create back an instance back to the beginning
> of the previous day, simplifying scripts creating recovery.conf a bit,
> even if that's not much simplification as we are talking about
> creating a timestamp string.

I can't see any value in allowing imprecise and effectively random timestamps.

ISTM that if we care, it would be better to simply exclude the 7 named
special timestamp values up front, as in the attached patch.
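
For illustration, here is a hedged C sketch of that idea; it is not
the attached patch's text, and the function and array names are
illustrative. The list is the 7 named special timestamp input values
that PostgreSQL accepts:

#include <stdbool.h>
#include <stddef.h>
#include <strings.h>            /* strcasecmp */

static const char *const special_timestamp_inputs[] = {
    "epoch", "infinity", "-infinity", "now",
    "today", "tomorrow", "yesterday"
};

/* Sketch: return true if a recovery_target_time string should be
 * rejected before it is ever interpreted as a timestamp. */
static bool
is_special_timestamp_input(const char *value)
{
    size_t      i;

    for (i = 0;
         i < sizeof(special_timestamp_inputs) / sizeof(special_timestamp_inputs[0]);
         i++)
    {
        if (strcasecmp(value, special_timestamp_inputs[i]) == 0)
            return true;
    }
    return false;
}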

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


exclude_special_values_in_recovery_target_time.v1.patch



Re: [HACKERS] recovery_target_time = 'now' is not an error but still impractical setting

2017-08-17 Thread Simon Riggs
On 15 August 2017 at 15:37, Piotr Stefaniak <postg...@piotr-stefaniak.me> wrote:

> One thing I tried was a combination of recovery_target_action =
> 'shutdown' and recovery_target_time = 'now'. The result is surprising

Indeed, special timestamp values were never considered in the design,
so I'm not surprised they don't work and have never been tested.

Your suggestion of "furthest" is already the default behaviour.

Why are you using 'now'? Why would you want to pick a randomly
selected end time?

recovery_target_time = 'allballs' sounds fun for recovering corrupted databases

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] SCRAM salt length

2017-08-17 Thread Simon Riggs
On 16 August 2017 at 14:10, Peter Eisentraut
<peter.eisentr...@2ndquadrant.com> wrote:
> The SCRAM salt length is currently set as
>
> /* length of salt when generating new verifiers */
> #define SCRAM_DEFAULT_SALT_LEN 12
>
> without further comment.
>
> I suspect that this length was chosen based on the example in RFC 5802
> (SCRAM-SHA-1) section 5.  But the analogous example in RFC 7677
> (SCRAM-SHA-256) section 3 uses a length of 16.  Should we use that instead?

16 preferred, IMHO
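
If so, the change would presumably be a one-line constant bump; the
following is a sketch only, assuming the definition stays as quoted
above:

/* length of salt when generating new verifiers */
#define SCRAM_DEFAULT_SALT_LEN 16   /* was 12; matches the RFC 7677 example */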

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] foreign table creation and NOT VALID check constraints

2017-08-01 Thread Simon Riggs
On 1 August 2017 at 08:37, Amit Langote <langote_amit...@lab.ntt.co.jp> wrote:
> On 2017/08/01 15:22, Simon Riggs wrote:
>> On 1 August 2017 at 07:16, Amit Langote <langote_amit...@lab.ntt.co.jp> 
>> wrote:
>>> In f27a6b15e656 (9.6 & later), we decided to "Mark CHECK constraints
>>> declared NOT VALID valid if created with table."  In retrospect,
>>> constraints on foreign tables should have been excluded from consideration
>>> in that commit, because the thinking behind the aforementioned commit
>>> (that the constraint is trivially validated because the newly created
>>> table contains no data) does not equally apply to the foreign tables case.
>>>
>>> Should we do something about that?
>>
>> In what way does it not apply? Do you have a failure case?
>
> Sorry for not mentioning the details.
>
> I was thinking that a foreign table starts containing the data of the
> remote object it points to the moment it's created (unlike local tables
> which contain no data to begin with).  If a user is not sure whether a
> particular constraint being created in the same command holds for the
> remote data, they may mark it as NOT VALID and hope that the system treats
> the constraint as such until such a time that they mark it valid by
> running ALTER TABLE VALIDATE CONSTRAINT.  Since the planner is the only
> consumer of pg_constraint.convalidated, that means the user expects the
> planner to ignore such a constraint.  Since f27a6b15e656, users can no
> longer rely on that.

For foreign tables, it sounds like an issue. I don't think it exists
for normal tables.
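
Here is a hedged sketch of the rule being suggested, in plain C rather
than the actual tablecmds.c logic, with illustrative names:

#include <stdbool.h>

/* Illustrative relation kinds; PostgreSQL uses relkind characters. */
typedef enum RelKind
{
    REL_LOCAL_TABLE,
    REL_FOREIGN_TABLE
} RelKind;

/*
 * Sketch: a CHECK constraint declared NOT VALID may only be silently
 * promoted to valid when it is created together with a local table,
 * which is guaranteed to be empty.  A foreign table exposes remote
 * data from the moment it exists, so NOT VALID must be honoured there.
 */
static bool
constraint_trivially_valid(RelKind kind, bool created_with_table)
{
    return created_with_table && kind == REL_LOCAL_TABLE;
}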

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



