Re: [HACKERS] Fix typo in src/backend/utils/mmgr/README

2013-12-25 Thread Etsuro Fujita
Peter Eisentraut wrote:
 On 12/24/13, 1:33 AM, Etsuro Fujita wrote:
  This is a small patch to fix a typo in src/backend/utils/mmgr/README.

 I don't think that change is correct.

I've fixed the patch, though it might still be wrong.  I'm not a native
English speaker ;).  Please find attached an updated version of the patch.

Thanks,

Best regards,
Etsuro Fujita


typofix-2.patch
Description: Binary data



Re: [HACKERS] varattno remapping

2013-12-25 Thread Craig Ringer
On 12/24/2013 11:17 PM, Dean Rasheed wrote:
 I don't think this bit is quite right.
 
 It's not correct to assume that all the view columns are simple
 references to columns of the base relation --- auto-updatable views
 may now contain a mix of updatable and non-updatable columns, so some
 of the view columns may be arbitrary expressions.

Ah - it looks like I'd checked against 9.3 and missed the relaxation of
those requirements.

 There is already code in rewriteTargetView() that does something very
 similar (to the whole parsetree, rather than just the returning list)
 with 2 function calls:

Copying the view tlist and then adjusting it is a much smarter way to do
it. I should've seen that the pull-up code could be adapted to deal with
the RETURNING list, so thank you.

It's a cleaner way to do it.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [bug fix] pg_ctl stop times out when it should respond quickly

2013-12-25 Thread MauMau

From: Peter Eisentraut pete...@gmx.net

This patch breaks the regression tests:

xml  ... ok
test stats... ok
== shutting down postmaster   ==
(waits a long time)
pg_ctl: server does not shut down

pg_regress: could not stop postmaster: exit code was 256


pg_ctl timed out waiting for the zombie postgres.

maumau 19621 18849  0 15:21 pts/9    00:00:00 [postgres] <defunct>
maumau 20253 18849  0 15:22 pts/9    00:00:00 
/maumau/postgresql-9.4/src/test/regress/./tmp_check/install//maumau/pgsql/bin/pg_ctl 
stop -D /maumau/postgresql-9.4/src/test/regress/./tmp_check/data -s -m fast


pg_regress must wait for postgres to terminate by calling waitpid(), because 
it invoked postgres directly.  The attached pg_regress_pg_stop.patch does 
this.  If you like the combination of this and the original fix for pg_ctl 
in one patch, please use pg_stop_fail_v3.patch.
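
Roughly, the change is of this shape (a sketch only; the function name and
error handling here are illustrative, not copied from the attached patch):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

/*
 * Hypothetical sketch: reap the postmaster that pg_regress itself
 * started, so it cannot linger as a zombie after "pg_ctl stop".
 */
static void
reap_postmaster(pid_t postmaster_pid)
{
	int			status;

	if (waitpid(postmaster_pid, &status, 0) != postmaster_pid)
	{
		fprintf(stderr, "could not wait for postmaster: %s\n",
				strerror(errno));
		return;
	}

	if (WIFEXITED(status) && WEXITSTATUS(status) != 0)
		fprintf(stderr, "postmaster exited with status %d\n",
				WEXITSTATUS(status));
}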


Sorry for causing trouble.


Regards
MauMau


pg_stop_fail_v3.patch
Description: Binary data


pg_regress_pg_stop.patch
Description: Binary data



[HACKERS] BDR-project

2013-12-25 Thread Andreas Joseph Krogh
Hi hackers.

Ref: http://wiki.postgresql.org/wiki/BDR_Project

Is implementing the main BDR features in core Postgres a probable objective
for version 9.4?

Thanks.

--
 Andreas Joseph Krogh andr...@officenet.no      mob: +47 909 56 963
 Senior Software Developer / CTO - OfficeNet AS - http://www.officenet.no
 Public key: http://home.officenet.no/~andreak/public_key.asc

[HACKERS] [BUG FIX] Compare returned value by socket() against PGINVALID_SOCKET instead of 0

2013-12-25 Thread Joel Jacobson
Hi,

I've tried to fix some bugs reported by Andrey Karpov in an article at
http://www.viva64.com/en/b/0227/

The value returned by socket() is unsigned on Windows, so it cannot be
checked for being less than zero to detect an error.  Instead,
PGINVALID_SOCKET should be used, which is hard-coded to -1 on
non-Windows platforms.
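
To illustrate the pattern (a sketch only, not the patch itself; it assumes
PostgreSQL's pgsocket typedef and the PGINVALID_SOCKET macro):

#include "postgres.h"		/* pgsocket, PGINVALID_SOCKET, ereport */

#include <sys/socket.h>

static pgsocket
open_stream_socket(void)
{
	pgsocket	sock = socket(AF_INET, SOCK_STREAM, 0);

	/*
	 * Checking "sock < 0" is wrong on Windows, where SOCKET is an
	 * unsigned type and the test can never be true.  PGINVALID_SOCKET
	 * is INVALID_SOCKET on Windows and -1 everywhere else, so this
	 * comparison works on both.
	 */
	if (sock == PGINVALID_SOCKET)
		ereport(ERROR,
				(errcode_for_socket_access(),
				 errmsg("could not create socket: %m")));

	return sock;
}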

Joel Jacobson


use-PGINVALID_SOCKET.patch
Description: Binary data



[HACKERS] [BUG FIX] Version number expressed in octal form by mistake

2013-12-25 Thread Joel Jacobson
As reported by Andrey Karpov in his article
http://www.viva64.com/en/b/0227/, the version number is expressed in
octal form: 070100 should be changed to 70100.
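
To illustrate: a leading zero makes a C integer literal octal, so the two
spellings denote different numbers:

#include <stdio.h>

int
main(void)
{
	printf("%d\n", 070100);		/* octal literal: prints 28736 */
	printf("%d\n", 70100);		/* decimal, as intended: prints 70100 */
	return 0;
}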

Attached patch fixes the reported issue.


octal-typo.patch
Description: Binary data



[HACKERS] [BUGFIX] Typos when using the memcmp() function.

2013-12-25 Thread Joel Jacobson
As reported by Andrey Karpov in his article
http://www.viva64.com/en/b/0227/, a parenthesis has been misplaced.
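
To illustrate the bug class (a generic example, not the actual code the
patch touches): when the closing parenthesis slips, the "== 0" ends up as
memcmp()'s length argument, so zero bytes are compared:

#include <stdio.h>
#include <string.h>

int
main(void)
{
	char		a[16] = "abc";
	char		b[16] = "xyz";

	/*
	 * Broken: the length argument is (sizeof(a) == 0), i.e. 0, so
	 * memcmp() compares nothing, returns 0, and this branch can
	 * never be taken.
	 */
	if (memcmp(a, b, sizeof(a) == 0))
		puts("never reached");

	/* Intended: compare sizeof(a) bytes, then test the result. */
	if (memcmp(a, b, sizeof(a)) == 0)
		puts("buffers are equal");

	return 0;
}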

Attached patch fixes the problem.


memcmp-parenthesis-typo.patch
Description: Binary data



Re: [HACKERS] [PATCH] Negative Transition Aggregate Functions (WIP)

2013-12-25 Thread David Rowley
On Sun, Dec 22, 2013 at 2:42 AM, David Rowley dgrowle...@gmail.com wrote:

 On Sun, Dec 22, 2013 at 1:01 AM, Erik Rijkers e...@xs4all.nl wrote:

 On Sat, December 21, 2013 12:52, David Rowley wrote:
  On Sun, Dec 22, 2013 at 12:49 AM, Erik Rijkers e...@xs4all.nl wrote:
 
  On Sat, December 21, 2013 12:38, David Rowley wrote:
   [ inverse_transition_functions_v1.2.patch.gz  ]
   Please find attached an updated patch which should remove the
 duplicate
  OID
   problem you saw.
 
  That fixes it, thanks
 
  There is 1 of 141 failed tests:
 
  window   ... FAILED
 
 
  That's strange, it passes here.
 
  Would you be able to send me the regression diff file?
 

 attached...


 Thanks

 This was just down to some missing trailing white space on the expected
 results. I've fixed these up and attached another patch.


I've attached an updated patch which is re-merged to current head.

I've also done a bit more work on the patch:
- Added inverse transition functions for all the stddev() type aggregates
  for all int types.
- Added an inverse transition function for SUM with bigint.
- Added pg_dump support.

I've yet to do SUM or AVG for numeric types due to some concern about
ending up with more digits after the decimal point after inverse
transitions take place.
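
Conceptually (this is just a sketch of the idea, not the actual aggregate
API or executor code), an inverse transition lets a moving aggregate
"subtract" the value leaving the window frame instead of rebuilding its
state from scratch, which is where the numeric digits concern comes from:
the add/subtract round trip can leave the state looking different from a
fresh computation.

/*
 * Sketch only: a moving SUM over integers.  The forward transition
 * adds the value entering the frame; the inverse transition removes
 * the value leaving it, so the state never has to be rebuilt.
 */
typedef struct
{
	long long	sum;
} sum_state;

static void
sum_sfunc(sum_state *state, int value)
{
	state->sum += value;
}

static void
sum_invfunc(sum_state *state, int value)
{
	state->sum -= value;
}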

Regards

David Rowley


inverse_transition_functions_v1.5.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-12-25 Thread Andres Freund
Hi,

On 2013-12-24 13:18:36 -0800, Peter Geoghegan wrote:
 On Tue, Dec 24, 2013 at 4:09 AM, Andres Freund and...@2ndquadrant.com wrote:
  I don't really see the lack of review as being crucial at this point. At
  least I have quite some doubts about the approach you've chosen and I
  have voiced it - so have others.
 
 Apparently you haven't been keeping up with this thread. The approach
 that Heikki outlined with his POC patch was demonstrated to deadlock
 in an unprincipled manner - it took me a relatively long time to
 figure this out because I didn't try a simple enough test-case.

So? I still fear that your approach will end up being way too
complicated and full of layering violations. I didn't say it's a no-go
(not that I have veto powers, even if I'd consider it one).

And yes, I still think that promise tuples might be a better solution
regardless of the issues you mentioned, but you know what? That doesn't
matter. My thinking it's the better approach is primarily based on gut
feeling, and I clearly haven't voiced clear enough reasons to convince
you. So going with your own, possibly more substantiated, gut feeling
is perfectly alright. Unless I go ahead and write a POC of my own, at
least ;)

 In hindsight I should have known better than to think that people
 would be willing to defer discussion of a more acceptable value
 locking implementation to consider the interactions between the
 different subsystems, which I felt were critical and warranted
 up-front discussion, a belief which has now been borne out.
 Lesson learned. It's a pity that that's the way things are, because that
 discussion could have been really useful, and saved us all some time.

Whoa? What? Not convincing everyone is far from it being a useless
discussion. Such an attitude sure is not the way to go to elicit more
feedback.
And it clearly gave you the feedback that most people regard holding
buffer locks across other nontrivial operations, in a potentially
unbounded number, as a fundamental problem.

  I don't think there's too much reviewers can do before you've provided a
  POC implementation of real value locking.
 
 I don't see what is functionally insufficient about the value locking
 that you've already seen.

I still think it's fundamentally unacceptable to hold buffer locks
across any additional complex operations. So yes, I think the current
state is fundamentally insufficient.
Note that the case of the existing uniqueness checking is already bad,
but it at least will never run any user-defined code in that context,
just HeapTupleSatisfies* and HOT code. So I don't think arguments of
the "we're already doing it in uniqueness checking" ilk have much merit.

 If you're  still of the opinion that it is necessary to hold value locks of 
 some
 form on earlier unique indexes, as you wait maybe for hours on some
 conflicting xid, then I still disagree with you for reasons recently
 re-stated [1].

I guess you're referring to:

On 2013-12-23 14:59:31 -0800, Peter Geoghegan wrote:
 Holding value locks for more than an instant doesn't make sense. The
 reason is simple: when upserting, we're tacitly only really looking
 for violations on one particular unique index. We just lock them all
 at once because the implementation doesn't know *which* unique index.
 So in actuality, it's really no different from existing
 potential-violation handling for unique indexes, except we have to do
 some extra work in addition to the usual restart from scratch stuff
 (iff we have multiple unique indexes).

I think the point here really is that you assume that we're always
only looking for conflicts with one unique index. If that's all we want
to support - sure, only the keys in that index need to be locked.
I don't think that's necessarily a given, especially when you just want
to look at the conflict in detail, without using a subtransaction.

 You never stated a reason why you thought it was
 necessary. If you have one now, please share it. Note that I release
 all value locks before row locking too, which is necessary because to
 do any less will cause unprincipled deadlocking, as we've seen.

I can't sensibly comment upon that right now, I'd need to read more code
to understand what you're doing there.

 Other than that, I have no idea what your continued objection to my
 design would be once the buffer level exclusive locks are replaced
 with page level heavyweight locks across complex (though brief)
 operations

Well, you haven't delivered that part yet, that's pretty much my point,
no?
I don't think you can easily do this by just additionally taking a new
kind of heavyweight lock in the new codepaths - that will still allow
deadlocks with the old codepaths taking only lwlocks. So this is a
nontrivial sub-project which very well might influence whether the
approach is deemed acceptable or not.

 (I guess you might not like the visibility stuff or the
 syntax, but that isn't what you're talking about here).

I don't 

Re: [HACKERS] [BUG FIX] Version number expressed in octal form by mistake

2013-12-25 Thread Oleg Bartunov
Yes, we got a temporary licence key from them and will provide a full report.

On Wed, Dec 25, 2013 at 4:38 PM, Joel Jacobson j...@trustly.com wrote:
 As reported by Andrey Karpov in his article
 http://www.viva64.com/en/b/0227/, the version number is expressed in
 octal form: 070100 should be changed to 70100.

 Attached patch fixes the reported issue.







Re: [HACKERS] [PATCH] Negative Transition Aggregate Functions (WIP)

2013-12-25 Thread Erik Rijkers
On Wed, December 25, 2013 14:49, David Rowley wrote:

 [ inverse_transition_functions_v1.5.patch.gz ]

I ran into the following problem which is, I think, NOT a problem with your 
patch but with my setup.  Still, if anyone can
enlighten me on its cause I'd be thankful (it shows up every now and then for 
me).

I have a 200 GB dev database running under 9.4devel  that I thought I could 
now, for test purposes, compile a patched
postgres binary for (i.e.: a HEAD + inverse_transition_functions_v1.5.patch.gz 
binary), so as to avoid an initdb and use
the existing 200 GB data.

I know the patch is valid, because a separately built, newly initdb-ed instance 
runs the below statement without fail.

But running it against the existing 200 GB dev database I get:

$ echo "\\set VERBOSITY verbose
SELECT SUM(n::integer) OVER (ORDER BY n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
FROM generate_series(1, 100) g(n)" | psql
ERROR:  XX000: cache lookup failed for type 0
LOCATION:  get_typlenbyval, lsyscache.c:1895
Time: 2.752 ms

Is there a way I can use the existing database and avoid both initdb and this 
error?

Thanks,

Erik Rijkers


See also:
[1]
http://www.postgresql.org/message-id/flat/e5f4c5a18cab7a4da23080de9ce8158603e67...@eaubrmw001.eapac.ericsson.se#e5f4c5a18cab7a4da23080de9ce8158603e67...@eaubrmw001.eapac.ericsson.se


Re: [HACKERS] [PATCH] Negative Transition Aggregate Functions (WIP)

2013-12-25 Thread Tom Lane
Erik Rijkers e...@xs4all.nl writes:
 I have a 200 GB dev database running under 9.4devel  that I thought I could 
 now, for test purposes, compile a patched
 postgres binary for (i.e.: a HEAD + 
 inverse_transition_functions_v1.5.patch.gz binary), so as to avoid an initdb 
 and use
 the existing 200 GB data.

That's not going to work for any patch that touches the initial contents
of the system catalogs.

 Is there a way I can use the existing database and avoid both initdb and this 
 error?

Use pg_upgrade to move the data into a newly initdb'd directory.  Note
you will need both unpatched and patched executables for this.

regards, tom lane




Re: [HACKERS] INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

2013-12-25 Thread Peter Geoghegan
On Wed, Dec 25, 2013 at 6:25 AM, Andres Freund and...@2ndquadrant.com wrote:
 So? I still fear that your approach will end up being way too
 complicated and full of layering violations. I didn't say it's a no-go
 (not that I have veto powers, even if I'd consider it one).

Apart from not resulting in unprincipled deadlocking, it respects the
AM abstraction more than all other approaches outlined. Inserting
tuples as value locks just isn't a great approach, even if you ignore
the fact that you must come up with a whole new way to release your
value locks without ending your xact.

 And yes, I still think that promise tuples might be a better solution
 regardless of the issues you mentioned, but you know what? That doesn't
 matter. My thinking it's the better approach is primarily based on gut
 feeling, and I clearly haven't voiced clear enough reasons to convince
 you. So going with your own, possibly more substantiated, gut feeling
 is perfectly alright. Unless I go ahead and write a POC of my own, at
 least ;)

My position is not based on a gut feeling. It is based on carefully
considering the interactions of the constituent parts, plus the
experience of actually building a working prototype.

 Whoa? What? Not convincing everyone is far from it being a useless
 discussion. Such an attitude sure is not the way to go to elicit more
 feedback.
 And it clearly gave you the feedback that most people regard holding
 buffer locks across other nontrivial operations, in a potentially
 unbounded number, as a fundamental problem.

Uh, I knew that it was a problem all along. While I explored ways of
ameliorating the problem, I specifically stated that we should discuss
the subsystems' interactions/design, which you were far too quick to
dismiss. The overall design is far more pertinent than one specific
mechanism. While I certainly welcome your participation, if you want
to be an effective reviewer I suggest examining your own attitude.
Everyone wants this feature.

 I don't see what is functionally insufficient about the value locking
 that you've already seen.

 I still think it's fundamentally unacceptable to hold buffer locks
 across any additional complex operations. So yes, I think the current
 state is fundamentally insufficient.

I said *functionally* insufficient. Buffer locks demonstrably do a
perfectly fine job of value locking. Of course the current state is
insufficient, but I'm talking about design here.

 Holding value locks for more than an instant doesn't make sense. The
 reason is simple: when upserting, we're tacitly only really looking
 for violations on one particular unique index. We just lock them all
 at once because the implementation doesn't know *which* unique index.
 So in actuality, it's really no different from existing
 potential-violation handling for unique indexes, except we have to do
 some extra work in addition to the usual restart from scratch stuff
 (iff we have multiple unique indexes).

 I think the point here really is that you assume that we're always
 only looking for conflicts with one unique index. If that's all we want
 to support - sure, only the keys in that index need to be locked.
 I don't think that's necessarily a given, especially when you just want
 to look at the conflict in detail, without using a subtransaction.

Why would I not assume that? It's perfectly obvious from the syntax
that you can't do much if you don't know ahead of time where the
conflict might be. It's just like the MySQL feature - the user had
better know where it might be. Now, at least with my syntax as a user
you have some capacity to recover if you consider ahead of time that
you might get it wrong. But clearly rejected, and not conflicting rows
are projected, and multiple conflicts per row are not accounted for.
We lock on the first conflict, which with idiomatic usage will be the
only possible conflict.

That isn't the only reason why value locks don't need to be held for
more than an instant. It's just the most obvious one.

Incidentally, there are many implementation reasons why true value
locking, where value locks are held indefinitely, is extremely
difficult. When I referred to an SLRU, I was just exploring the idea
of making value locks (still only held for an instant) more granular.
On closer examination it looks to me like premature optimization,
though.

 You never stated a reason why you thought it was
 necessary. If you have one now, please share it. Note that I release
 all value locks before row locking too, which is necessary because to
 do any less will cause unprincipled deadlocking, as we've seen.

 I can't sensibly comment upon that right now, I'd need to read more code
 to understand what you're doing there.

You could have looked at it back in September, if only you'd given
these interactions the up-front consideration that they warranted.
Nothing has changed there at all.

 Well, you haven't delivered that part yet, that's pretty much my point,
 no?
 I don't