Re: [HACKERS] Commitfest problems

2014-12-15 Thread Craig Ringer
On 12/16/2014 03:08 AM, Robert Haas wrote:
> On Sun, Dec 14, 2014 at 2:24 PM, Mark Cave-Ayland
>  wrote:
>> However if it were posted as part of patchset with a subject of "[PATCH]
>> gram.y: add EXCLUDED pseudo-alias to bison grammar" then this may pique
>> my interest enough to review the changes to the grammar rules, with the
>> barrier for entry reduced to understanding just the PostgreSQL-specific
>> parts.
> 
> Meh.  Such a patch couldn't possibly compile or work without modifying
> other parts of the tree.  And I'm emphatically opposed to splitting a
> commit that would have taken the tree from one working state to
> another working state into a series of patches that would have taken
> it from a working state through various non-working states and
> eventually another working state.

Absolutely. I think that'd be awful.

I'm a fan of fairly fine-grained patch series, but only where each patch
makes some sense by itself and doesn't break the tree. Splitting stuff
up purely for the sake of it, or having patches that are non-working
intermediate states, is painful and pointless.

Occasionally it can be worth having a patch that introduces intermediate
compatibility code that's removed later, especially when it's small or
very self-contained. Most of the time I'm inclined to think that's not
worth it, as it can create annoying noise in the history.

I'm only advocating it where the individual parts make a reasonable
amount of sense standing alone, and actually work by themselves.

Anyway, I'm not contributing much to the real topic here, which is
commitfest process issues. It's probably getting toward time for me to
butt out.

> Furthermore, if you really want to
> review that specific part of the patch, just look for what changed in
> gram.y, perhaps by applying the patch and doing git diff
> src/backend/parser/gram.y.  This is really not hard.

Yup ... and with 'git am' and then 'git diff' on subtrees, it's pretty
convenient.

It's even easier when someone pushes work to GitHub working trees, so
you don't have to mess about with applying the changes, but that's a
trivial and unimportant convenience.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] TABLESAMPLE patch

2014-12-15 Thread Jaime Casanova
On Wed, Dec 10, 2014 at 6:24 PM, Petr Jelinek  wrote:
> Hello,
>
> Attached is a basic implementation of the TABLESAMPLE clause. It's a SQL
> standard clause, and a couple of people tried to submit it before, so I think
> I don't need to explain at length what it does - basically it returns a
> "random" sample of a table using a specified sampling method.
>

Tablesample, yay!

Sadly, when the jsonb functions patch was committed a few OIDs were
used, so you should update the ones you are using, at least to make
the patch easier to test.

The test added for this failed; the diff is attached. I didn't look
into why it failed.

Finally, I created a view with a TABLESAMPLE clause. I used the view
and the tablesample worked, but after a dump and restore the
TABLESAMPLE clause went away... actually, pg_get_viewdef() didn't see
it at all.

I will look at the patch more closely tomorrow when my brain wakes up ;)

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157


regression.diffs
Description: Binary data



Re: [HACKERS] Commitfest problems

2014-12-15 Thread David Rowley
On 16 December 2014 at 18:18, Josh Berkus  wrote:
>
> > Man. You're equating stuff that's not the same. You didn't get your way
> > (and I'm tentatively on your side on that one) and take that to imply
> > that we don't want more reviewers.
>
> During that thread a couple people said that novice reviewers added no
> value to the review process, and nobody argued with them then.  I've
> also been told this to my face at pgCon, and when I've tried organizing
> patch review events.  I got the message, which is why I stopped trying
> to get new reviewers.
>
> And frankly: if we're opposed to giving credit to patch reviewers, we're
> opposed to having them.
>
>
>
I'd just like to add something which might be flying below the radar of
more senior people. There are people out there (like me) working on
PostgreSQL more for the challenge and perhaps the love of the product, who
make absolutely zero money out of it. For these people, getting credit where
it's due is very important. I'm pretty happy with this at the moment, and I
can't imagine any situation where not crediting reviewers would be
beneficial to anyone.

Regards

David Rowley


Re: [HACKERS] pg_basebackup vs. Windows and tablespaces

2014-12-15 Thread Amit Kapila
On Sun, Dec 14, 2014 at 11:54 AM, Amit Kapila 
wrote:
> On Sat, Dec 13, 2014 at 10:48 PM, Tom Lane  wrote:
> > Andrew Dunstan  writes:
> > > On 11/20/2014 02:27 AM, Amit Kapila wrote:
> > >> Now the part where I would like to receive feedback before revising the
> > >> patch is on the coding style.  It seems to me from Tom's comments that
> > >> he is not happy with the code; now I am not sure which part of the patch
> > >> he thinks needs change.  Tom, if possible, could you be slightly more
> > >> specific about your concern w.r.t. the code?
> >
> > > In view of the request above for comments from Tom, I have moved this
> > > back to "Needs Review".

I have updated the patch and handled the review comments as below:

1. Changed the name of the file containing tablespace path information to
tablespace_map, and updated the references to the file name throughout the patch.

2. I have not added the tablespace name to the tablespace_map file, as I am
not sure how important it is for readability, what format we should use, and,
moreover, not many people have asked for it.  However, if you feel it is
important to have it for this patch, I will propose a new format.

3. Made the code generic (for all platforms) so that a tablespace_map
file will be created to restore tablespaces from a base backup.

> > Sorry, I was not paying very close attention to this thread and missed
> > the request for comments.  A few such:
> >
> > 1. The patch is completely naive about what might be in the symlink
> > path string; eg embedded spaces in the path would break it.  On at
> > least some platforms, newlines could be in the path as well.  I'm not
> > sure about how to guard against this while maintaining human readability
> > of the file.
> >
>
> I will look into this and see what best can be done.
>

I have chosen #3 (make pg_basebackup check for and fail on symlinks
containing characters (currently newline only) it can't handle) from the
different options suggested by Tom.  This keeps the format the same as
before, and human-readable.
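A minimal standalone sketch of that check follows. This is not the actual
patch code; the two-column "oid path" line format and the function names are
assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical illustration: a line-oriented tablespace_map file cannot
 * represent a symlink target containing a newline, so such a path must
 * cause the backup to fail rather than produce a corrupt map file.
 */
static bool
path_is_mappable(const char *path)
{
    return strchr(path, '\n') == NULL;
}

/* Emit one "oid path" line of the map, or fail on an unmappable path. */
static int
emit_map_line(FILE *out, unsigned int tbsoid, const char *path)
{
    if (!path_is_mappable(path))
    {
        fprintf(stderr,
                "tablespace path contains newline, aborting backup\n");
        return -1;
    }
    fprintf(out, "%u %s\n", tbsoid, path);
    return 0;
}
```

A caller would loop over the pg_tblspc entries and abort the whole backup
on the first -1, which matches the fail-early behaviour described above.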

> > 2. There seems to be more going on here than what is advertised, eg
> > why do we need to add an option to the BASE_BACKUP command
>
> This is to ensure that the symlink file is generated only for the tar
> format; the server is not aware of whether the backup is generated in
> plain format or tar format.  We don't want to do it for the plain format
> because there the client (pg_basebackup) can update the symlinks via the
> -T option, and backing up the symlink file during that operation can lead
> to spurious symlinks after archive recovery.  I gave the reason why we
> want to do this only for the tar format in my initial mail.
>
> > (and if
> > we do need it, doesn't it need to be documented in protocol.sgml)?
>
> I shall take care of it in next version of patch.
>

Added the description to protocol.sgml.

> > And why is the RelationCacheInitFileRemove call relocated?
> >
>
> Because it assumes that the tablespace directory pg_tblspc is in
> place, and it tries to remove the files by reading the pg_tblspc
> directory as well.  Since we now set up the symlinks in pg_tblspc
> after reading the symlink file, we should remove the relcache init
> file only once the symlinks are set up in the pg_tblspc directory.
>
> > 3. Not terribly happy with the changes made to the API of
> > do_pg_start_backup, eg having to be able to parse "DIR *" in its
> > arguments seems like a lot of #include creep.  xlog.h is pretty
> > central so I'm not happy about plastering more #includes in it.
> >
>
> The reason for adding a new include in xlog.c is the use of the
> tablespaceinfo structure, which I have now kept in basebackup.h.
>
> The reason I have done it this way is that do_pg_start_backup has
> some functionality common to both non-exclusive and exclusive backups,
> and for this feature we have to do some work common to both: generate
> the symlink label file for non-exclusive backups and write the symlink
> label file for exclusive backups using that information.  Doing it this
> way seems right to me, as we are already doing something similar for
> the backup label file.
>
> Another possible way would be to write a new function in xlogutils.c
> to do the symlink label work and then use it in xlog.c; that way we
> could avoid any new include in xlog.c.  However, for this we would need
> an include in xlogutils.c to make it aware of the tablespaceinfo
> structure.
>

Are you okay with the alternative I have suggested to avoid the new
include in xlog.c, or do you feel the alternative would make the code
worse than the current patch?


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


extend_basebackup_to_include_symlink_v5.patch
Description: Binary data



Re: [HACKERS] moving Orafce from pgFoundry - pgFoundry management

2014-12-15 Thread Pavel Stehule
2014-12-16 6:15 GMT+01:00 Marc Fournier :
>
>
> Project disabled on pgfoundry … do you want me to remove the mailing lists
> and all those archives?   Or leave them there … ?
>

Please drop it; it is almost only spam.

Pavel


>
>
> > On Dec 13, 2014, at 9:18 AM, David Fetter  wrote:
> >
> > On Sat, Dec 13, 2014 at 11:19:08AM +0100, Pavel Stehule wrote:
> >> Hi
> >>
> >> The Orafce package on pgFoundry is obsolete; we migrated to GitHub a few
> >> years ago.  Please, can somebody update the description to mention this
> >> migration, or drop the project there?
> >
> > Pavel,
> >
> > I've removed pgFoundry references from the IRC documentation bot and
> > replaced them with references to github.
> >
> > Marc,
> >
> > Could you please remove the Orafce project from pgFoundry?
> >
> > Cheers,
> > David.
> > --
> > David Fetter  http://fetter.org/
> > Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
> > Skype: davidfetter  XMPP: david.fet...@gmail.com
> >
> > Remember to vote!
> > Consider donating to Postgres: http://www.postgresql.org/about/donate
>
>


Re: [HACKERS] moving from contrib to bin

2014-12-15 Thread Michael Paquier
On Mon, Dec 15, 2014 at 11:53 AM, Michael Paquier
 wrote:
> On Mon, Dec 15, 2014 at 11:45 AM, Peter Eisentraut  wrote:
>> On 12/14/14 9:08 PM, Michael Paquier wrote:
>>>> - no Windows build system support yet
>>> Do you need some help here?
>> Sure.
Peter,

Attached is a patch that can be applied on your existing set to
complete support on Windows for the move to src/bin. For MinGW there
was nothing to do, but on MSVC we have to declare frontend utilities
in the build scripts, which is roughly what this patch does. There are
also some updates needed for clean.bat and the regression test
script. I did the following checks, btw:
- Checked the builds worked correctly
- Checked that file version information was generated when building
with either MinGW or MSVC
- Checked that clean.bat ran correctly
- Checked that vcregress upgradecheck was working correctly
- Checked as well the other regression tests, but that's minor here...
I also noticed when looking at your patches that we actually forgot to
ignore the *.bat scripts generated by pg_upgrade's regression tests when
running vcregress upgradecheck with MSVC, but that's a different issue,
which I reported in a new thread. It is included in my patch for
completeness, btw.
Regards,
-- 
Michael


20141216_srcbin_win.patch
Description: Binary data



[HACKERS] NUMERIC private methods?

2014-12-15 Thread David Fetter
Folks,

While noodling with some weighted statistics, I noticed I was
having to jump through a lot of hoops because of all the private
methods in numeric.c, especially NumericVar.  Would there be some
major objection to exposing NumericVar as an opaque blob?

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




[HACKERS] analyze_new_cluster.bat and delete_old_cluster.bat not ignored with vcregress upgradecheck

2014-12-15 Thread Michael Paquier
Hi all,

As mentioned in $subject, I noticed that those automatically generated
files are not ignored in the tree when running vcregress on Windows,
though we do ignore their .sh versions. I think it would be good
to back-patch the attached patch to prevent the inclusion of those
files in the future.
Regards,
-- 
Michael
diff --git a/contrib/pg_upgrade/.gitignore b/contrib/pg_upgrade/.gitignore
index 9555f54..d24ec60 100644
--- a/contrib/pg_upgrade/.gitignore
+++ b/contrib/pg_upgrade/.gitignore
@@ -1,6 +1,8 @@
 /pg_upgrade
 # Generated by test suite
-analyze_new_cluster.sh
-delete_old_cluster.sh
+/analyze_new_cluster.sh
+/delete_old_cluster.sh
+/analyze_new_cluster.bat
+/delete_old_cluster.bat
 /log/
 /tmp_check/



Re: [HACKERS] Commitfest problems

2014-12-15 Thread Craig Ringer

On 16 Dec 2014 7:43 am, Andres Freund  wrote:
>
> On 2014-12-15 21:18:40 -0800, Josh Berkus wrote: 
> > On 12/15/2014 07:34 PM, Andres Freund wrote: 
> > > On 2014-12-15 16:14:30 -0800, Josh Berkus wrote: 
> > >> Read the thread on this list where I suggested crediting reviewers in 
> > >> the release notes. 
> > > 
> > > Man. You're equating stuff that's not the same. You didn't get your way 
> > > (and I'm tentatively on your side on that one) and take that to imply
> > > that we don't want more reviewers. 
> > 
> > During that thread a couple people said that novice reviewers added no 
> > value to the review process, and nobody argued with them then.  I've 
> > also been told this to my face at pgCon, and when I've tried organizing 
> > patch review events.  I got the message, which is why I stopped trying 
> > to get new reviewers. 
>
> I think there's a very large difference in what novice reviewers do. A 
> schematic 'in context format, compiles and survives make check' type of 
> test indeed doesn't seem to be particularly useful to me. A novice 
> reviewer who tries out the feature by reading the docs, noticing
> shortcomings there along the way, and then verifies that the feature works
> beyond the two regression tests added is something entirely
> different. Novice reviewers *can* review the code quality as well - it's 
> just that many we had didn't. 
>
> I think the big problem is that a good review takes time - and that's 
> what many of the novice reviewers I've observed weren't really aware of. 


The review docs also over-emphasise the mechanical parts of review, around
make check etc., which may make it seem like that alone is quite useful,
when really it's just the beginning.
>
> Greetings, 
>
> Andres Freund 
>
> -- 
> Andres Freund    http://www.2ndQuadrant.com/ 
> PostgreSQL Development, 24x7 Support, Training & Services 



[HACKERS] Some modes of vcregress not mentioned in MSVC scripts

2014-12-15 Thread Michael Paquier
Hi all,

While looking at the Windows docs and the MSVC scripts, I noticed that
the following run modes of vcregress.pl are mentioned neither in
the Windows installation docs nor in the help output of vcregress.pl:
- ecpgcheck
- isolationcheck
- upgradecheck
Attached is a patch fixing that.
Regards,
-- 
Michael
diff --git a/doc/src/sgml/install-windows.sgml b/doc/src/sgml/install-windows.sgml
index 71a5c2e..9b77648 100644
--- a/doc/src/sgml/install-windows.sgml
+++ b/doc/src/sgml/install-windows.sgml
@@ -436,6 +436,9 @@ $ENV{CONFIG}="Debug";
 vcregress installcheck
 vcregress plcheck
 vcregress contribcheck
+vcregress ecpgcheck
+vcregress isolationcheck
+vcregress upgradecheck
 
 
To change the schedule used (default is parallel), append it to the
diff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl
index b84f70d..699c286 100644
--- a/src/tools/msvc/vcregress.pl
+++ b/src/tools/msvc/vcregress.pl
@@ -405,6 +405,6 @@ sub usage
 {
 	print STDERR
 	  "Usage: vcregress.pl ",
-	  " [schedule]\n";
+	  " [schedule]\n";
 	exit(1);
 }



Re: [HACKERS] Postgres TR for missing chunk

2014-12-15 Thread M Tarkeshwar Rao
Hello Friends,

Can you please tell me how I can track which bugs are fixed in which
release, and when they will be fixed? I would like to track the analysis and
status of bugs raised against Postgres. Can I get this information?

For the last few days we have been struggling with the following issue:

1.   We found that some operations on this table, such as SELECT or
TRUNCATE, fail, and a more specific error is thrown as per below:

ERROR:  missing chunk number 0 for toast value 54787 in pg_toast_2619

** Error **

We tried all the suggestions we found online, but were not able to resolve
it. How can we avoid this issue?

Can you please tell us when the following bugs will be resolved?

These are known PostgreSQL bugs; details are below.

BUG #9187: corrupt toast tables

http://www.postgresql.org/message-id/30154.1392153...@sss.pgh.pa.us
http://www.postgresql.org/message-id/cafj8praufpttn5+ohfqpbcd1jzkersck51uakhcwd8nt4os...@mail.gmail.com
http://www.postgresql.org/message-id/20140211162408.2713.81...@wrigleys.postgresql.org

BUG #7819: missing chunk number 0 for toast value 1235919 in pg_toast_35328

http://www.postgresql.org/message-id/C62EC84B2D3CF847899CCF4B589CCF70B20AA08F@BBMBX.backbone.local

Thanks !!
Tarkeshwar


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 21:18:40 -0800, Josh Berkus wrote:
> On 12/15/2014 07:34 PM, Andres Freund wrote:
> > On 2014-12-15 16:14:30 -0800, Josh Berkus wrote:
> >> Read the thread on this list where I suggested crediting reviewers in
> >> the release notes.
> > 
> > Man. You're equating stuff that's not the same. You didn't get your way
> > (and I'm tentatively on your side on that one) and take that to imply
> > that we don't want more reviewers.
> 
> During that thread a couple people said that novice reviewers added no
> value to the review process, and nobody argued with them then.  I've
> also been told this to my face at pgCon, and when I've tried organizing
> patch review events.  I got the message, which is why I stopped trying
> to get new reviewers.

I think there's a very large difference in what novice reviewers do. A
schematic 'in context format, compiles and survives make check' type of
test indeed doesn't seem particularly useful to me. A novice
reviewer who tries out the feature by reading the docs, noticing
shortcomings there along the way, and then verifies that the feature works
beyond the two regression tests added is something entirely
different. Novice reviewers *can* review the code quality as well - it's
just that many we had didn't.

I think the big problem is that a good review takes time - and that's
what many of the novice reviewers I've observed weren't really aware of.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-15 Thread Kyotaro HORIGUCHI
Hello. I have a favor to ask of you committers.

I confirmed that this issue and the other one have been fixed in the
repository; thank you.

Could you please tell me when the next release, 9.2.10, is expected?

The bugs were found in a system under development which uses PG 9.2,
and we would like to adopt the new release.
regards,

At Thu, 11 Dec 2014 20:37:34 -0500, Tom Lane  wrote in 
<17534.1418348...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > So I replaced the get_parse_rowmark() with get_plan_rowmark() as
> > the attached patch and the problem disappeared.
> 
> Yeah, this is clearly a thinko: really, nothing in the planner should
> be using get_parse_rowmark().  I looked around for other errors of the
> same type and found that postgresGetForeignPlan() is also using
> get_parse_rowmark().  While that's harmless at the moment because we
> don't support foreign tables as children, it's still wrong.  Will
> fix that too.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





Re: [HACKERS] Commitfest problems

2014-12-15 Thread Josh Berkus
On 12/15/2014 07:34 PM, Andres Freund wrote:
> On 2014-12-15 16:14:30 -0800, Josh Berkus wrote:
>> Read the thread on this list where I suggested crediting reviewers in
>> the release notes.
> 
> Man. You're equating stuff that's not the same. You didn't get your way
> (and I'm tentatively on your side on that one) and take that to imply
> that we don't want more reviewers.

During that thread a couple people said that novice reviewers added no
value to the review process, and nobody argued with them then.  I've
also been told this to my face at pgCon, and when I've tried organizing
patch review events.  I got the message, which is why I stopped trying
to get new reviewers.

And frankly: if we're opposed to giving credit to patch reviewers, we're
opposed to having them.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Noah Misch
On Mon, Dec 15, 2014 at 03:29:19PM -0500, Andrew Dunstan wrote:
> On 12/15/2014 03:16 PM, Andres Freund wrote:
> >On 2014-12-15 11:52:35 -0800, Josh Berkus wrote:
> >>On 12/15/2014 11:27 AM, Robert Haas wrote:
> >>>I feel like we used to be better at encouraging people to participate
> >>>in the CF even if they were not experts, and to do the best they can
> >>>based on what they did know.  That was a helpful dynamic.  Sure, the
> >>>reviews weren't perfect, but more people helped, and reviewing some of
> >>>the patch well and some of it in a more cursory manner is way better
> >>>than reviewing none of it at all.
> >>Well, it was strongly expressed to me by a number of senior contributors
> >>on this list and at the developer meeting that inexpert reviews were not
> >>really wanted, needed or helpful.
> >I think that's pretty far away from what was said.
> >
> 
> 
> I welcome reviews at all levels, both as a developer and as a committer.
> 
> It is true that we are very short on reviewers with in depth knowledge and
> experience, and this is the real problem we have far more than any
> technological issues people might have.
> 
> But that doesn't mean we should be turning anyone away. We should not.

+1.  Some of the best reviews I've seen are ones where the reviewer expressed
doubts about the review's quality, so don't let such doubts keep you from
participating.  Every defect you catch saves a committer time; a review that
finds 3 of the 10 defects in a patch is still a big help.  Some patch
submissions waste the community's time, but it's almost impossible to waste
the community's time by posting a patch review.

Confusion may have arisen due to statements that we need more expert
reviewers, which is also true.  (When an expert writes a sufficiently-complex
patch, it's important that a second expert examine the patch at some point.)
If you're a novice reviewer, your reviews do help to solve that problem by
reducing the workload on expert reviewers.

Thanks,
nm




Re: [HACKERS] tracking commit timestamps

2014-12-15 Thread Alvaro Herrera
Noah Misch wrote:
> On Mon, Dec 15, 2014 at 12:12:10AM -0800, Michael Paquier wrote:

> > FWIW, I just tried that with MinGW-32 and I can see the error on Win7.
> > I also checked that changing "< now()" to "<= now()" fixed the
> > problem, so your assumption was right, Petr.
> 
> Committed, after fixing the alternate expected output.

Thanks.  I admit I don't understand what the issue is.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-15 Thread Ali Akbar
2014-12-16 10:47 GMT+07:00 Ali Akbar :
>
>
> 2014-12-16 6:27 GMT+07:00 Tomas Vondra :
>>
>> On 15.12.2014 22:35, Jeff Janes wrote:
>> > On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra > > > wrote:
>> >
>> > Hi,
>> >
>> > Attached is v2 of the patch lowering array_agg memory requirements.
>> > Hopefully it addresses the issues issues mentioned by TL in this
>> thread
>> > (not handling some of the callers appropriately etc.).
>> >
>> >
>> > Hi Tomas,
>> >
>> > When configured --with-libxml I get compilation errors:
>> >
>> > xml.c: In function 'xml_xpathobjtoxmlarray':
>> > xml.c:3684: error: too few arguments to function 'accumArrayResult'
>> > xml.c:3721: error: too few arguments to function 'accumArrayResult'
>> > xml.c: In function 'xpath':
>> > xml.c:3933: error: too few arguments to function 'initArrayResult'
>> > xml.c:3936: error: too few arguments to function 'makeArrayResult'
>> >
>> > And when configured --with-perl, I get:
>> >
>> > plperl.c: In function 'array_to_datum_internal':
>> > plperl.c:1196: error: too few arguments to function 'accumArrayResult'
>> > plperl.c: In function 'plperl_array_to_datum':
>> > plperl.c:1223: error: too few arguments to function 'initArrayResult'
>> >
>> > Cheers,
>>
>> Thanks, attached is a version that fixes this.
>>
>
Just took a quick look at the patch.

The patch does not implement the option of not creating a new context in
initArrayResultArr. I think we should implement it there too, for
consistency (and to prevent future problems).
>

Looking at the modifications to the accumArrayResult* functions, I'm not
really comfortable with the following:

   1. Code that calls accumArrayResult* after explicitly calling
   initArrayResult* must always pass subcontext, but it has no effect.
   2. All existing code that calls accumArrayResult must be changed.

Just an idea: why don't we minimize the API change like this:

   1. Add the bool subcontext parameter only to the initArrayResult*
   functions, not to accumArrayResult*.
   2. Code that does not want to create a subcontext must call
   initArrayResult* explicitly.

Other code that calls accumArrayResult directly then only needs a change in
the call to makeArrayResult* (with the release=true parameter). In places
where we don't want to create a subcontext (as in array_agg_transfn), modify
the code to use initArrayResult* before calling accumArrayResult*.

What do you think?
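The proposed calling convention could be sketched with a standalone mock.
This is emphatically not the real PostgreSQL API - the types, names, and the
int-only payload are simplifications invented here purely to show the shape
of the split: only init takes the subcontext flag, and the accumulate
function keeps a legacy default when called without prior init.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for ArrayBuildState; not the real structure. */
typedef struct MockArrayBuildState
{
    bool    own_context;    /* stands in for "has a private memory context" */
    int     nelems;
    int     capacity;
    int    *elems;
} MockArrayBuildState;

/* Only init decides about the subcontext, per the proposal. */
static MockArrayBuildState *
mock_initArrayResult(bool subcontext)
{
    MockArrayBuildState *st = malloc(sizeof *st);

    st->own_context = subcontext;   /* would be AllocSetContextCreate() */
    st->nelems = 0;
    st->capacity = 4;
    st->elems = malloc(st->capacity * sizeof(int));
    return st;
}

/* Keeps the old signature; creates state with a subcontext if none given. */
static MockArrayBuildState *
mock_accumArrayResult(MockArrayBuildState *st, int value)
{
    if (st == NULL)
        st = mock_initArrayResult(true);    /* legacy default behaviour */
    if (st->nelems == st->capacity)
    {
        st->capacity *= 2;
        st->elems = realloc(st->elems, st->capacity * sizeof(int));
    }
    st->elems[st->nelems++] = value;
    return st;
}
```

An array_agg_transfn-like caller that wants to avoid the per-group
subcontext would call mock_initArrayResult(false) first and then accumulate
as before; all other existing callers would compile unchanged.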

Regards,
--
Ali Akbar


Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-15 Thread Ali Akbar
2014-12-16 6:27 GMT+07:00 Tomas Vondra :
>
> On 15.12.2014 22:35, Jeff Janes wrote:
> > On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra  > > wrote:
> >
> > Hi,
> >
> > Attached is v2 of the patch lowering array_agg memory requirements.
> > Hopefully it addresses the issues issues mentioned by TL in this
> thread
> > (not handling some of the callers appropriately etc.).
> >
> >
> > Hi Tomas,
> >
> > When configured --with-libxml I get compilation errors:
> >
> > xml.c: In function 'xml_xpathobjtoxmlarray':
> > xml.c:3684: error: too few arguments to function 'accumArrayResult'
> > xml.c:3721: error: too few arguments to function 'accumArrayResult'
> > xml.c: In function 'xpath':
> > xml.c:3933: error: too few arguments to function 'initArrayResult'
> > xml.c:3936: error: too few arguments to function 'makeArrayResult'
> >
> > And when configured --with-perl, I get:
> >
> > plperl.c: In function 'array_to_datum_internal':
> > plperl.c:1196: error: too few arguments to function 'accumArrayResult'
> > plperl.c: In function 'plperl_array_to_datum':
> > plperl.c:1223: error: too few arguments to function 'initArrayResult'
> >
> > Cheers,
>
> Thanks, attached is a version that fixes this.
>

Just took a quick look at the patch.

The patch does not implement the option of not creating a new context in
initArrayResultArr. I think we should implement it there too, for
consistency (and to prevent future problems).

Regards,
-- 
Ali Akbar


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 16:14:30 -0800, Josh Berkus wrote:
> Read the thread on this list where I suggested crediting reviewers in
> the release notes.

Man. You're equating stuff that's not the same. You didn't get your way
(and I'm tentatively on your side on that one) and take that to imply
that we don't want more reviewers.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Kouhei Kaigai
Hanada-san,

One other question from my side:
How does postgres_fdw solve the varno/varattno mapping when it
replaces a relation join with a foreign scan?

Let me explain the issue using an example. If the SELECT has a target
list that references the 2nd column of the inner relation and the 2nd
column of the outer relation, how shall the varno/varattno of the
ForeignScan be assigned?
Unless the FDW driver sets fdw_ps_tlist, setrefs.c deals with this
ForeignScan as a usual relation scan, so the varno of each Var will
be a non-special varno (even though it may be shifted by rtoffset
in setrefs.c).
Then ExecEvalScalarVar() is invoked in the executor to evaluate the
value of the fetched tuple. At that time, which slot and attribute will
be referenced? The varattno of the Var node is left alone by setrefs.c,
so both the Var referencing the 2nd column of the inner relation and the
one referencing the 2nd column of the outer relation will reference the
2nd column of econtext->ecxt_scantuple; however, it is uncertain which
column of the foreign table is mapped to the 2nd column of ecxt_scantuple.
So the planner needs to be informed which underlying column is
mapped to which entry of the pseudo scan tlist.

Another expression of what I'm saying is:

  SELECT
ft_1.second_col X,   --> varno=1 / varattno=2
ft_2.second_col Y--> varno=2 / varattno=2
  FROM
ft_1 NATURAL JOIN ft_2;

When the FROM clause is replaced by a ForeignScan, the relevant varattno
also needs to be updated according to the underlying remote query.
If postgres_fdw runs the following remote query, X should have varattno=1
and Y should have varattno=2 on the pseudo scan tlist.
  remote: SELECT t_1.second_col, t_2.second_col
FROM t_1 NATURAL JOIN t_2;

You can inform the planner of this mapping using the fdw_ps_tlist field
of ForeignScan, if the FDW driver puts a list of TargetEntry nodes there.
In the above example, fdw_ps_tlist will have two elements, and both of
them hold Var nodes of the underlying foreign tables.

The patch that replaces a join with a foreign-/custom-scan adds the
functionality to fix up varno/varattno in these cases.

Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei 


> -Original Message-
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Kouhei Kaigai
> Sent: Tuesday, December 16, 2014 9:01 AM
> To: Robert Haas; Shigeru Hanada
> Cc: PostgreSQL-development
> Subject: Re: [HACKERS] Join push-down support for foreign tables
> 
> Hanada-san,
> 
> Thanks for proposing this great functionality people waited for.
> 
> > On Mon, Dec 15, 2014 at 3:40 AM, Shigeru Hanada
> > 
> > wrote:
> > > I'm working on $SUBJECT and would like to get comments about the
> > > design.  Attached patch is for the design below.
> >
> > I'm glad you are working on this.
> >
> > > 1. Join source relations
> > > As described above, postgres_fdw (and most of SQL-based FDWs) needs
> > > to check that 1) all foreign tables in the join belong to a server,
> > > and
> > > 2) all foreign tables have same checkAsUser.
> > > In addition to that, I add extra limitation that both inner/outer
> > > should be plain foreign tables, not a result of foreign join.  This
> > > limitation makes SQL generator simple.  Fundamentally it's possible
> > > to join even join relations, so N-way join is listed as enhancement
> > > item below.
> >
> > It seems pretty important to me that we have a way to push the entire
> > join nest down.  Being able to push down a 2-way join but not more
> > seems like quite a severe limitation.
> >
> As long as we don't care about simpleness/gracefulness of the remote query,
> what we need to do is not complicated. All the optimization jobs are
> responsibility of remote system.
> 
> Let me explain my thought:
> We have three cases to be considered; (1) a join between foreign tables
> that is the supported case, (2) a join either of relations is foreign join,
> and (3) a join both of relations are foreign joins.
> 
> In case of (1), remote query shall have the following form:
>   SELECT  FROM FT1 JOIN FT2 ON  WHERE 
> 
> In case of (2) or (3), because we already construct inner/outer join, it
> is not difficult to replace the FT1 or FT2 above by sub-query, like:
>   SELECT  FROM FT3 JOIN
> (SELECT  FROM FT1 JOIN FT2 ON  WHERE ) as FJ1
> ON  WHERE 
> 
> How about your thought?
> 
> 
> Let me comment on some other points at this moment:
> 
> * Enhancement in src/include/foreign/fdwapi.h
> 
> It seems to me GetForeignJoinPath_function and
> GetForeignJoinPlan_function are not used anywhere. Is it an oversight to
> remove definitions from your previous work, isn't it?
> Now ForeignJoinPath is added on set_join_pathlist_hook, but not callback
> of FdwRoutine.
> 
> 
> * Is ForeignJoinPath really needed?
> 
> I guess the reason why you added ForeignJoinPath is, to have the fields
> of inner_path/outer_path. If we want to have paths of underlying relations,
> a straightforward way for the concept (join relations is replaced by
> foreign-/custom-scan on a resu

Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-15 Thread Etsuro Fujita
(2014/12/16 2:59), Tom Lane wrote:
> Etsuro Fujita  writes:
>> (2014/12/13 1:17), Tom Lane wrote:
>>> We should
>>> probably also think about allowing FDWs to change these settings if
>>> they want to.
> 
>> This is not clear to me.  Maybe I'm missing something, but I think that
>> the FDW only needs to look at the original locking strength in
>> GetForeignPlan().  Please explain that in a little more detail.
> 
> Well, the point is that for postgres_fdw, we could consider using the
> same locked-row-identification methods as for local tables, ie CTID.
> Now admittedly this might be the only such case, but I'm not entirely
> convinced of that --- you could imagine using FDWs for many of the same
> use-cases that KaiGai-san has been proposing custom scans for, and
> in some of those cases CTIDs would be useful for row IDs.
> 
> We'd originally dismissed this on the argument that ROWMARK_REFERENCE
> is a cheaper implementation than CTID-based implementations for any
> FDW (since it avoids the possible need to fetch a remote row twice).
> I'm not sure I still believe that though.  Fetching all columns of all
> retrieved rows in order to avoid what might be zero or a small number of
> re-fetches is not obviously a win, especially not for FDWs that
> represent not-actually-remote resources.
> 
> So as long as we're revisiting this area, it might be worth thinking
> about letting an FDW have some say in which ROWMARK method is selected
> for its tables.

Understood.  So, to what extent should we consider such things in the
FDW improvement?  We've already started an independent infrastructure
for such things, ie, custom scans, IIUC.

Thank you for the explanation!

Best regards,
Etsuro Fujita




Re: [HACKERS] tracking commit timestamps

2014-12-15 Thread Noah Misch
On Mon, Dec 15, 2014 at 12:12:10AM -0800, Michael Paquier wrote:
> On Wed, Dec 10, 2014 at 6:50 PM, Noah Misch  wrote:
> > On Mon, Dec 08, 2014 at 02:23:39AM +0100, Petr Jelinek wrote:
> >> On 08/12/14 00:56, Noah Misch wrote:
> >> >The commit_ts test suite gives me the attached diff on a 32-bit MinGW 
> >> >build
> >> >running on 64-bit Windows Server 2003.  I have not checked other Windows
> >> >configurations; the suite does pass on GNU/Linux.
> >>
> >> Hmm I wonder if "< now()" needs to be changed to "<= now()" in those 
> >> queries
> >> to make them work correctly on that platform, I don't have a machine with 
> >> that
> >> environment handy right now, so I would appreciate if you could try that, 
> >> in
> >> case you don't have time for that, I will try to setup something later...
> >
> > I will try that, though perhaps not until next week.
> 
> FWIW, I just tried that with MinGW-32 and I can see the error on Win7.
> I also checked that changing "< now()" to "<= now()" fixed the
> problem, so your assumption was right, Petr.

Committed, after fixing the alternate expected output.
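
As a hedged illustration of the underlying pitfall, outside the database
and with timestamps simulated by epoch seconds: on a platform with coarse
timer granularity, two readings taken within the same tick are equal, so a
strict "<" comparison can fail where "<=" cannot.

```shell
# Two events recorded in the same clock tick get identical timestamps,
# so "commit_ts < now()" can be false even though the commit logically
# happened first; "commit_ts <= now()" always holds.
commit_ts=$(date +%s)
now=$commit_ts              # same tick: coarse-granularity clock
[ "$commit_ts" -le "$now" ] && echo "le: holds"
[ "$commit_ts" -lt "$now" ] || echo "lt: can fail"
```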




Re: [HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Tom Lane
Jim Nasby  writes:
> On 12/15/14, 1:39 PM, Christoph Berg wrote:
>> Well, if it's not interesting, let's just forget it. Sorry.

> At the risk of sticking my head in the lion's mouth... this is the kind of 
> response that deters people from contributing anything to the project, 
> including reviewing patches. A simple "thanks, but we feel it's already clear 
> enough that there can't be anywhere close to INT_MAX timezones" would have 
> sufficed.

Yeah, I need to apologize.  I was a bit on edge today due to the release
wrap (which you may have noticed wasn't going too smoothly), and should
not have responded like that.

Having said that, though, the submission wasn't carefully thought through
either.  That problem was either not-an-issue or a potential security bug,
and if the submitter hadn't taken the time to be sure which, reporting it
in a public forum wasn't the way to proceed.

I also remain curious as to what sort of tool would complain about this
particular code and not the N other nearly-identical binary-search loops
in the PG sources, most of which deal with data structures potentially
far larger than the timezone data ...
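
For reference, the overflow pattern such a tool would presumably flag is
the classic binary-search midpoint computation; a minimal sketch with
hypothetical values (bash integers are 64-bit signed here):

```shell
# "(lo + hi) / 2" can wrap around when lo + hi exceeds the integer
# range; "lo + (hi - lo) / 2" stays in range for any valid lo <= hi.
lo=$(( 1 << 62 ))
hi=$(( (1 << 62) + 2 ))
naive=$(( (lo + hi) / 2 ))      # lo + hi wraps negative here
safe=$(( lo + (hi - lo) / 2 ))  # correct midpoint
echo "naive: $naive"
echo "safe:  $safe"
```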

regards, tom lane




Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 10:26 AM, Michael Paquier
 wrote:
> In any case, I have a couple of comments about this patch as-is:
> - If the move to src/bin is done, let's update the MSVC scripts accordingly
> - contrib/pg_rewind/.gitignore should be cleaned up from its unnecessary 
> entries
> - README is incorrect, it is still mentioned for example that this
> code should be copied inside PostgreSQL code tree as contrib/pg_rewind
(Sorry, the previous email got sent too early...)
- Code is going to need a brush to clean up things of this type:
+ PG_9.4_201403261
+   printf("Report bugs to https://github.com/vmware/pg_rewind.\n";);
- Versioning should be done the Postgres way, i.e. not this:
+#define PG_REWIND_VERSION "0.1"
But a way similar to other utilities:
pg_rewind (PostgreSQL) 9.5devel
- Shouldn't we use $(SHELL) here at least?
+++ b/contrib/pg_rewind/launcher
@@ -0,0 +1,6 @@
+#!/bin/bash
+#
+# Normally, psql feeds the files in sql/ directory to psql, but we want to
+# run them as shell scripts instead.
+
+bash
I doubt that this would work for example with MinGW.
- There are a couple of TODO items which may be good to fix:
+*
+* TODO: This assumes that there are no timeline switches on the target
+* cluster after the fork.
+*/
and:
+   /*
+* TODO: move old file out of the way, if any. And perhaps create the
+* file with temporary name first and rename in place after it's done.
+*/
- Regression tests, which have good coverage btw, are written as
shell scripts. There is some initialization code that could be
refactored IMO, as it is duplicated with pg_upgrade. For
example, the listen_addresses initialization has checks for MinGW,
the PG* environment variables are unset, etc.
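
For illustration, the conventional banner asked for above looks like this
(a hedged sketch; in a real build the version string would come from the
build system's PG_VERSION rather than a literal):

```shell
# Postgres-style --version output: "<progname> (PostgreSQL) <version>",
# as opposed to a tool-private number such as "0.1".
progname=pg_rewind
pg_version=9.5devel          # normally substituted at build time
banner="$progname (PostgreSQL) $pg_version"
echo "$banner"
```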
Regards,
-- 
Michael




Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 9:32 AM, Michael Paquier
 wrote:
> On Sat, Dec 13, 2014 at 12:01 AM, Heikki Linnakangas
>  wrote:
>> On 12/12/2014 04:20 PM, Andres Freund wrote:
>>>
>>> Not sure if the copyright notices in the current form are actually ok?
>>
>>
>> Hmm. We do have such copyright notices in the source tree, but I know that
>> we're trying to avoid it in new code. They had to be there when the code
>> lived as a separate project, but now that I'm contributing this to
>> PostgreSQL proper, I can remove them if necessary.
> Yep, it is fine to remove those copyright notices and to keep only the
> references to PGDG when code is integrated in the tree.
In any case, I have a couple of comments about this patch as-is:
- If the move to src/bin is done, let's update the MSVC scripts accordingly
- contrib/pg_rewind/.gitignore should be cleaned up from its unnecessary entries
- README is out of date; for example, it still says that this
code should be copied into the PostgreSQL source tree as contrib/pg_rewind
- Code is going to need a brush to clean up things of this type:



-- 
Michael




Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Jim Nasby

On 12/7/14, 1:48 PM, John Gorman wrote:

This patch implements the first wiki/Todo Configuration Files item "Consider 
normalizing fractions in postgresql.conf, perhaps using '%'".


FWIW, I've reviewed this (should have read the thread first :/). It looks 
clean, passes make check and works as advertised. I also looked at what config 
options were set to be % and they make sense. Looking for .[0-9] in 
postgresql.conf, the only GUCs I saw where % didn't make sense were the two geqo 
GUCs Heikki mentioned.

One thing I don't like is the need to wrap a %-based SET in quotes, but we need 
to do that for all GUCs that include units, so presumably there's no good way 
around it.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Peter Geoghegan
On Mon, Dec 15, 2014 at 4:33 PM, Tom Lane  wrote:
> Jeff Janes  writes:
>> It seems like IGNORE is getting replaced by the preprocessor with something
>> else, but I don't know how to get my hands on the intermediate file after
>> the preprocessor has done its thing.
>
> Maybe IGNORE is defined as a macro in MinGW?
> Try s/IGNORE/IGNORE_P/g throughout the patch.

BTW, the gcc -E flag does this. So figure out what exact arguments
MinGW's gcc is passed in the ordinary course of compiling gram.c, and
prepend "-E" to the list of existing flags while manually executing
gcc -- that should let you know exactly what's happening here.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Peter Geoghegan
On Mon, Dec 15, 2014 at 4:22 PM, Jeff Janes  wrote:
> Also, in both Linux and MinGW under option 1 patch I get an OID conflict on
> OID 3261.

I'll take a pass at fixing this bitrot soon. I'll follow Tom's advice
about macro collisions on MinGW while I'm at it, since his explanation
seems plausible.

-- 
Peter Geoghegan




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Tom Lane
Jeff Janes  writes:
> It seems like IGNORE is getting replaced by the preprocessor with something
> else, but I don't know how to get my hands on the intermediate file after
> the preprocessor has done its thing.

Maybe IGNORE is defined as a macro in MinGW?
Try s/IGNORE/IGNORE_P/g throughout the patch.
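
A hedged sketch of that mechanical rename (the file name is hypothetical;
a word-boundary match keeps longer identifiers such as IGNORED intact):

```shell
# Rename the bare token IGNORE to IGNORE_P without touching words
# that merely contain it (GNU sed's \b word boundary).
printf 'IGNORE IGNORED IGNORE\n' > rename_demo.txt
sed 's/\bIGNORE\b/IGNORE_P/g' rename_demo.txt
```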

regards, tom lane




Re: [HACKERS] pg_rewind in contrib

2014-12-15 Thread Michael Paquier
On Sat, Dec 13, 2014 at 12:01 AM, Heikki Linnakangas
 wrote:
> On 12/12/2014 04:20 PM, Andres Freund wrote:
>>
>> Not sure if the copyright notices in the current form are actually ok?
>
>
> Hmm. We do have such copyright notices in the source tree, but I know that
> we're trying to avoid it in new code. They had to be there when the code
> lived as a separate project, but now that I'm contributing this to
> PostgreSQL proper, I can remove them if necessary.
Yep, it is fine to remove those copyright notices and to keep only the
references to PGDG when code is integrated in the tree.
-- 
Michael




[HACKERS] REVIEW: Track TRUNCATE via pgstat

2014-12-15 Thread Jim Nasby

https://commitfest.postgresql.org/action/patch_view?id=1661 (apologies for not 
replying to the thread; I can't find it in my inbox)

Patch applies and passes make check. Code formatting looks good.

The regression test partially tests this. It does not cover 2PC, nor does it 
test rolling back a subtransaction that contains a truncate. The latter 
actually doesn't work correctly.

The test also adds 2.5 seconds of forced pg_sleep. I think that's both bad and 
unnecessary. When I removed the sleeps I still saw times of less than 0.1 
seconds. Also, wait_for_trunc_test_stats() should display something if it times 
out; otherwise you'll have a test failure and won't have any indication why.

I've attached a patch that adds logging on timeout and contains a test case 
that demonstrates the rollback to savepoint bug.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
>From 973dbe11b796395641dd3947658508ad68aebda5 Mon Sep 17 00:00:00 2001
From: Jim Nasby 
Date: Mon, 15 Dec 2014 18:18:40 -0600
Subject: [PATCH] Show broken rollback case
Warn if stats update doesn't happen. Add test that shows
broken stats counts when rolling back a subtrans with a truncate.

---
 src/test/regress/sql/truncate.sql | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/src/test/regress/sql/truncate.sql 
b/src/test/regress/sql/truncate.sql
index c912345..f5a0e7a 100644
--- a/src/test/regress/sql/truncate.sql
+++ b/src/test/regress/sql/truncate.sql
@@ -251,6 +251,9 @@ begin
 perform pg_stat_clear_snapshot();
 
   end loop;
+  IF NOT updated THEN
+RAISE WARNING 'stats update never happened';
+  END IF;
 
   TRUNCATE prevstats;  -- what a pun
   INSERT INTO prevstats SELECT newstats.*;
@@ -311,5 +314,20 @@ COMMIT;
 SELECT pg_sleep(0.5);
 SELECT * FROM wait_for_trunc_test_stats();
 
+-- now to use a savepoint: this should only count 2 inserts and have 3
+-- live tuples after commit
+BEGIN;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+SAVEPOINT p1;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+TRUNCATE trunc_stats_test;
+INSERT INTO trunc_stats_test DEFAULT VALUES;
+ROLLBACK TO SAVEPOINT p1;
+COMMIT;
+
+SELECT * FROM wait_for_trunc_test_stats();
+
 DROP TABLE prevstats CASCADE;
 DROP TABLE trunc_stats_test;
-- 
2.1.2




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-12-15 Thread Jeff Janes
On Mon, Dec 8, 2014 at 8:16 PM, Peter Geoghegan  wrote:
>
> Attached revision, v1.6, slightly tweaks the ordering of per-statement
> trigger execution. The ordering is now explicitly documented (the html
> mirror has been updated:
>
> http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/on-conflict-docs/trigger-definition.html
> ).
>
> As always, there is a variant for each approach to value locking.
>
> This revision fixes bitrot that developed when the patchset was
> applied on master's tip, and also cleans up comments regarding how the
> parent insert carries auxiliary/child state through all stages of
> query processing. That should structure be clearer now, including how
> setrefs.c has the auxiliary/child ModifyTable use the same
> resultRelation as its parent.
>


If I build either option of the patch under MinGW, I get an error in the
grammar files related to the IGNORE reserved word.

$ (./configure --host=x86_64-w64-mingw32 --without-zlib && make  && make
check) > /dev/null

In file included from ../../../src/include/parser/gramparse.h:29:0,
 from gram.y:59:
../../../src/include/parser/gram.h:207:6: error: expected identifier before
numeric constant
In file included from gram.y:14366:0:

I don't get this problem on Linux.

The build chain seems to meet the specified minimum:

flex.exe 2.5.35
bison (GNU Bison) 2.4.2
This is perl, v5.8.8 built for msys-64int

It seems like IGNORE is getting replaced by the preprocessor with something
else, but I don't know how to get my hands on the intermediate file after
the preprocessor has done its thing.

Also, in both Linux and MinGW under option 1 patch I get an OID conflict on
OID 3261.

Cheers,

Jeff


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Josh Berkus
On 12/15/2014 12:05 PM, Peter Geoghegan wrote:
> On Mon, Dec 15, 2014 at 11:52 AM, Josh Berkus  wrote:
>> On 12/15/2014 11:27 AM, Robert Haas wrote:
>>> I feel like we used to be better at encouraging people to participate
>>> in the CF even if they were not experts, and to do the best they can
>>> based on what they did know.  That was a helpful dynamic.  Sure, the
>>> reviews weren't perfect, but more people helped, and reviewing some of
>>> the patch well and some of it in a more cursory manner is way better
>>> than reviewing none of it at all.
>>
>> Well, it was strongly expressed to me by a number of senior contributors
>> on this list and at the developer meeting that inexpert reviews were not
>> really wanted, needed or helpful.  As such, I stopped recruiting new
>> reviewers (and, for that matter, doing them myself).  I don't know if
>> the same goes for anyone else.
> 
> Really? I thought we were pretty consistent in encouraging new reviewers.
> 

Read the thread on this list where I suggested crediting reviewers in
the release notes.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] On partitioning

2014-12-15 Thread Amit Langote

Robert wrote:
> On Sun, Dec 14, 2014 at 9:12 PM, Amit Langote
>  wrote:
> > This means if a user puts arbitrary expressions in a partition definition, 
> > say,
> >
> > ... FOR VALUES  extract(month from current_date) TO extract(month from
> current_date + interval '3 months'),
> >
> > we make sure that those expressions are pre-computed to literal values.
> 
> I would expect that to fail, just as it would fail if you tried to
> build an index using a volatile expression.

Oops, wrong example, sorry. What about the case of an otherwise-good expression?

Thanks,
Amit






Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Kouhei Kaigai
Hanada-san,

Thanks for proposing this great functionality that people have been waiting for.

> On Mon, Dec 15, 2014 at 3:40 AM, Shigeru Hanada 
> wrote:
> > I'm working on $SUBJECT and would like to get comments about the
> > design.  Attached patch is for the design below.
> 
> I'm glad you are working on this.
> 
> > 1. Join source relations
> > As described above, postgres_fdw (and most of SQL-based FDWs) needs to
> > check that 1) all foreign tables in the join belong to a server, and
> > 2) all foreign tables have same checkAsUser.
> > In addition to that, I add extra limitation that both inner/outer
> > should be plain foreign tables, not a result of foreign join.  This
> > limitation makes SQL generator simple.  Fundamentally it's possible to
> > join even join relations, so N-way join is listed as enhancement item
> > below.
> 
> It seems pretty important to me that we have a way to push the entire join
> nest down.  Being able to push down a 2-way join but not more seems like
> quite a severe limitation.
> 
As long as we don't care about the simplicity/gracefulness of the remote
query, what we need to do is not complicated. All the optimization work
is the responsibility of the remote system.

Let me explain my thought:
We have three cases to consider: (1) a join between plain foreign tables,
which is the supported case, (2) a join where either relation is a foreign
join, and (3) a join where both relations are foreign joins.

In case of (1), remote query shall have the following form:
  SELECT  FROM FT1 JOIN FT2 ON  WHERE 

In case (2) or (3), because we have already constructed the inner/outer
join, it is not difficult to replace the FT1 or FT2 above with a
sub-query, like:
  SELECT  FROM FT3 JOIN
(SELECT  FROM FT1 JOIN FT2 ON  WHERE ) as FJ1
ON  WHERE 

What are your thoughts?


Let me comment on some other points at this moment:

* Enhancement in src/include/foreign/fdwapi.h

It seems to me that GetForeignJoinPath_function and
GetForeignJoinPlan_function are not used anywhere. Is it an oversight
that these definitions, left over from your previous work, were not
removed?
Now ForeignJoinPath is added on set_join_pathlist_hook, but not callback of
FdwRoutine.


* Is ForeignJoinPath really needed?

I guess the reason why you added ForeignJoinPath is to have the
inner_path/outer_path fields. If we want to have the paths of the
underlying relations, a straightforward way to express the concept (a
join of relations is replaced by a foreign-/custom-scan on the result
set of a remote join) is to enhance the fields of ForeignPath.
How about adding a "List *fdw_subpaths" field to store the list of
underlying Path nodes? That would also allow more than two relations
to be joined.
(Probably, it should be a feature of interface portion. I may have to
enhance my portion.)

* Why NestPath is re-defined?

-typedef JoinPath NestPath;
+typedef struct NestPath
+{
+   JoinPathjpath;
+} NestPath;

It looks to me like this change only makes the patch larger...

Best regards,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei 




Re: [HACKERS] Status of CF 2014-10 and upcoming 2014-12

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 2:22 AM, Robert Haas  wrote:
> On Sun, Dec 14, 2014 at 10:58 PM, Michael Paquier
>  wrote:
>> PS: Could someone close CF 2014-10 btw?)
>
> Done, and I marked 2014-12 in progress.  I would give you access, but
> I can't seem to ssh into commitfest.postgresql.org any more.
That's fine. Thanks!
-- 
Michael




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 5:14 AM, Merlin Moncure  wrote:
> OTOH, Our built in compressor as we all know is a complete dog in
> terms of cpu when stacked up against some more modern implementations.
> All that said, as long as there is a clean path to migrating to
> another compression alg should one materialize, that problem can be
> nicely decoupled from this patch as Robert pointed out.
I am curious to see some numbers about that. Has anyone done such
comparison measurements?
-- 
Michael




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-15 Thread Michael Paquier
On Tue, Dec 16, 2014 at 3:46 AM, Robert Haas  wrote:
> On Sat, Dec 13, 2014 at 9:36 AM, Michael Paquier
>  wrote:
>> Something to be aware of btw is that this patch introduces an
>> additional 8 bytes per block image in WAL as it contains additional
>> information to control the compression. In this case this is the
>> uint16 compress_len present in XLogRecordBlockImageHeader. In the case
>> of the measurements done, knowing that 63638 FPWs have been written,
>> there is a difference of a bit less than 500k in WAL between HEAD and
>> "FPW off" in favor of HEAD. The gain with compression is welcome,
>> still for the default there is a small price to track down if a block
>> is compressed or not. This patch still takes advantage of it by not
>> compressing the hole present in page and reducing CPU work a bit.
>
> That sounds like a pretty serious problem to me.
OK. If that's so much of a problem, I'll switch back to the version using
1 bit in the block header to identify whether a block is compressed or not.
This way, when the switch is off, the record length will be the same
as in HEAD.
-- 
Michael




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-15 Thread Tomas Vondra
On 15.12.2014 22:35, Jeff Janes wrote:
> On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra  > wrote:
> 
> Hi,
> 
> Attached is v2 of the patch lowering array_agg memory requirements.
> Hopefully it addresses the issues issues mentioned by TL in this thread
> (not handling some of the callers appropriately etc.).
> 
> 
> Hi Tomas, 
> 
> When configured --with-libxml I get compilation errors:
> 
> xml.c: In function 'xml_xpathobjtoxmlarray':
> xml.c:3684: error: too few arguments to function 'accumArrayResult'
> xml.c:3721: error: too few arguments to function 'accumArrayResult'
> xml.c: In function 'xpath':
> xml.c:3933: error: too few arguments to function 'initArrayResult'
> xml.c:3936: error: too few arguments to function 'makeArrayResult'
> 
> And when configured --with-perl, I get:
> 
> plperl.c: In function 'array_to_datum_internal':
> plperl.c:1196: error: too few arguments to function 'accumArrayResult'
> plperl.c: In function 'plperl_array_to_datum':
> plperl.c:1223: error: too few arguments to function 'initArrayResult'
> 
> Cheers,

Thanks, attached is a version that fixes this.

regards
Tomas
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 18ae318..48e66bf 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -1271,13 +1271,13 @@ dblink_get_connections(PG_FUNCTION_ARGS)
 			/* stash away current value */
 			astate = accumArrayResult(astate,
 	  CStringGetTextDatum(hentry->name),
-	  false, TEXTOID, CurrentMemoryContext);
+	  false, TEXTOID, CurrentMemoryContext, true);
 		}
 	}
 
 	if (astate)
 		PG_RETURN_ARRAYTYPE_P(makeArrayResult(astate,
-			  CurrentMemoryContext));
+			  CurrentMemoryContext, true));
 	else
 		PG_RETURN_NULL();
 }
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index c16b38e..bd5eb32 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -671,7 +671,7 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 /* No match, so keep old option */
 astate = accumArrayResult(astate, oldoptions[i],
 		  false, TEXTOID,
-		  CurrentMemoryContext);
+		  CurrentMemoryContext, true);
 			}
 		}
 	}
@@ -758,12 +758,12 @@ transformRelOptions(Datum oldOptions, List *defList, char *namspace,
 
 			astate = accumArrayResult(astate, PointerGetDatum(t),
 	  false, TEXTOID,
-	  CurrentMemoryContext);
+	  CurrentMemoryContext, true);
 		}
 	}
 
 	if (astate)
-		result = makeArrayResult(astate, CurrentMemoryContext);
+		result = makeArrayResult(astate, CurrentMemoryContext, true);
 	else
 		result = (Datum) 0;
 
diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c
index ab4ed6c..804398a 100644
--- a/src/backend/commands/foreigncmds.c
+++ b/src/backend/commands/foreigncmds.c
@@ -82,11 +82,11 @@ optionListToArray(List *options)
 
 		astate = accumArrayResult(astate, PointerGetDatum(t),
   false, TEXTOID,
-  CurrentMemoryContext);
+  CurrentMemoryContext, true);
 	}
 
 	if (astate)
-		return makeArrayResult(astate, CurrentMemoryContext);
+		return makeArrayResult(astate, CurrentMemoryContext, true);
 
 	return PointerGetDatum(NULL);
 }
diff --git a/src/backend/executor/nodeSubplan.c b/src/backend/executor/nodeSubplan.c
index d9faf20..39faa4c 100644
--- a/src/backend/executor/nodeSubplan.c
+++ b/src/backend/executor/nodeSubplan.c
@@ -262,7 +262,7 @@ ExecScanSubPlan(SubPlanState *node,
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-	CurrentMemoryContext);
+	CurrentMemoryContext, true);
 
 	/*
 	 * We are probably in a short-lived expression-evaluation context. Switch
@@ -372,7 +372,7 @@ ExecScanSubPlan(SubPlanState *node,
 			Assert(subplan->firstColType == tdesc->attrs[0]->atttypid);
 			dvalue = slot_getattr(slot, 1, &disnull);
 			astate = accumArrayResultAny(astate, dvalue, disnull,
-		 subplan->firstColType, oldcontext);
+		 subplan->firstColType, oldcontext, true);
 			/* keep scanning subplan to collect all values */
 			continue;
 		}
@@ -964,7 +964,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
 	/* Initialize ArrayBuildStateAny in caller's context, if needed */
 	if (subLinkType == ARRAY_SUBLINK)
 		astate = initArrayResultAny(subplan->firstColType,
-	CurrentMemoryContext);
+	CurrentMemoryContext, true);
 
 	/*
 	 * Must switch to per-query memory context.
@@ -1026,7 +1026,7 @@ ExecSetParamPlan(SubPlanState *node, ExprContext *econtext)
 			Assert(subplan->firstColType == tdesc->attrs[0]->atttypid);
 			dvalue = slot_getattr(slot, 1, &disnull);
 			astate = accumArrayResultAny(astate, dvalue, disnull,
-		 subplan->firstColType, oldcontext);
+		 subplan->firstColType, oldcontext, true);
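The mechanical change threaded through every hunk above is a new boolean
telling initArrayResult/accumArrayResult whether the build state should
allocate its own memory subcontext. As a toy analogue of the
accumulate-then-materialize pattern those call sites follow (plain C with
malloc, invented names; not the backend API, which manages PostgreSQL
memory contexts instead):

```c
#include <stdlib.h>

/* Toy analogue of ArrayBuildState: accumulate values, growing the
 * buffer by doubling, then "materialize" the result.  Invented names;
 * the real accumArrayResult() allocates in a memory context, and the
 * patch's new boolean controls whether that is a private subcontext. */
typedef struct IntBuildState
{
	int		   *values;
	int			nused;
	int			nalloc;
} IntBuildState;

static IntBuildState *
accum_int_result(IntBuildState *state, int value)
{
	if (state == NULL)
	{
		/* first call creates the build state, as the real API does */
		state = malloc(sizeof(IntBuildState));
		state->nalloc = 8;
		state->nused = 0;
		state->values = malloc(state->nalloc * sizeof(int));
	}
	else if (state->nused == state->nalloc)
	{
		/* amortized doubling when the buffer fills */
		state->nalloc *= 2;
		state->values = realloc(state->values,
								state->nalloc * sizeof(int));
	}
	state->values[state->nused++] = value;
	return state;
}

/* "Materialize": here just report how many values were collected. */
static int
make_int_result(IntBuildState *state)
{
	return state ? state->nused : 0;
}
```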

Re: [HACKERS] GiST kNN search queue (Re: KNN-GiST with recheck)

2014-12-15 Thread Jeff Janes
On Mon, Dec 15, 2014 at 5:08 AM, Heikki Linnakangas  wrote:

> Here's a new version of the patch. It now uses the same pairing heap code
> that I posted in the other thread ("advance local xmin more aggressivley",
> http://www.postgresql.org/message-id/5488acf0.8050...@vmware.com). The
> pairingheap_remove() function is unused in this patch, but it is needed by
> that other patch.



Under enable-cassert, this tries to call pairingheap_empty, but that
function doesn't exist.

I looked in the other patch and didn't find it defined there, either.
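For readers who haven't met the data structure: a pairing heap keeps each
node's children in a sibling list, melds two heaps in O(1), and restores
heap order on delete-min by pairwise melds of the children. A minimal
self-contained sketch of the technique (illustrative only; not the
pairingheap_* code from the patch):

```c
#include <stdlib.h>

/* Illustrative pairing heap: each node keeps its leftmost child and a
 * sibling link; delete-min melds the children pairwise.  This shows the
 * general technique only, not the patch's pairingheap_* API. */
typedef struct PHNode
{
	int			key;
	struct PHNode *child;		/* leftmost child */
	struct PHNode *sibling;		/* next sibling in child list */
} PHNode;

static PHNode *
ph_meld(PHNode *a, PHNode *b)
{
	if (a == NULL)
		return b;
	if (b == NULL)
		return a;
	if (b->key < a->key)
	{
		PHNode	   *tmp = a;

		a = b;
		b = tmp;
	}
	/* larger root becomes the leftmost child of the smaller */
	b->sibling = a->child;
	a->child = b;
	return a;
}

static PHNode *
ph_insert(PHNode *root, int key)
{
	PHNode	   *n = malloc(sizeof(PHNode));

	n->key = key;
	n->child = n->sibling = NULL;
	return ph_meld(root, n);
}

/* Remove the minimum (root must be non-NULL); pairwise meld of children. */
static PHNode *
ph_delete_min(PHNode *root, int *minkey)
{
	PHNode	   *merged = NULL;
	PHNode	   *c = root->child;

	*minkey = root->key;
	while (c != NULL)
	{
		PHNode	   *a = c;
		PHNode	   *b = c->sibling;

		c = b ? b->sibling : NULL;
		a->sibling = NULL;
		if (b)
			b->sibling = NULL;
		merged = ph_meld(merged, ph_meld(a, b));
	}
	free(root);
	return merged;
}
```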

cheers,

Jeff


Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Alvaro Herrera
Rodrigo Gonzalez wrote:

> I have been reading this list for a long time and I have to say that I
> thought about reviewing patches for a long time... but I am really
> scared about a few things
> 
> 1 - I have no knowledge at all about postgresql code so I can be doing
> things incorrectly (saying that something is fine or not fine, but
> making a mistake)
> 
> 2 - English is not my main language...and I am not really good at
> English, reading is one thing, but writing is not as easy for me so
> maybe a review cannot be correctly understood
> 
> Besides that, how long can a review take, a simple one of course,
> considering that I start vacations in 2 weeks.
> 
> I really hope I can start helping here but I am afraid that maybe I can
> make things slower than today because of my lack of source code knowledge.

If you never start, you will never get enough experience to do
meaningful reviews.  Don't worry about making mistakes.  If you don't
catch a problem, somebody else might.  If you catch a problem and report
it to the patch author, you save somebody else the time to report that
problem.  Eventually, with experience, your reviews will be good enough
that they will be valuable in their own right.  Also, if you get enough
experience, you might eventually be able to write your own patches (or
you will discover that programming is not for you in the first
place.)

We have patches to review all year long; just have a look at the
http://commitfest.postgresql.org site whenever you think you will have
time.  If you return from vacations in 4-5 weeks, I'm pretty sure we
will have patches to review at that time, too.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Jim Nasby

On 12/15/14, 1:39 PM, Christoph Berg wrote:

Re: Tom Lane 2014-12-15 <21813.1418655...@sss.pgh.pa.us>

This is totally silly.  The timecnt couldn't be anywhere near INT_MAX (in
fact, it is not allowed to exceed TZ_MAX_TIMES, which is currently just
1200).  And there are bunches of other instances of similar code in PG;
shall we put equally wishy-washy comments on them all?


Well, if it's not interesting, let's just forget it. Sorry.


At the risk of sticking my head in the lion's mouth... this is the kind of response that 
deters people from contributing anything to the project, including reviewing patches. A 
simple "thanks, but we feel it's already clear enough that there can't be anywhere 
close to INT_MAX timezones" would have sufficed.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] PATCH: decreasing memory needlessly consumed by array_agg

2014-12-15 Thread Jeff Janes
On Sat, Nov 29, 2014 at 8:57 AM, Tomas Vondra  wrote:
>
> Hi,
>
> Attached is v2 of the patch lowering array_agg memory requirements.
> Hopefully it addresses the issues mentioned by TL in this thread
> (not handling some of the callers appropriately etc.).
>

Hi Tomas,

When configured --with-libxml I get compilation errors:

xml.c: In function 'xml_xpathobjtoxmlarray':
xml.c:3684: error: too few arguments to function 'accumArrayResult'
xml.c:3721: error: too few arguments to function 'accumArrayResult'
xml.c: In function 'xpath':
xml.c:3933: error: too few arguments to function 'initArrayResult'
xml.c:3936: error: too few arguments to function 'makeArrayResult'

And when configured --with-perl, I get:

plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function 'initArrayResult'

Cheers,

Jeff


Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Rodrigo Gonzalez
On 12/15/2014 02:37 AM, Michael Paquier wrote:
> Hi all,
> 
> Let's begin the commit fest of December:
> https://commitfest.postgresql.org/action/commitfest_view?id=25

I have been reading this list for a long time and I have to say that I
thought about reviewing patches for a long time... but I am really
scared about a few things

1 - I have no knowledge at all about postgresql code so I can be doing
things incorrectly (saying that something is fine or not fine, but
making a mistake)

2 - English is not my main language...and I am not really good at
English, reading is one thing, but writing is not as easy for me so
maybe a review cannot be correctly understood

Besides that, how long can a review take, a simple one of course,
considering that I start vacations in 2 weeks.

I really hope I can start helping here but I am afraid that maybe I can
make things slower than today because of my lack of source code knowledge.

Best regards

Rodrigo Gonzalez




Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Heikki Linnakangas

On 12/15/2014 08:32 PM, Alexander Korotkov wrote:

On Mon, Dec 15, 2014 at 7:47 PM, Heikki Linnakangas 
wrote:

On 12/15/2014 05:22 PM, Alexander Korotkov wrote:


On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane  wrote:



Alexander Korotkov  writes:


On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas
<hlinnakan...@vmware.com> wrote:

Right. I also looked at it briefly, but I wasn't sure if we really want
it. AFAICT, no-one has actually asked for that operator, it was written
only to be an example of an operator that would benefit from the
knn-gist with recheck patch.


Lack of recheck is major limitation of KNN-GiST now. People are not
asking for that because they don't know what is needed to implement
exact KNN for PostGIS. Now they have to invent kluges like this:
[ query using ORDER BY ST_Distance ]



It's not apparent to me that the proposed operator is a replacement for
ST_Distance.  The underlying data in an example like this won't be either
points or polygons, it'll be PostGIS datatypes.

In short, I believe that PostGIS could use what you're talking about,
but I agree with Heikki's objection that nobody has asked for this
particular operator.



"polygon <-> point" is for sure not ST_Distance replacement. I was giving
this argument about KNN-GiST with recheck itself. "polygon <-> point" is
needed just as in-core example of KNN-GiST with recheck.



Right. I don't think point <-> polygon is too useful by itself, but we 
need an example in core that could make use of the KNN-GiST recheck 
patch. We can't write a regression test for it otherwise, for starters.

Actually, we probably could've used the circle <-> polygon for that just
as well...


Did you mean searching for circles or polygons in the last sentence?


circle as the key, on an index on a polygon column. A circle with radius 
0 is equivalent to a point, after all.


I'm looking at the knn-gist with recheck patch now, and I see that it 
actually adds two more operators: polygon <-> point (commutator of what 
I just committed) and circle <-> point. We might want to split those off 
into a separate patch too.


- Heikki





Re: [HACKERS] Commitfest problems

2014-12-15 Thread Tom Lane
Andres Freund  writes:
> On 2014-12-15 11:52:35 -0800, Josh Berkus wrote:
>> On 12/15/2014 11:27 AM, Robert Haas wrote:
>>> I feel like we used to be better at encouraging people to participate
>>> in the CF even if they were not experts, and to do the best they can
>>> based on what they did know.  That was a helpful dynamic.  Sure, the
>>> reviews weren't perfect, but more people helped, and reviewing some of
>>> the patch well and some of it in a more cursory manner is way better
>>> than reviewing none of it at all.

>> Well, it was strongly expressed to me by a number of senior contributors
>> on this list and at the developer meeting that inexpert reviews were not
>> really wanted, needed or helpful.

> I think that's pretty far away from what was said.

Doesn't match my recollection either.

regards, tom lane




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andrew Dunstan


On 12/15/2014 03:16 PM, Andres Freund wrote:

On 2014-12-15 11:52:35 -0800, Josh Berkus wrote:

On 12/15/2014 11:27 AM, Robert Haas wrote:

I feel like we used to be better at encouraging people to participate
in the CF even if they were not experts, and to do the best they can
based on what they did know.  That was a helpful dynamic.  Sure, the
reviews weren't perfect, but more people helped, and reviewing some of
the patch well and some of it in a more cursory manner is way better
than reviewing none of it at all.

Well, it was strongly expressed to me by a number of senior contributors
on this list and at the developer meeting that inexpert reviews were not
really wanted, needed or helpful.

I think that's pretty far away from what was said.




I welcome reviews at all levels, both as a developer and as a committer.

It is true that we are very short on reviewers with in depth knowledge 
and experience, and this is the real problem we have far more than any 
technological issues people might have.


But that doesn't mean we should be turning anyone away. We should not.


cheers

andrew




Re: [HACKERS] Turning off HOT/Cleanup sometimes

2014-12-15 Thread Jeff Janes
On Thu, Dec 11, 2014 at 7:24 PM, Simon Riggs  wrote:
>
> On 17 November 2014 at 22:08, Simon Riggs  wrote:
> > On 17 November 2014 21:09, Alvaro Herrera 
> wrote:
> >> What happened to this patch?  I'm going over something that could use
> >> the concept of "clean some stuff up when reading this page, but only if
> >> we're already writing" or similar.
> >>
> >> I see some cases were presented that had a performance decrease.  Did we
> >> get any numbers for the increase in performance in some other
> >> interesting cases?
> >
> > It's not dead; it just needs more work. Maybe for next CF, or you can
> now.
>
> Latest version attached for next CF
>


I still get the compiler error in contrib:

pgstattuple.c: In function 'pgstat_heap':
pgstattuple.c:279: error: too few arguments to function
'heap_beginscan_strat'

Should it pass false for the always_prune?

Cheers,

Jeff


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 11:52:35 -0800, Josh Berkus wrote:
> On 12/15/2014 11:27 AM, Robert Haas wrote:
> > I feel like we used to be better at encouraging people to participate
> > in the CF even if they were not experts, and to do the best they can
> > based on what they did know.  That was a helpful dynamic.  Sure, the
> > reviews weren't perfect, but more people helped, and reviewing some of
> > the patch well and some of it in a more cursory manner is way better
> > than reviewing none of it at all.
> 
> Well, it was strongly expressed to me by a number of senior contributors
> on this list and at the developer meeting that inexpert reviews were not
> really wanted, needed or helpful.

I think that's pretty far away from what was said.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pgbench -f and vacuum

2014-12-15 Thread Andres Freund
On 2014-12-15 10:55:30 -0800, Jeff Janes wrote:
> On Sat, Dec 13, 2014 at 7:39 AM, Tom Lane  wrote:
> >
> > Tatsuo Ishii  writes:
> > > Currently pgbench -f (run custom script) executes vacuum against
> > > pgbench_* tables before stating bench marking if -n (or --no-vacuum)
> > > is not specified. If those tables do not exist, pgbench fails. To
> > > prevent this, -n must be specified. For me this behavior seems insane
> > > because "-f" does not necessarily suppose the existence of the
> > > pgbench_* tables.  Attached patch prevents pgbench from exiting even
> > > if those tables do not exist.
> >
> > I don't particularly care for this approach.  I think if we want to
> > do something about this, we should just make -f imply -n.  Although
> > really, given the lack of complaints so far, it seems like people
> > manage to deal with this state of affairs just fine.  Do we really
> > need to do anything?
> >
> 
> I hereby complain about this.
> 
> It has bugged me several times, and having the errors be non-fatal when -f
> was given was the best (easy) thing I could come up with as well, but I was
> too lazy to actually write the code.

Same here. I vote for making -f imply -n as well.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] WALWriter active during recovery

2014-12-15 Thread Andres Freund
Hi,

On 2014-12-15 18:51:44 +, Simon Riggs wrote:
> Currently, WALReceiver writes and fsyncs data it receives. Clearly,
> while we are waiting for an fsync we aren't doing any other useful
> work.

Well, it can still buffer data on the network level, but there's
definitely limits to that. So I can see this as being useful.

> Following patch starts WALWriter during recovery and makes it
> responsible for fsyncing data, allowing WALReceiver to progress other
> useful actions.
> 
> At present this is a WIP patch, for code comments only. Don't bother
> with anything other than code questions at this stage.
> 
> Implementation questions are
> 
> * How should we wake WALReceiver, since it waits on a poll(). Should
> we use SIGUSR1, which is already used for latch waits, or another
> signal?

It's not entirely trivial, but also not hard, to make it use the latch
code for waiting. It'd probably end up requiring less code because then
we could just scratch libpqwalreceiver.c:libpq_select().
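For readers unfamiliar with how a latch wait differs from a raw poll():
the usual trick is to poll a self-pipe alongside the data descriptor, so
a signal handler or another process can wake the waiter just by writing a
byte. A rough standalone sketch of that technique (illustrative only;
not PostgreSQL's actual latch implementation, and POSIX-specific):

```c
#include <poll.h>
#include <unistd.h>

/* Sketch of the self-pipe technique behind latch-style wakeups: the
 * waiter polls both its data fd and a pipe; latch_set() writes one
 * byte to wake the poll().  Names here are invented for illustration. */
static int	wake_pipe[2];

static int
latch_init(void)
{
	return pipe(wake_pipe);
}

static void
latch_set(void)
{
	/* write() is async-signal-safe, so this can run in a handler */
	(void) write(wake_pipe[1], "x", 1);
}

/* Wait up to timeout_ms for data on sock_fd or a latch wakeup.
 * Returns 1 if woken by the latch, 0 on timeout or socket data. */
static int
latch_wait(int sock_fd, int timeout_ms)
{
	struct pollfd fds[2];
	char		buf[16];

	fds[0].fd = sock_fd;
	fds[0].events = POLLIN;
	fds[0].revents = 0;
	fds[1].fd = wake_pipe[0];
	fds[1].events = POLLIN;
	fds[1].revents = 0;

	(void) poll(fds, 2, timeout_ms);
	if (fds[1].revents & POLLIN)
	{
		(void) read(wake_pipe[0], buf, sizeof(buf));	/* drain wakeups */
		return 1;
	}
	return 0;
}
```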

> * Should we introduce some pacing delays if the WALreceiver gets too
> far ahead of apply?

Hm. Why don't we simply start fsyncing in the receiver itself at regular
intervals? If already synced that's cheap, if not, it'll pace us.

Greetings,

Andres Freund




Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-15 Thread Merlin Moncure
On Fri, Dec 12, 2014 at 8:27 AM, Andres Freund  wrote:
> On 2014-12-12 09:24:27 -0500, Bruce Momjian wrote:
>> On Fri, Dec 12, 2014 at 03:22:24PM +0100, Andres Freund wrote:
>> > > Well, the larger question is why wouldn't we just have the user compress
>> > > the entire WAL file before archiving --- why have each backend do it?
>> > > Is it the write volume we are saving?  I though this WAL compression
>> > > gave better performance in some cases.
>> >
>> > Err. Streaming?
>>
>> Well, you can already set up SSL for compression while streaming.  In
>> fact, I assume many are already using SSL for streaming as the majority
>> of SSL overhead is from connection start.
>
> That's not really true. The overhead of SSL during streaming is
> *significant*. Both the kind of compression it does (which is far more
> expensive than pglz or lz4) and the encyrption itself. In many cases
> it's prohibitively expensive - there's even a fair number on-list
> reports about this.

(late to the party)
That may be true, but there are a number of ways to work around SSL
performance issues such as hardware acceleration (perhaps deferring
encryption to another point in the network), weakening the protocol,
or not using it at all.

OTOH, our built-in compressor, as we all know, is a complete dog in
terms of CPU when stacked up against some more modern implementations.
All that said, as long as there is a clean path to migrating to
another compression alg should one materialize, that problem can be
nicely decoupled from this patch as Robert pointed out.

merlin




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Peter Geoghegan
On Mon, Dec 15, 2014 at 11:52 AM, Josh Berkus  wrote:
> On 12/15/2014 11:27 AM, Robert Haas wrote:
>> I feel like we used to be better at encouraging people to participate
>> in the CF even if they were not experts, and to do the best they can
>> based on what they did know.  That was a helpful dynamic.  Sure, the
>> reviews weren't perfect, but more people helped, and reviewing some of
>> the patch well and some of it in a more cursory manner is way better
>> than reviewing none of it at all.
>
> Well, it was strongly expressed to me by a number of senior contributors
> on this list and at the developer meeting that inexpert reviews were not
> really wanted, needed or helpful.  As such, I stopped recruiting new
> reviewers (and, for that matter, doing them myself).  I don't know if
> the same goes for anyone else.

Really? I thought we were pretty consistent in encouraging new reviewers.

-- 
Peter Geoghegan




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Josh Berkus
On 12/15/2014 11:27 AM, Robert Haas wrote:
> I feel like we used to be better at encouraging people to participate
> in the CF even if they were not experts, and to do the best they can
> based on what they did know.  That was a helpful dynamic.  Sure, the
> reviews weren't perfect, but more people helped, and reviewing some of
> the patch well and some of it in a more cursory manner is way better
> than reviewing none of it at all.

Well, it was strongly expressed to me by a number of senior contributors
on this list and at the developer meeting that inexpert reviews were not
really wanted, needed or helpful.  As such, I stopped recruiting new
reviewers (and, for that matter, doing them myself).  I don't know if
the same goes for anyone else.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Christoph Berg
Re: Tom Lane 2014-12-15 <21813.1418655...@sss.pgh.pa.us>
> This is totally silly.  The timecnt couldn't be anywhere near INT_MAX (in
> fact, it is not allowed to exceed TZ_MAX_TIMES, which is currently just
> 1200).  And there are bunches of other instances of similar code in PG;
> shall we put equally wishy-washy comments on them all?

Well, if it's not interesting, let's just forget it. Sorry.
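For the record, the hazard under discussion is the classic signed-overflow
midpoint in binary search: with lo + hi near INT_MAX, (lo + hi) / 2
overflows, while lo + (hi - lo) / 2 does not. A standalone illustration of
the conventional fix (not the timezone code itself, which as Tom notes
caps timecnt far below INT_MAX):

```c
#include <stddef.h>

/* Binary search over a sorted int array, using the overflow-safe
 * midpoint lo + (hi - lo) / 2 rather than (lo + hi) / 2.  Returns the
 * index of key, or -1 if not found.  Illustrative code only. */
static int
bsearch_int(const int *arr, int n, int key)
{
	int			lo = 0,
				hi = n - 1;

	while (lo <= hi)
	{
		int			mid = lo + (hi - lo) / 2;	/* overflow-safe midpoint */

		if (arr[mid] == key)
			return mid;
		else if (arr[mid] < key)
			lo = mid + 1;
		else
			hi = mid - 1;
	}
	return -1;
}
```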

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 4:53 PM, Mark Cave-Ayland
 wrote:
> What I find frustrating is that I've come back from a workflow where
> I've been reviewing/testing patches within months of joining a project
> because the barrier for entry has been so low, and yet even with much
> longer experience of the PostgreSQL codebase I feel I can't do the same
> for patches submitted to the commitfest.
>
> And why is this? It's because while I know very specific areas of the
> code well, many patches span different areas of the source tree of which
> I have little and/or no experience, which when supplied as a single
> monolithic patch makes it impossible for me to give meaningful review to
> all but a tiny proportion of them.

So, there are certainly some large patches that do that, and they
typically require a very senior reviewer.  But there are lots of small
patches too, touching little enough that you can learn enough to give
them a decent review even if you've never looked at that before.  I
find myself in that situation as a reviewer and committer pretty
regularly; being a committer doesn't magically make you an expert on
the entire code base.  You can (and I do) focus your effort on the
things you know best, but you have to step outside your comfort zone
sometimes, or you never learn anything new.

I feel like we used to be better at encouraging people to participate
in the CF even if they were not experts, and to do the best they can
based on what they did know.  That was a helpful dynamic.  Sure, the
reviews weren't perfect, but more people helped, and reviewing some of
the patch well and some of it in a more cursory manner is way better
than reviewing none of it at all.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andrew Dunstan


On 12/15/2014 02:08 PM, Robert Haas wrote:

On Sun, Dec 14, 2014 at 2:24 PM, Mark Cave-Ayland
 wrote:

However if it were posted as part of patchset with a subject of "[PATCH]
gram.y: add EXCLUDED pseudo-alias to bison grammar" then this may pique
my interest enough to review the changes to the grammar rules, with the
barrier for entry reduced to understanding just the PostgreSQL-specific
parts.

Meh.  Such a patch couldn't possibly compile or work without modifying
other parts of the tree.  And I'm emphatically opposed to splitting a
commit that would have taken the tree from one working state to
another working state into a series of patches that would have taken
it from a working state through various non-working states and
eventually another working state.  Furthermore, if you really want to
review that specific part of the patch, just look for what changed in
gram.y, perhaps by applying the patch and doing git diff
src/backend/parser/gram.y.  This is really not hard.

I certainly agree that there are cases where patch authors could and
should put more work into decomposing large patches into bite-sized
chunks.  But I don't think that's always possible, and I don't think
that, for example, applying BRIN in N separate pieces would have been
a step forward.  It's one feature, not many.



+1

I have in the past found the "bunch of patches" approach to be more than 
somewhat troublesome, especially when one gets "here is an update to 
patch nn of the series" and one has to try to compose a coherent set of 
patches in order to test them. A case in point I recall was the original 
Foreign Data Wrapper patchset.


cheers

andrew





Re: [HACKERS] Commitfest problems

2014-12-15 Thread Tom Lane
Robert Haas  writes:
> On Sun, Dec 14, 2014 at 2:24 PM, Mark Cave-Ayland
>  wrote:
>> However if it were posted as part of patchset with a subject of "[PATCH]
>> gram.y: add EXCLUDED pseudo-alias to bison grammar" then this may pique
>> my interest enough to review the changes to the grammar rules, with the
>> barrier for entry reduced to understanding just the PostgreSQL-specific
>> parts.

> Meh.  Such a patch couldn't possibly compile or work without modifying
> other parts of the tree.  And I'm emphatically opposed to splitting a
> commit that would have taken the tree from one working state to
> another working state into a series of patches that would have taken
> it from a working state through various non-working states and
> eventually another working state.

Yeah; that would totally destroy the use of git bisect, which was supposed
to be an advantage of smaller patches.

Perhaps you could always design the patch split-up in such a way that
each incremental step still compiles, but that would greatly limit how
you could factorize the patch, so I'm not sure it comes out as a win.
If we're going to submit patches in small chunks I'd rather the chunks
are built around separation of concerns, and not held to "would this
compile in isolation?".

IOW, submission in chunks is one thing but it shouldn't necessarily
translate to committing in the same chunks.

regards, tom lane




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 2:24 PM, Mark Cave-Ayland
 wrote:
> However if it were posted as part of patchset with a subject of "[PATCH]
> gram.y: add EXCLUDED pseudo-alias to bison grammar" then this may pique
> my interest enough to review the changes to the grammar rules, with the
> barrier for entry reduced to understanding just the PostgreSQL-specific
> parts.

Meh.  Such a patch couldn't possibly compile or work without modifying
other parts of the tree.  And I'm emphatically opposed to splitting a
commit that would have taken the tree from one working state to
another working state into a series of patches that would have taken
it from a working state through various non-working states and
eventually another working state.  Furthermore, if you really want to
review that specific part of the patch, just look for what changed in
gram.y, perhaps by applying the patch and doing git diff
src/backend/parser/gram.y.  This is really not hard.

I certainly agree that there are cases where patch authors could and
should put more work into decomposing large patches into bite-sized
chunks.  But I don't think that's always possible, and I don't think
that, for example, applying BRIN in N separate pieces would have been
a step forward.  It's one feature, not many.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pgbench -f and vacuum

2014-12-15 Thread Jeff Janes
On Sat, Dec 13, 2014 at 7:39 AM, Tom Lane  wrote:
>
> Tatsuo Ishii  writes:
> > Currently pgbench -f (run custom script) executes vacuum against
> > pgbench_* tables before stating bench marking if -n (or --no-vacuum)
> > is not specified. If those tables do not exist, pgbench fails. To
> > prevent this, -n must be specified. For me this behavior seems insane
> > because "-f" does not necessarily suppose the existence of the
> > pgbench_* tables.  Attached patch prevents pgbench from exiting even
> > if those tables do not exist.
>
> I don't particularly care for this approach.  I think if we want to
> do something about this, we should just make -f imply -n.  Although
> really, given the lack of complaints so far, it seems like people
> manage to deal with this state of affairs just fine.  Do we really
> need to do anything?
>

I hereby complain about this.

It has bugged me several times, and having the errors be non-fatal when -f
was given was the best (easy) thing I could come up with as well, but I was
too lazy to actually write the code.

Cheers,

Jeff


Re: [HACKERS] On partitioning

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 9:12 PM, Amit Langote
 wrote:
> This means if a user puts arbitrary expressions in a partition definition, 
> say,
>
> ... FOR VALUES  extract(month from current_date) TO extract(month from 
> current_date + interval '3 months'),
>
> we make sure that those expressions are pre-computed to literal values.

I would expect that to fail, just as it would fail if you tried to
build an index using a volatile expression.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Logical Decoding follows timelines

2014-12-15 Thread Simon Riggs
Currently, it doesn't.

This patch is a WIP version of doing that, but only currently attempts
to do this in the WALSender.

Objective is to allow cascaded logical replication.

Very WIP, but here for comments.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


logical_timeline_following.v1.patch
Description: Binary data



[HACKERS] WALWriter active during recovery

2014-12-15 Thread Simon Riggs
Currently, WALReceiver writes and fsyncs data it receives. Clearly,
while we are waiting for an fsync we aren't doing any other useful
work.

Following patch starts WALWriter during recovery and makes it
responsible for fsyncing data, allowing WALReceiver to progress other
useful actions.

At present this is a WIP patch, for code comments only. Don't bother
with anything other than code questions at this stage.

Implementation questions are

* How should we wake WALReceiver, since it waits on a poll()? Should
we use SIGUSR1, which is already used for latch waits, or another
signal?

* Should we introduce some pacing delays if the WALreceiver gets too
far ahead of apply?

* Other questions you may have?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


walwriter_active_in_recovery.v1.patch
Description: Binary data



Re: [HACKERS] Commitfest problems

2014-12-15 Thread Jeff Janes
On Sat, Dec 13, 2014 at 1:37 AM, Craig Ringer  wrote:
>
> On 12/12/2014 06:02 AM, Josh Berkus wrote:
> >
> > Speaking as the originator of commitfests, they were *always* intended
> > to be a temporary measure, a step on the way to something else like
> > continuous integration.
>
> I'd really like to see the project revisit some of the underlying
> assumptions that're being made in this discussion:
>
> - Patches must be email attachments to a mailing list
>
> - Changes must be committed by applying a diff
>
> ... and take a look at some of the options a git-based workflow might
> offer, especially in combination with some of the tools out there that
> help track working branches, run CI, etc.
>
> Having grown used to push/pull workflows with CI integration I find the
> PostgreSQL patch workflow very frustrating, especially for larger
> patches. It's particularly annoying to see a patch series squashed into
> a monster patch whenever it changes hands or gets rebased, because it's
> being handed around as a great honking diff not a git working branch.
>
> Is it time to stop using git like CVS?
>

Perhaps it is just my inexperience with it, but I find the way that
mediawiki, for example, uses git to be utterly baffling. The git log is
bloated with the same change being listed multiple times, at least once as
a commit and again as a merge. If your suggestion would be to go in that
direction, I don't think that would be an improvement.



Cheers,

Jeff


Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-12-15 Thread Robert Haas
On Sat, Dec 13, 2014 at 9:36 AM, Michael Paquier
 wrote:
> Something to be aware of btw is that this patch introduces an
> additional 8 bytes per block image in WAL as it contains additional
> information to control the compression. In this case this is the
> uint16 compress_len present in XLogRecordBlockImageHeader. In the case
> of the measurements done, knowing that 63638 FPWs have been written,
> there is a difference of a bit less than 500k in WAL between HEAD and
> "FPW off" in favor of HEAD. The gain with compression is welcome,
> still for the default there is a small price to track down if a block
> is compressed or not. This patch still takes advantage of it by not
> compressing the hole present in page and reducing CPU work a bit.

That sounds like a pretty serious problem to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Logical Replication Helpers WIP for discussion

2014-12-15 Thread Robert Haas
On Mon, Dec 15, 2014 at 12:57 AM, Petr Jelinek  wrote:
> we've made a few helper functions for making logical replication easier; I
> bundled them into a contrib module as this is mainly for discussion at this time
> (I don't expect this to get committed any time soon, but it is good way to
> iron out protocol, etc).
>
> I created sample logical decoding plugin that uses those functions and which
> can be used for passing DML changes in platform/version independent
> (hopefully) format.
>
> I will post sample apply BG worker also once I get some initial feedback
> about this.
>
> It's hard to write tests for this as the binary changes contain transaction
> ids and timestamps so the data changes constantly.
>
> This is of course based on the BDR work Andres, Craig and myself have been
> doing.

I can't understand, either from what you've written here or the rather
sparse comments in the patch, what this might be good for.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] WIP: dynahash replacement for buffer table

2014-12-15 Thread Robert Haas
On Sun, Nov 9, 2014 at 3:58 PM, Andres Freund  wrote:
>> > * There's several cases where it's noticeable that chash creates more
>> >   cacheline misses - that's imo a good part of why the single threaded
>> >   performance regresses. There's good reasons for the current design
>> >   without an inline bucket, namely that it makes the concurrency handling
>> >   easier. But I think that can be countered by still storing a pointer -
>> >   just to an element inside the bucket. Afaics that'd allow this to be
>> >   introduced quite easily?
>>
>> It's not obvious to me how that would work.  If you just throw those
>> elements on the garbage lists with everything else, it will soon be
>> the case that one bucket header ends up using the cell stored in some
>> other bucket, which isn't going to be awesome.  So you need some kind
>> of more complex scheme to preserve locality.
>
> Treating the element inside the bucket as a kind of one element freelist
> seems quite workable to me. Test whether it's marked deleted, scan the
> hazard pointer list to see whether it can be used. If yes, grand. If
> not, go to the current code. The fact that the element is cacheline
> local will make the test for its deletedness almost free. Yes, the
> hazard pointer scan surely ain't free - but at least for cases like
> bufmgr where reads are never less frequent than writes and very often
> *much* more frequent I'm pretty sure it'd be a noticeable win.

Maybe. I'd expect that to radically increase the time spent doing
hazard pointer scans.  The performance of this system depends heavily
on garbage collection not happening too often, and ISTR that the
performance changes significantly if you vary the amount of
overallocation.

>> I'm not sure.  We're talking about something on the order of half a
>> percent on my tests, and lots of changes cause things to bounce around
>> by that much.  I'm not sure it makes sense to get too worked up about
>> that if the alternative is to add a big pile of new complexity.
>
> I saw things in the range of 3-4% on my laptop.

Mrmpf.  Well, that sucks.

>> > * I don't understand right now (but then I'm in a Hotel bar) why it's
>> >   safe for CHashAddToGarbage() to clobber the hash value?
>> >   CHashBucketScan() relies the chain being ordered by hashcode. No?
>> >   Another backend might just have been past the IsMarked() bit and look
>> >   at the hashcode?
>>
>> I think the worst case scenario is that we falsely conclude that
>> there's a hash match when there really isn't.  The ensuing memcmp()
>> will clarify matters.
>
> Hm. Let me think about it for a while.
>
> I was thinking that a spurious cmp < 0 could cause us to miss a
> match - but that could only happen if the match just has been removed
> from the list. Which is fine. Perhaps that case deserves to be mentioned
> in the comment?

Maybe.  I mean, the general principle here is that there may be some
difference between the apparent order of execution on one CPU as
opposed to another, and the code that uses the hash table has to be OK
with that - at least, unless it has independent methods of assuring
that things happen in the right order, but in that case that other
thing may well become the bottleneck.  This is just one example of
that.  You might also fail to see an insert that's just happened on
some other node but is not committed to main memory yet, which is not
really an issue we need to fix; it's just how things are.  A general
discussion of this somewhere might be worthwhile, but it's pretty much
the same as any other highly-concurrent hashtable implementation you'll
find out there.

(It's also not really different from surrounding the hash table
operation, and only the hash table operation, with a big lock.  Then
things can't change while you are scanning the bucket list, but they
can change by the time you can do anything with the returned value,
which is the same problem we have to cope with here.)

> * Another thing I'm wondering about here is whether it's possible that
> somebody is currently walking an "older version" of the bucket list
> (i.e. is currently at an already unlinked element) and then visits a
> newly inserted element with an 'apparently' out of order hash
> value. That seems possible because the inserter might not have seen the
> unlinked element. Afaics that's not a problem for searches - but I'm not
> sure whether it couldn't cause a problem for concurrent inserts and
> deletes.

This is why the delete-mark bit has to be part of the same atomic word
as the next-pointer.  If somebody applies a delete-mark, a subsequent
attempt to insert an entry at that position will fail because the
CAS() of the next-word won't find the right value there - it will find
a delete-marked value, which will never be the value it's expecting.

> * One thing I noticed while looking at that part of code is:
> +   h = target_node->un.hashcode;
> +   if (h == hashcode)
> +   cmp = memcmp(

Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Alexander Korotkov
On Mon, Dec 15, 2014 at 7:47 PM, Heikki Linnakangas  wrote:
>
> On 12/15/2014 05:22 PM, Alexander Korotkov wrote:
>
>> On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane  wrote:
>>
>>> Alexander Korotkov  writes:
>>>
>>>> On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas <
>>>> hlinnakan...@vmware.com> wrote:
>>>>
>>>>> Right. I also looked at it briefly, but I wasn't sure if we really want
>>>>> it. AFAICT, no-one has actually asked for that operator, it was written
>>>>> only to be an example of an operator that would benefit from the
>>>>> knn-gist with recheck patch.
>>>>
>>>> Lack of recheck is major limitation of KNN-GiST now. People are not
>>>> asking for that because they don't know what is needed to implement
>>>> exact KNN for PostGIS. Now they have to invent kluges like this:
>>>> [ query using ORDER BY ST_Distance ]
>>>
>>> It's not apparent to me that the proposed operator is a replacement for
>>> ST_Distance.  The underlying data in an example like this won't be either
>>> points or polygons, it'll be PostGIS datatypes.
>>>
>>> In short, I believe that PostGIS could use what you're talking about,
>>> but I agree with Heikki's objection that nobody has asked for this
>>> particular operator.
>>
>> "polygon <-> point" is for sure not ST_Distance replacement. I was giving
>> this argument about KNN-GiST with recheck itself. "polygon <-> point" is
>> needed just as in-core example of KNN-GiST with recheck.
>
> Right. I don't think point <-> polygon is too useful by itself, but we
> need an example in core that could make use KNN-GiST recheck patch. We
> can't write a regression test for it otherwise, for starters.
>
> Actually, we probably could've used the circle <-> polygon for that just
> as well...


Did you mean searching for circles or polygons in the last sentence?

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Commitfest problems

2014-12-15 Thread Mark Cave-Ayland
On 15/12/14 16:28, Andres Freund wrote:

> I don't believe this really is a question of the type of project. I
> think it's more that especially the kernel has had to deal with similar
> problems at a much larger scale. And the granular approach somewhat
> works for them.

Correct. My argument was that dividing patches into smaller, more
reviewable chunks lessens the barrier for reviewers since many people
can review the individual patches as appropriate to their area of
expertise.

The benefits of this are that the many parts of the patchset get
reviewed early by a number of people, which reduces the workload on the
core developers as they only need to focus on a small number of
individual patches. Hence patches get reworked and merged much more
quickly in this way.

This is in contrast to the commitfest system where a single patch is i)
often held until the next commitfest (where bitrot often sets in) and
ii) requires the reviewer to have good knowledge of all the areas
covered by the patch in order to give a meaningful review, which
significantly reduces the pool of available reviewers.


ATB,

Mark.





Re: [HACKERS] [Bug] Inconsistent result for inheritance and FOR UPDATE.

2014-12-15 Thread Tom Lane
Etsuro Fujita  writes:
> (2014/12/13 1:17), Tom Lane wrote:
>> We should
>> probably also think about allowing FDWs to change these settings if
>> they want to.

> This is not clear to me.  Maybe I'm missing something, but I think that
> the FDW only needs to look at the original locking strength in
> GetForeignPlan().  Please explain that in a little more detail.

Well, the point is that for postgres_fdw, we could consider using the
same locked-row-identification methods as for local tables, ie CTID.
Now admittedly this might be the only such case, but I'm not entirely
convinced of that --- you could imagine using FDWs for many of the same
use-cases that KaiGai-san has been proposing custom scans for, and
in some of those cases CTIDs would be useful for row IDs.

We'd originally dismissed this on the argument that ROWMARK_REFERENCE
is a cheaper implementation than CTID-based implementations for any
FDW (since it avoids the possible need to fetch a remote row twice).
I'm not sure I still believe that though.  Fetching all columns of all
retrieved rows in order to avoid what might be zero or a small number of
re-fetches is not obviously a win, especially not for FDWs that
represent not-actually-remote resources.

So as long as we're revisiting this area, it might be worth thinking
about letting an FDW have some say in which ROWMARK method is selected
for its tables.

regards, tom lane




Re: [HACKERS] Status of CF 2014-10 and upcoming 2014-12

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 10:58 PM, Michael Paquier
 wrote:
> PS: Could someone close CF 2014-10 btw?)

Done, and I marked 2014-12 in progress.  I would give you access, but
I can't seem to ssh into commitfest.postgresql.org any more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-12-15 Thread Tom Lane
Robert Haas  writes:
> On Sun, Dec 14, 2014 at 8:24 PM, Michael Paquier
>  wrote:
>> Moving this patch to CF 2014-12 as work is still going on, note that
>> it is currently marked with Robert as reviewer and that its current
>> status is "Needs review".

> The status here is more like "waiting around to see if anyone else has
> an opinion".  The issue is what should happen when you enter qualified
> name like alvaro.herrera and there is no column named anything like
> herrara in the RTE named alvaro, but there is some OTHER RTE that
> contains a column with a name that is only a small Levenshtein
> distance away from herrera, like roberto.correra.  The questions are:

> 1. Should we EVER give a you-might-have-meant hint in a case like this?
> 2. If so, does it matter whether the RTE name is just a bit different
> from the actual RTE or whether it's completely different?  In other
> words, might we skip the hint in the above case but give one for
> alvara.correra?

It would be astonishingly silly to not care about the RTE name's distance,
if you ask me.  This is supposed to detect typos, not thinkos.

I think there might be some value in a separate heuristic that, when
you typed foo.bar and that doesn't match but there is a baz.bar, suggests
that maybe you meant baz.bar, even if baz is not close typo-wise.  This
would be addressing the thinko case not the typo case, so the rules ought
to be quite different --- in particular I doubt that it'd be a good idea
to hint this way if the column names don't match exactly.  But in any
case the key point is that this is a different heuristic addressing a
different failure mode.  We should not try to make the
levenshtein-distance heuristic address that case.

So my two cents is that when considering a qualified name, this patch
should take levenshtein distance across the two components equally.
There's no good reason to suppose that typos will attack one name
component more (nor less) than the other.

regards, tom lane




Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Andres Freund
On 2014-12-15 11:30:40 -0500, Tom Lane wrote:
> Andres Freund  writes:
> > On 2014-12-15 10:15:30 -0500, Tom Lane wrote:
> >> The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
> >> contrib/test_decoding with
> >> TRAP: FailedAssertion("!(((bool)((relation)->rd_refcnt == 0)))", File: 
> >> "relcache.c", Line: 1981)
> 
> > Without catchup invalidations and/or an outside reference to a relation
> > that's normally not a problem because it won't get reloaded from the
> > caches at an inappropriate time while invisible. Until a few weeks ago
> > there was no regression test covering that case which is why these
> > crashes are only there now.
> 
> I've always thought that this whole idea of allowing the caches to contain
> stale information was a Rube Goldberg plan that was going to bite you on
> the ass eventually.

I guess we'll see. I don't think this specific case is that dramatic.
The complication here is that there's a reference to a relation from
outside logical decoding - that's not something actually likely to be
used at least by replication solutions. It's also impossible to hit from
the walsender interface.

We could just prohibit referencing relations like that... The SQL
interface primarily is used for regression tests and discovery of the
feature. At least as long as there's no way to do streaming from SQL, which
I can't really see how to do.

> This case isn't doing anything to increase my faith
> in it, and the proposed patch seems like just another kluge on top of
> a rickety stack.

Another possibility would be to replace
else if (!IsTransactionState())
by
else if (!IsTransactionState() || HistoricSnapshotActive())

as that's pretty much what we need - invalidations during decoding happen
either outside of a valid transaction state or when entries aren't
referenced anyway.

Btw, if we ever hit that code path it would Assert() out without logical
decoding as well - it's not allowed to Destroy() a relation that's still
referenced as that assert shows. It's an especially bad idea if the
reference is actually from outside a subxact and the error is caught. I
think we should just remove the Destroy() call - the normal refcount
and/or resowners will take care of deleting the entry later.

> I think the safest fix would be to defer catchup interrupt processing
> while you're in this mode.  You don't really want to be processing any
> remote sinval messages at all, I'd think.

Well, we need to do relmap, smgr and similar things. So I think that'd
be more complicated than we want.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] explain sortorder

2014-12-15 Thread Mike Blackwell
Initial review:

Patch applies cleanly to current head, although it appears to have
soft/hard tab and trailing space issues.

make check fails with the output below.  The expected collation clause is
not present.

--
-- Test explain feature: sort order
--
CREATE TABLE sortordertest (n1 char(1), n2 int4);
-- Insert values by which should be ordered
INSERT INTO sortordertest(n1, n2) VALUES ('d', 5), ('b', 3), ('a', 1),
('e', 2), ('c', 4);
-- Display sort order when explain analyze and verbose are true.
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM sortordertest ORDER BY n1
COLLATE "C" DESC, n2;
   QUERY PLAN

 Sort
   Output: n1, n2, ((n1)::character(1))
   Sort Key: sortordertest.n1, sortordertest.n2
   Sort Order:  ASC NULLS LAST,  ASC NULLS LAST
   ->  Seq Scan on public.sortordertest
 Output: n1, n2, n1
(6 rows)

DROP TABLE sortordertest;


__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com


Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Heikki Linnakangas

On 12/15/2014 05:22 PM, Alexander Korotkov wrote:
> On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane  wrote:
>
>> Alexander Korotkov  writes:
>>
>>> On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas <
>>> hlinnakan...@vmware.com> wrote:
>>>
>>>> Right. I also looked at it briefly, but I wasn't sure if we really want
>>>> it. AFAICT, no-one has actually asked for that operator, it was written
>>>> only to be an example of an operator that would benefit from the
>>>> knn-gist with recheck patch.
>>>
>>> Lack of recheck is major limitation of KNN-GiST now. People are not
>>> asking for that because they don't know what is needed to implement
>>> exact KNN for PostGIS. Now they have to invent kluges like this:
>>> [ query using ORDER BY ST_Distance ]
>>
>> It's not apparent to me that the proposed operator is a replacement for
>> ST_Distance.  The underlying data in an example like this won't be either
>> points or polygons, it'll be PostGIS datatypes.
>>
>> In short, I believe that PostGIS could use what you're talking about,
>> but I agree with Heikki's objection that nobody has asked for this
>> particular operator.
>
> "polygon <-> point" is for sure not ST_Distance replacement. I was giving
> this argument about KNN-GiST with recheck itself. "polygon <-> point" is
> needed just as in-core example of KNN-GiST with recheck.

Right. I don't think point <-> polygon is too useful by itself, but we
need an example in core that could make use KNN-GiST recheck patch. We
can't write a regression test for it otherwise, for starters.

Actually, we probably could've used the circle <-> polygon for that just
as well...


- Heikki




Re: [HACKERS] Doing better at HINTing an appropriate column within errorMissingColumn()

2014-12-15 Thread Robert Haas
On Sun, Dec 14, 2014 at 8:24 PM, Michael Paquier
 wrote:
> On Tue, Dec 9, 2014 at 2:52 AM, Peter Geoghegan  wrote:
>> On Mon, Dec 8, 2014 at 9:43 AM, Peter Geoghegan  wrote:
>>> I think it's very possible that the wrong alias may be provided by the
>>> user, and that we should consider that when providing a hint.
>>
>> Note that the existing mechanism (the mechanism that I'm trying to
>> improve) only ever shows this error message:
>>
>> "There is a column named \"%s\" in table \"%s\", but it cannot be
>> referenced from this part of the query."
>>
>> I think it's pretty clear that this general class of user error is common.
> Moving this patch to CF 2014-12 as work is still going on, note that
> it is currently marked with Robert as reviewer and that its current
> status is "Needs review".

The status here is more like "waiting around to see if anyone else has
an opinion".  The issue is what should happen when you enter qualified
name like alvaro.herrera and there is no column named anything like
herrara in the RTE named alvaro, but there is some OTHER RTE that
contains a column with a name that is only a small Levenshtein
distance away from herrera, like roberto.correra.  The questions are:

1. Should we EVER give a you-might-have-meant hint in a case like this?
2. If so, does it matter whether the RTE name is just a bit different
from the actual RTE or whether it's completely different?  In other
words, might we skip the hint in the above case but give one for
alvara.correra?

My current feeling is that we should answer #1 "no", but Peter prefers
to answer it "yes".  My further feeling is that if we do decide to say
"yes" to #1, then I would answer #2 as "yes" also, but Peter would
answer it "no", assigning a fixed penalty for a mismatched RTE rather
than one that varies by the Levenshtein distance between the RTEs.

If no one else expresses an opinion, I'm going to insist on doing it
my way, but I'm happy to have other people weigh in.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Role Attribute Bitmask Catalog Representation

2014-12-15 Thread Adam Brightwell
All,

Overall, I'm pretty happy with the patch and would suggest moving on to
> writing up the documentation changes to go along with the code changes.
> I'll continue to play around with it but it all seems pretty clean to
> me and will allow us to easily add the additional role attributes being
> discussed.
>

I have attached an updated patch with initial documentation changes for
review.

Thanks,
Adam

-- 
Adam Brightwell - adam.brightw...@crunchydatasolutions.com
Database Engineer - www.crunchydatasolutions.com
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
new file mode 100644
index 9ceb96b..b0b4fca
*** a/doc/src/sgml/catalogs.sgml
--- b/doc/src/sgml/catalogs.sgml
***
*** 1391,1441 
   
  
   
!   rolsuper
!   bool
Role has superuser privileges
   
  
   
!   rolinherit
!   bool
!   Role automatically inherits privileges of roles it is a
!member of
   
  
   
!   rolcreaterole
!   bool
Role can create more roles
   
  
   
!   rolcreatedb
!   bool
Role can create databases
   
  
   
!   rolcatupdate
!   bool

!Role can update system catalogs directly.  (Even a superuser cannot do
!this unless this column is true)

   
  
   
!   rolcanlogin
!   bool

!Role can log in. That is, this role can be given as the initial
!session authorization identifier

   
  
   
!   rolreplication
!   bool

 Role is a replication role. That is, this role can initiate streaming
 replication (see ) and set/unset
--- 1391,1493 
   
  
   
!   rolattr
!   bigint
!   
!Role attributes; see  for details
!   
!  
! 
!  
!   rolconnlimit
!   int4
!   
!For roles that can log in, this sets maximum number of concurrent
!connections this role can make.  -1 means no limit.
!   
!  
! 
!  
!   rolpassword
!   text
!   
!Password (possibly encrypted); null if none.  If the password
!is encrypted, this column will begin with the string md5
!followed by a 32-character hexadecimal MD5 hash.  The MD5 hash
!will be of the user's password concatenated to their user name.
!For example, if user joe has password xyzzy,
!PostgreSQL will store the md5 hash of
!xyzzyjoe.  A password that does not follow that
!format is assumed to be unencrypted.
!   
!  
! 
!  
!   rolvaliduntil
!   timestamptz
!   Password expiry time (only used for password authentication);
!null if no expiration
!  
! 
!
!   
! 
!   
!rolattr bitmap positions
! 
!
! 
!  
!   Position
!   Attribute
!   Description
!  
! 
! 
! 
!  
!   0
!   Superuser
Role has superuser privileges
   
  
   
!   1
!   Inherit
!   Role automatically inherits privileges of roles it is a member of
   
  
   
!   2
!   Create Role
Role can create more roles
   
  
   
!   3
!   Create DB
Role can create databases
   
  
   
!   4
!   Catalog Update

!Role can update system catalogs directly.  (Even a superuser cannot do this unless this column is true)

   
  
   
!   5
!   Can Login

!Role can log in. That is, this role can be given as the initial session authorization identifier

   
  
   
!   6
!   Replication

 Role is a replication role. That is, this role can initiate streaming
 replication (see ) and set/unset
***
*** 1445,1479 
   
  
   
!   rolconnlimit
!   int4
!   
!For roles that can log in, this sets maximum number of concurrent
!connections this role can make.  -1 means no limit.
!   
!  
! 
!  
!   rolpassword
!   text

!Password (possibly encrypted); null if none.  If the password
!is encrypted, this column will begin with the string md5
!followed by a 32-character hexadecimal MD5 hash.  The MD5 hash
!will be of the user's password concatenated to their user name.
!For example, if user joe has password xyzzy,
!PostgreSQL will store the md5 hash of
!xyzzyjoe.  A password that does not follow that
!format is assumed to be unencrypted.

   
  
-  
-   rolvaliduntil
-   timestamptz
-   Password expiry time (only used for password authentication);
-null if no expiration
-  
  
 

--- 1497,1509 
   
  
   
!   7
!   By-pass Row Level Security

!Role can by-pass row level security policies when row_security is set off

   
  
  
   

Re: [HACKERS] [PATCH] HINT: pg_hba.conf changed since last config reload

2014-12-15 Thread Alex Shulgin
Peter Eisentraut  writes:

> On 10/16/14 11:34 PM, Craig Ringer wrote:
>> psql: FATAL:  Peer authentication failed for user "fred"
>> HINT:  See the server error log for additional information.
>
> I think this is wrong for many reasons.
>
> I have never seen an authentication system that responds with, hey, what
> you just did didn't get you in, but the administrators are currently in
> the process of making a configuration change, so why don't you check
> that out.
>
> We don't know whether the user has access to the server log.  They
> probably don't.  Also, it is vastly more likely that the user really
> doesn't have access in the way they chose, so throwing in irrelevant
> hints will be distracting.
>
> Moreover, it will be confusing to regular users if this message
> sometimes shows up and sometimes doesn't, independent of their own state
> and actions.
>
> Finally, the fact that a configuration change is in progress is
> privileged information.  Unprivileged users can deduce from the presence
> of this message that administrators are doing something, and possibly
> that they have done something wrong.
>
> I think it's fine to log a message in the server log if the pg_hba.conf
> file needs reloading.  But the client shouldn't know about this at all.

These are all valid concerns IMHO.

Attached is the modified version of the original patch by Craig,
addressing the handling of the new hint_log error data field and
removing the client-side HINT.

I'm also moving this to the current CF.

--
Alex

From 702e0ac31f3d8023ad8c064d90bdf5a8441fddea Mon Sep 17 00:00:00 2001
From: Craig Ringer 
Date: Fri, 17 Oct 2014 11:18:18 +0800
Subject: [PATCH 1/2] Add an errhint_log, akin to errdetail_log

This allows a different HINT to be sent to the server error log
and to the client, which will be useful where there's security
sensitive information that's more appropriate for a HINT than
a DETAIL message.
---
 src/backend/utils/error/elog.c | 59 --
 src/include/utils/elog.h   |  7 +
 2 files changed, 53 insertions(+), 13 deletions(-)

diff --git a/src/backend/utils/error/elog.c b/src/backend/utils/error/elog.c
new file mode 100644
index 2316464..da55207
*** a/src/backend/utils/error/elog.c
--- b/src/backend/utils/error/elog.c
*** errfinish(int dummy,...)
*** 503,508 
--- 503,510 
  		pfree(edata->detail_log);
  	if (edata->hint)
  		pfree(edata->hint);
+ 	if (edata->hint_log)
+ 		pfree(edata->hint_log);
  	if (edata->context)
  		pfree(edata->context);
  	if (edata->schema_name)
*** errhint(const char *fmt,...)
*** 1015,1020 
--- 1017,1042 
  	return 0;	/* return value does not matter */
  }
  
+ /*
+  * errhint_log --- add a hint_log error message text to the current error
+  */
+ int
+ errhint_log(const char *fmt,...)
+ {
+ 	ErrorData  *edata = &errordata[errordata_stack_depth];
+ 	MemoryContext oldcontext;
+ 
+ 	recursion_depth++;
+ 	CHECK_STACK_DEPTH();
+ 	oldcontext = MemoryContextSwitchTo(edata->assoc_context);
+ 
+ 	EVALUATE_MESSAGE(edata->domain, hint_log, false, true);
+ 
+ 	MemoryContextSwitchTo(oldcontext);
+ 	recursion_depth--;
+ 	return 0;	/* return value does not matter */
+ }
+ 
  
  /*
   * errcontext_msg --- add a context error message text to the current error
*** CopyErrorData(void)
*** 1498,1503 
--- 1520,1527 
  		newedata->detail_log = pstrdup(newedata->detail_log);
  	if (newedata->hint)
  		newedata->hint = pstrdup(newedata->hint);
+ 	if (newedata->hint_log)
+ 		newedata->hint_log = pstrdup(newedata->hint_log);
  	if (newedata->context)
  		newedata->context = pstrdup(newedata->context);
  	if (newedata->schema_name)
*** FreeErrorData(ErrorData *edata)
*** 1536,1541 
--- 1560,1567 
  		pfree(edata->detail_log);
  	if (edata->hint)
  		pfree(edata->hint);
+ 	if (edata->hint_log)
+ 		pfree(edata->hint_log);
  	if (edata->context)
  		pfree(edata->context);
  	if (edata->schema_name)
*** ThrowErrorData(ErrorData *edata)
*** 1607,1612 
--- 1633,1640 
  		newedata->detail_log = pstrdup(edata->detail_log);
  	if (edata->hint)
  		newedata->hint = pstrdup(edata->hint);
+ 	if (edata->hint_log)
+ 		newedata->hint_log = pstrdup(edata->hint_log);
  	if (edata->context)
  		newedata->context = pstrdup(edata->context);
  	if (edata->schema_name)
*** ReThrowError(ErrorData *edata)
*** 1669,1674 
--- 1697,1704 
  		newedata->detail_log = pstrdup(newedata->detail_log);
  	if (newedata->hint)
  		newedata->hint = pstrdup(newedata->hint);
+ 	if (newedata->hint_log)
+ 		newedata->hint_log = pstrdup(newedata->hint_log);
  	if (newedata->context)
  		newedata->context = pstrdup(newedata->context);
  	if (newedata->schema_name)
*** write_csvlog(ErrorData *edata)
*** 2710,2717 
  		appendCSVLiteral(&buf, edata->detail);
  	appendStringInfoChar(&buf, ',');
  
! 	/* errhint */
! 	appendCSVLiteral(&buf, edata->hint);
  	appendString

Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Craig Ringer
On 12/16/2014 12:31 AM, Robert Haas wrote:
> On Mon, Dec 15, 2014 at 11:28 AM, Craig Ringer  wrote:
>>> >> If that's not good for some reason, my second choice is adding a
>>> >> BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
>>> >> less cumbersome than your other proposal.
>> >
>> > That'd be my preference.
> OK, let's do that, then.

Right-o.

I had an earlier patch that added unregistration on exit(0) and also
added a flag like this. Only the first part got committed. I'll
resurrect it and rebase it on top of master.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Robert Haas
On Mon, Dec 15, 2014 at 11:28 AM, Craig Ringer  wrote:
>> If that's not good for some reason, my second choice is adding a
>> BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
>> less cumbersome than your other proposal.
>
> That'd be my preference.

OK, let's do that, then.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Tom Lane
Andres Freund  writes:
> On 2014-12-15 10:15:30 -0500, Tom Lane wrote:
>> The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
>> contrib/test_decoding with
>> TRAP: FailedAssertion("!(((bool)((relation)->rd_refcnt == 0)))", File: "relcache.c", Line: 1981)

> Without catchup invalidations and/or an outside reference to a relation
> that's normally not a problem because it won't get reloaded from the
> caches at an inappropriate time while invisible. Until a few weeks ago
> there was no regression test covering that case which is why these
> crashes are only there now.

I've always thought that this whole idea of allowing the caches to contain
stale information was a Rube Goldberg plan that was going to bite you on
the ass eventually.  This case isn't doing anything to increase my faith
in it, and the proposed patch seems like just another kluge on top of
a rickety stack.

I think the safest fix would be to defer catchup interrupt processing
while you're in this mode.  You don't really want to be processing any
remote sinval messages at all, I'd think.

regards, tom lane




Re: [HACKERS] pgbench -f and vacuum

2014-12-15 Thread Robert Haas
On Sat, Dec 13, 2014 at 10:39 AM, Tom Lane  wrote:
> Tatsuo Ishii  writes:
>> Currently pgbench -f (run custom script) executes vacuum against
>> pgbench_* tables before starting benchmarking if -n (or --no-vacuum)
>> is not specified. If those tables do not exist, pgbench fails. To
>> prevent this, -n must be specified. For me this behavior seems insane
>> because "-f" does not necessarily presuppose the existence of the
>> pgbench_* tables.  Attached patch prevents pgbench from exiting even
>> if those tables do not exist.
>
> I don't particularly care for this approach.  I think if we want to
> do something about this, we should just make -f imply -n.  Although
> really, given the lack of complaints so far, it seems like people
> manage to deal with this state of affairs just fine.  Do we really
> need to do anything?

I would vote for changing nothing.  If we make -f imply -n, then what
happens if you have a script which is a slight variant of the default
script and you *don't* want -n?  Then we'll have to add yet another
pgbench option to select the default behavior, and I don't know that
the marginal usability gain of getting to leave out -n sometimes would
be enough to justify having to remember another switch.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Craig Ringer
On 12/16/2014 12:12 AM, Robert Haas wrote:
> On Sat, Dec 13, 2014 at 4:13 AM, Craig Ringer  wrote:
>> While working on BDR, I've run into a situation that I think highlights
>> a limitation of the dynamic bgworker API that should be fixed.
>> Specifically, when the postmaster crashes and restarts shared memory
>> gets cleared but dynamic bgworkers don't get unregistered, and that's a
>> mess.
> 
> I've noticed this as well.  What I was thinking of proposing is that
> we change things so that a BGW_NEVER_RESTART worker is unregistered
> when a crash-and-restart cycle happens, but workers with any other
> restart time are retained.

Personally I need workers that get restarted, but are discarded on
crash. They access shared memory, so when shmem is cleared I need them
to be discarded too, but otherwise I wish them to be persistent
until/unless they're intentionally unregistered.

If I have to use BGW_NO_RESTART then I land up having to implement
monitoring of worker status and manual re-registration in a supervisor
static worker. Which is a pain, since it's duplicating work the
postmaster would otherwise just be doing for me.

I'd really rather a separate flag.

> Maybe it would be best to make the per-database workers BGW_NO_RESTART
> and have the static bgworker, rather than the postmaster, be
> responsible for starting them.  Then the fix mentioned above would
> suffice.

Yeah... it would, but it involves a bunch of code that duplicates
process management the postmaster already does.

More importantly, if the supervisor worker crashes / is killed it loses
its handles to the other workers and the signals they send no longer go
to the right worker. So it's not robust.

> If that's not good for some reason, my second choice is adding a
> BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
> less cumbersome than your other proposal.

That'd be my preference.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Andres Freund
On 2014-12-15 11:21:03 -0500, Bruce Momjian wrote:
> On Sun, Dec 14, 2014 at 05:21:06PM +, Mark Cave-Ayland wrote:
> > I should add here that the QEMU folk do tend to go to great lengths to
> > preserve bisectability; often intermediate compatibility APIs are
> > introduced early in the patchset and then removed at the very end when
> > the final feature is implemented.
> 
> I agree with Tom on this, and I want to point out that certain software
> > projects benefit more from modularized development than others, e.g.
> QEMU is an interface library and therefore probably does things in a
> more modular way than usual.  For example, they are probably not adding
> new SQL commands or configuration settings in the same way Postgres
> does.

I'm not following. What do you mean by 'interface library'? I'm pretty
sure qemu very regularly adds features including configuration
settings/parameters.

> It would be interesting to compare the directory span of a
> typical Postgres patch vs. a QEMU or Linux kernel one.

I don't believe this really is a question of the type of project. I
think it's more that especially the kernel has had to deal with similar
problems at a much larger scale. And the granular approach somewhat
works for them.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: Reducing lock strength of trigger and foreign key DDL

2014-12-15 Thread Robert Haas
I'm not sure about the rest of this but...

On Sat, Dec 13, 2014 at 3:45 PM, Andreas Karlsson  wrote:
> Is this patch
> worthwhile even without reducing the lock levels of the drop commands?

Yes!  It certainly makes more sense to reduce the lock levels where we
can do that relatively easily, and postpone work on related projects
that are harder, rather than waiting until it all seems to work before
doing anything at all.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Robert Haas
On Mon, Dec 15, 2014 at 3:40 AM, Shigeru Hanada
 wrote:
> I'm working on $SUBJECT and would like to get comments about the
> design.  Attached patch is for the design below.

I'm glad you are working on this.

> 1. Join source relations
> As described above, postgres_fdw (and most of SQL-based FDWs) needs to
> check that 1) all foreign tables in the join belong to a server, and
> 2) all foreign tables have same checkAsUser.
> In addition to that, I add an extra limitation that both inner/outer
> should be plain foreign tables, not a result of a foreign join.  This
> limitation makes the SQL generator simple.  Fundamentally it's possible to
> join even join relations, so N-way join is listed as an enhancement item
> below.

It seems pretty important to me that we have a way to push the entire
join nest down.  Being able to push down a 2-way join but not more
seems like quite a severe limitation.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Commitfest problems

2014-12-15 Thread Bruce Momjian
On Sun, Dec 14, 2014 at 05:21:06PM +, Mark Cave-Ayland wrote:
> I should add here that the QEMU folk do tend to go to great lengths to
> preserve bisectability; often intermediate compatibility APIs are
> introduced early in the patchset and then removed at the very end when
> the final feature is implemented.

I agree with Tom on this, and I want to point out that certain software
projects benefit more from modularized development than others, e.g.
QEMU is an interface library and therefore probably does things in a
more modular way than usual.  For example, they are probably not adding
new SQL commands or configuration settings in the same way Postgres
does.  It would be interesting to compare the directory span of a
typical Postgres patch vs. a QEMU or Linux kernel one.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Making BackgroundWorkerHandle a complete type or offering a worker enumeration API?

2014-12-15 Thread Robert Haas
On Sat, Dec 13, 2014 at 4:13 AM, Craig Ringer  wrote:
> While working on BDR, I've run into a situation that I think highlights
> a limitation of the dynamic bgworker API that should be fixed.
> Specifically, when the postmaster crashes and restarts shared memory
> gets cleared but dynamic bgworkers don't get unregistered, and that's a
> mess.

I've noticed this as well.  What I was thinking of proposing is that
we change things so that a BGW_NEVER_RESTART worker is unregistered
when a crash-and-restart cycle happens, but workers with any other
restart time are retained. What's happened to me a few times is that
the database crashes after registering BGW_NO_RESTART workers but
before the postmaster launches them; the postmaster fires them up
after completing the crash-and-restart cycle, but by then the dynamic
shared memory segments they are supposed to map are gone, so they just
start up uselessly and then die.

> The latest BDR extension has a single static bgworker registered at
> shared_preload_libraries time. This worker launches one dynamic bgworker
> per database. Those dynamic bgworkers are in turn responsible for
> launching workers for each connection to another node in the mesh
> topology (and for various other tasks). They communicate via shared
> memory blocks, where each worker has an offset into a shared memory array.
>
> That's all fine until the postmaster crashes and restarts, zeroing
> shared memory. The dynamic background workers are killed by the
> postmaster, but *not* unregistered. Workers only get unregistered if
> they exit with exit code 0, which isn't the case when they're killed, or
> when their restart interval is BGW_NO_RESTART .

Maybe it would be best to make the per-database workers BGW_NO_RESTART
and have the static bgworker, rather than the postmaster, be
responsible for starting them.  Then the fix mentioned above would
suffice.

If that's not good for some reason, my second choice is adding a
BGWORKER_UNREGISTER_AFTER_CRASH flag.  That seems much simpler and
less cumbersome than your other proposal.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Bruce Momjian
On Mon, Dec 15, 2014 at 10:19:19AM -0500, Peter Eisentraut wrote:
> > Overall, I feel that this isn't really worth the trouble. We use
> > fractions consistently now, so there isn't much room for confusion over
> > what the current values mean. Using a percentage might be more familiar
> > for some people, but OTOH you'll have to get used to the fractions
> > anyway, unless we change the default output format too, and I'm not in
> > favour of doing that. I suggest that we just drop this, and remove the
> > TODO item.
> 
> Agreed.
> 
> The patch is sound as far as it goes (I might be inclined to accept
> whitespace between number and % sign), but given the above points and
> the original reason for it having been eliminated, I'm inclined to drop it.

TODO item removed.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Andres Freund
Hi,

On 2014-12-15 10:15:30 -0500, Tom Lane wrote:
> The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
> contrib/test_decoding with
>
> TRAP: FailedAssertion("!(((bool)((relation)->rd_refcnt == 0)))", File: "relcache.c", Line: 1981)
>
> for example here:
> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-14%2005%3A50%3A09
> and here:
> http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-13%2021%3A26%3A05

Yes, I've been trying to debug that. I've finally managed to reproduce
locally. Unfortunately it's not directly reproducible on my laptop...

A fair amount of tinkering later I've found out that it's not actually
CLOBBER_CACHE_ALWAYS itself that triggers the problem but catchup
interrupts. It triggers with CLOBBER_CACHE_ALWAYS because that happens
to make parts of the code slow enough to not reach an ordinary
AcceptInvalidationMessages() in time.

The problem is that in the added regression test there can be a
situation where there's a relcache entry that's *not* currently visible
but still referenced. Consider:

SELECT 'init' FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
CREATE TABLE somechange(id serial primary key);
INSERT INTO changeresult
SELECT data FROM pg_logical_slot_get_changes(...);

While get_changes stores its data into a tuplestore there can be a moment
where 'changeresult' has a refcnt > 0 due to the INSERT, but is
invisible because we're using a historic snapshot from when changeresult
doesn't exist.

Without catchup invalidations and/or an outside reference to a relation
that's normally not a problem because it won't get reloaded from the
caches at an inappropriate time while invisible. Until a few weeks ago
there was no regression test covering that case which is why these
crashes are only there now.

It triggers via:
RelationCacheInvalidate() ->
  RelationClearRelation(relation, true) ->
/* Build temporary entry, but don't link it into hashtable */
newrel = RelationBuildDesc(save_relid, false);
if (newrel == NULL)
{
/* Should only get here if relation was deleted */
RelationCacheDelete(relation);
RelationDestroyRelation(relation, false);
elog(ERROR, "relation %u deleted while still in use", save_relid);
}

That path is only hit while refcnt > 0

RelationDestroyRelation() has
Assert(RelationHasReferenceCountZero(relation));

So that path doesn't really work... Without assertions we'd "just" get a
transient ERROR. I think that path should generally lose the
RelationDestroyRelation() - if it's referenced that's surely not the
right thing to do.  I'm not actually convinced logical decoding is the
only way that assert can be reached.

Since logical decoding happens inside a subtransaction (when called via
SQL) and invalidates caches, one relatively straightforward way to fix
this would be to add something roughly akin to:
 Assert(!relation->rd_isvalid)
 /*
  * This should only happen when we're doing logical decoding where
  * it can happen when [above].
  */
 if (HistoricSnapshotActive())
 return;

There's no chance that such an entry could survive too long in an invalid
state (minus preexisting bugs independent of decoding) since we hit the
same paths that subtransaction abort hits.

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Join push-down support for foreign tables

2014-12-15 Thread Tom Lane
Shigeru Hanada  writes:
> I'm working on $SUBJECT and would like to get comments about the
> design.  Attached patch is for the design below.  Note that the patch
> requires Kaigai-san's custom foreign join patch[1]

For the record, I'm not particularly on-board with custom scan, and
even less so with custom join.  I don't want FDW features like this
depending on those kluges, especially not when you're still in need
of core-code changes (which really points up the inadequacy of those
concepts).

Also, please don't redefine struct NestPath like that.  That adds a
whole bunch of notational churn (and backpatch risk) for no value
that I can see.  It might've been better to do it like that in a
green field, but you're about twenty years too late to do it now.

regards, tom lane




Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Alexander Korotkov
On Mon, Dec 15, 2014 at 6:20 PM, Tom Lane  wrote:
>
> Alexander Korotkov  writes:
> > On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas <hlinnakan...@vmware.com> wrote:
> >> Right. I also looked at it briefly, but I wasn't sure if we really want
> >> it. AFAICT, no-one has actually asked for that operator, it was written
> >> only to be an example of an operator that would benefit from the
> >> knn-gist with recheck patch.
>
> > Lack of recheck is a major limitation of KNN-GiST now. People are not
> > asking for that because they don't know what is needed to implement
> > exact KNN for PostGIS. Now they have to invent kluges like this:
> > [ query using ORDER BY ST_Distance ]
>
> It's not apparent to me that the proposed operator is a replacement for
> ST_Distance.  The underlying data in an example like this won't be either
> points or polygons, it'll be PostGIS datatypes.
>
> In short, I believe that PostGIS could use what you're talking about,
> but I agree with Heikki's objection that nobody has asked for this
> particular operator.
>

"polygon <-> point" is for sure not an ST_Distance replacement. I was making
this argument about KNN-GiST with recheck itself. "polygon <-> point" is
needed just as an in-core example of KNN-GiST with recheck.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] Commit fest 2014-12, let's begin!

2014-12-15 Thread Tom Lane
Alexander Korotkov  writes:
> On Mon, Dec 15, 2014 at 4:12 PM, Heikki Linnakangas  wrote:
>> Right. I also looked at it briefly, but I wasn't sure if we really want
>> it. AFAICT, no-one has actually asked for that operator, it was written
>> only to be an example of an operator that would benefit from the knn-gist
>> with recheck patch.

> Lack of recheck is a major limitation of KNN-GiST now. People are not asking
> for that because they don't know what is needed to implement exact KNN for
> PostGIS. Now they have to invent kluges like this:
> [ query using ORDER BY ST_Distance ]

It's not apparent to me that the proposed operator is a replacement for
ST_Distance.  The underlying data in an example like this won't be either
points or polygons, it'll be PostGIS datatypes.

In short, I believe that PostGIS could use what you're talking about,
but I agree with Heikki's objection that nobody has asked for this
particular operator.

regards, tom lane




Re: [HACKERS] Fractions in GUC variables

2014-12-15 Thread Peter Eisentraut
On 12/15/14 8:56 AM, Heikki Linnakangas wrote:
>> show cursor_tuple_fraction; --> 10%
>> set cursor_tuple_fraction = .15; --> 15%
>> set cursor_tuple_fraction = '33%'; --> 33%
>>
>> I tagged four configuration variables to display as percents.
> 
> I'm not sure I agree that percentages are better than fractions. I'd
> vote a -0.5 for changing the default display format. There isn't much
> harm in accepting them as a secondary format, though.

Agreed with not changing the default output.  Everyone is used to it,
and there is no reason why the percent output would be intrinsically better.

> We should only accept percentages for settings where that makes sense.
> It is sensible for most of the PGC_REAL settings, but not so much for
> geqo_selection_bias or seed, for example.

Right.  But then this feature would get more complicated, as opposed to
supposedly making things simpler.

> Overall, I feel that this isn't really worth the trouble. We use
> fractions consistently now, so there isn't much room for confusion over
> what the current values mean. Using a percentage might be more familiar
> for some people, but OTOH you'll have to get used to the fractions
> anyway, unless we change the default output format too, and I'm not in
> favour of doing that. I suggest that we just drop this, and remove the
> TODO item.

Agreed.

The patch is sound as far as it goes (I might be inclined to accept
whitespace between number and % sign), but given the above points and
the original reason for it having been eliminated, I'm inclined to drop it.
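The accepted-input rule under discussion — a plain fraction, or a number with an optional '%' suffix, possibly with whitespace before the sign — amounts to roughly the following (a standalone sketch with a hypothetical function name, not the patch's actual code):

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Parse a GUC real that may carry a '%' suffix: "0.15" -> 0.15,
 * "33%" -> 0.33, and "33 %" -> 0.33 (whitespace between the number
 * and the % sign accepted, per the review comment above).
 * Returns false on trailing garbage.
 */
static bool
parse_real_with_percent(const char *s, double *result)
{
    char       *end;
    double      val = strtod(s, &end);

    if (end == s)
        return false;           /* no number at all */
    while (isspace((unsigned char) *end))
        end++;
    if (*end == '%')
    {
        val /= 100.0;           /* percentage -> fraction */
        end++;
        while (isspace((unsigned char) *end))
            end++;
    }
    if (*end != '\0')
        return false;           /* trailing garbage */
    *result = val;
    return true;
}
```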




Re: [HACKERS] KNN-GiST with recheck

2014-12-15 Thread Heikki Linnakangas

On 08/03/2014 04:48 PM, Emre Hasegeli wrote:

1. This patch introduces a new "polygon <-> point" operator. That seems
useful on its own, with or without this patch.


Yeah, but exact-knn can't come without at least one implementation. But it
would be better to come in a separate patch.


I tried to split them.  Separated patches are attached.  I changed
the order of the arguments as point <-> polygon, because point was
the first one on all the others.  Its commutator was required for
the index, so I added it on the second patch.  I also added tests
for the operator.  I think it is ready for committer as a separate
patch.  We can add it to the open CommitFest.


Ok, committed this part now with minor changes. The implementation was 
copy-pasted from circle <-> polygon, so I put the common logic into a 
dist_ppoly_internal function, and called that in both dist_cpoly and 
dist_ppoly.


I was surprised that there were no documentation changes in the patch, 
but looking at the docs, we just list the geometric operators without 
explaining what the argument types are. That's not very comprehensive; it 
might be good to expand the docs on that, but it's not this patch's fault.


- Heikki





[HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2014-12-15 Thread Tom Lane
The CLOBBER_CACHE_ALWAYS buildfarm members occasionally fail in
contrib/test_decoding with

TRAP: FailedAssertion("!(((bool)((relation)->rd_refcnt == 0)))", File: "relcache.c", Line: 1981)

for example here:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-14%2005%3A50%3A09
and here:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=markhor&dt=2014-12-13%2021%3A26%3A05

regards, tom lane




Re: [HACKERS] speedup tidbitmap patch: hash BlockNumber

2014-12-15 Thread Tom Lane
Heikki Linnakangas  writes:
> On 10/22/2014 04:14 PM, Teodor Sigaev wrote:
>> Just replacing tag_hash in tidbitmap, which uses hash_any, with a direct
>> call of hash_uint32 saves ~5% of execution time.

> I'd suggest putting the new function in hashfn.c, where we already have 
> similar functions, string_hash, tag_hash, oid_hash and bitmap_hash. And 
> name it "blocknumber_hash", for consistency.

I think this suggestion is misguided, and the patch itself needs
rethinking.  Instead of doing this, let's hack dynahash.c itself
to substitute a shim like this when it's told function == tag_hash and
keysize == sizeof(uint32).  Then we can remove any similar shims that
already exist, and possibly end up with a net savings of code rather than
adding more.
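A minimal sketch of that substitution idea (toy hash functions and hypothetical names — the real change would live inside dynahash.c's hash_create(), using PostgreSQL's actual tag_hash and hash_uint32):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t (*HashFunc) (const void *key, size_t keysize);

/* Stand-in for the hash_any()-based tag_hash: hashes arbitrary bytes.
 * (FNV-1a here purely as a toy; PostgreSQL uses hash_any.) */
static uint32_t
tag_hash(const void *key, size_t keysize)
{
    const unsigned char *p = key;
    uint32_t    h = 2166136261u;

    while (keysize-- > 0)
        h = (h ^ *p++) * 16777619u;
    return h;
}

/* Specialized shim for 4-byte keys: skips the general-purpose loop. */
static uint32_t
uint32_hash(const void *key, size_t keysize)
{
    uint32_t    k;

    assert(keysize == sizeof(uint32_t));
    memcpy(&k, key, sizeof(k));
    k ^= k >> 16;               /* cheap integer mixing, toy version */
    k *= 0x85ebca6bu;
    k ^= k >> 13;
    return k;
}

/*
 * The substitution described above: when the caller asked for tag_hash
 * on a 4-byte key, silently install the specialized function instead,
 * so individual callers need no per-site shims.
 */
static HashFunc
choose_hash(HashFunc requested, size_t keysize)
{
    if (requested == tag_hash && keysize == sizeof(uint32_t))
        return uint32_hash;
    return requested;
}
```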

regards, tom lane




Re: [HACKERS] Minor binary-search int overflow in timezone code

2014-12-15 Thread Tom Lane
Christoph Berg  writes:
> a fellow Debian Developer found a minor glitch in
> src/timezone/localtime.c, where binary search is used. Now I don't
> think there is an actual problem (unless there's > 2^30 timezones),
> but it would at least make sense to mark the code as okayish so that
> people running code scanners won't stumble over the issue again.

> The attached patch added comments to address this.

This is totally silly.  The timecnt couldn't be anywhere near INT_MAX (in
fact, it is not allowed to exceed TZ_MAX_TIMES, which is currently just
1200).  And there are bunches of other instances of similar code in PG;
shall we put equally wishy-washy comments on them all?
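For readers following along, the midpoint idiom that such code scanners expect is the following (a generic illustration, not code from localtime.c or the patch under discussion):

```c
#include <assert.h>

/*
 * Overflow-safe binary search over a sorted int array.
 * "lo + (hi - lo) / 2" cannot overflow even when lo and hi are both
 * near INT_MAX, unlike the naive "(lo + hi) / 2" that static analyzers
 * flag.  Returns the index of key, or -1 if not found.
 */
static int
bsearch_int(const int *a, int n, int key)
{
    int         lo = 0,
                hi = n - 1;

    while (lo <= hi)
    {
        int         mid = lo + (hi - lo) / 2;   /* never overflows */

        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}
```

As Tom notes, with timecnt capped at TZ_MAX_TIMES the naive form is safe in practice; the rewrite only matters when the bounds can genuinely approach INT_MAX.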

regards, tom lane




Re: [HACKERS] Custom timestamp format in logs

2014-12-15 Thread Heikki Linnakangas

On 12/14/2014 06:36 PM, Magnus Hagander wrote:

A separate GUC seems kind of weird. Wouldn't it be better with something
like %(format)t or such in the log_line_prefix itself in that case? That
could also be expanded to other parameters, should we need them?


%t isn't the only thing that prints timestamps in the logs, e.g:

LOG:  database system was shut down at 2014-12-15 15:30:15 EET
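A %(format)t-style escape would boil down to running a configurable strftime pattern over the log timestamp, roughly like this (a hypothetical sketch, not code from any proposed patch):

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/*
 * Format a log timestamp with a configurable strftime pattern --
 * the kind of helper a %(format)t log_line_prefix escape would need.
 * Uses UTC here for determinism; the server would use log_timezone.
 */
static size_t
format_log_time(char *buf, size_t buflen, const char *fmt, time_t t)
{
    struct tm   tmbuf;

    gmtime_r(&t, &tmbuf);
    return strftime(buf, buflen, fmt, &tmbuf);
}
```

Heikki's point stands either way: messages like the "database system was shut down at ..." line embed timestamps outside log_line_prefix, so a prefix-only escape would not cover them.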

- Heikki




Re: [HACKERS] [PATCH] add ssl_protocols configuration option

2014-12-15 Thread Alex Shulgin
Michael Paquier  writes:
>>>
>>> Perhaps ssloptions.[ch], unless you plan to add non-option-related code
>>> there later?
>>
>> I don't think anything else than common options-related code fits in
>> there, so ssloptions.c makes sense to me.
>>
>>> BTW, there is no Regent code in your openssl.c, so the copyright
>>> statement is incorrect.
>>
>> Good catch, I just blindly copied that from some other file.
> There have been arguments in favor and against this patch... But
> marking it as returned with feedback because of a lack of activity in
> the last couple of weeks. Feel free to update if you think that's not
> correct, and please move this patch to commit fest 2014-12.

Attached is a new version that addresses the earlier feedback: renamed
the added *.[ch] files and removed incorrect copyright line.

I'm moving this to the current CF.

--
Alex

From 18388300f9ed34fa47c66b8a2da098aeb19a7f71 Mon Sep 17 00:00:00 2001
From: Alex Shulgin 
Date: Wed, 19 Nov 2014 19:49:29 +0300
Subject: [PATCH] DRAFT: ssl_protocols GUC

---
 doc/src/sgml/config.sgml  |  29 ++
 doc/src/sgml/libpq.sgml   |  25 ++
 src/backend/libpq/Makefile|   2 +-
 src/backend/libpq/be-secure-openssl.c |  29 --
 src/backend/libpq/be-secure.c |   3 +-
 src/backend/libpq/ssloptions.c| 123 ++
 src/backend/utils/misc/guc.c  |  15 
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/include/libpq/libpq.h |   1 +
 src/include/libpq/ssloptions.h|  48 ++
 src/interfaces/libpq/Makefile |   8 +-
 src/interfaces/libpq/fe-connect.c |   4 +
 src/interfaces/libpq/fe-secure-openssl.c  |  18 +++-
 src/interfaces/libpq/libpq-int.h  |   1 +
 14 files changed, 297 insertions(+), 10 deletions(-)
 create mode 100644 src/backend/libpq/ssloptions.c
 create mode 100644 src/include/libpq/ssloptions.h

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
new file mode 100644
index 48ae3e4..699f0f9
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
*** include_dir 'conf.d'
*** 1055,1060 
--- 1055,1089 

   
  
+  
+   ssl_protocols (string)
+   
+ssl_protocols configuration parameter
+   
+   
+   
+
+ Specifies a colon-separated list of SSL protocols that are
+ allowed to be used on secure connections. Each entry in the list can
+ be prefixed by a + (add to the current list)
+ or - (remove from the current list). If no prefix is used,
+ the list is replaced with the specified protocol.
+
+
+ The full list of supported protocols can be found in
+ the openssl manual page.  In addition to the names of
+ individual protocols, the following keywords can be
+ used: ALL (all protocols supported by the underlying
+ crypto library), SSL (all supported versions
+ of SSL) and TLS (all supported versions
+ of TLS).
+
+
+ The default is ALL:-SSL.
+
+   
+  
+ 
   
ssl_ciphers (string)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
new file mode 100644
index d829a4b..62ee0b4
*** a/doc/src/sgml/libpq.sgml
--- b/doc/src/sgml/libpq.sgml
*** postgresql://%2Fvar%2Flib%2Fpostgresql/d
*** 1230,1235 
--- 1230,1250 

   
  
+  
+   sslprotocols
+   
+
+ Specifies a colon-separated list of SSL protocols that are
+ allowed to be used on secure connections.
+ See  server configuration parameter
+ for format.
+
+
+ Defaults to ALL:-SSL.
+
+   
+  
+ 
   
sslcompression

*** myEventProc(PGEventId evtId, void *evtIn
*** 6693,6698 
--- 6708,6723 
   
  
  
+ 
+  
+   
+PGSSLPROTOCOLS
+   
+   PGSSLPROTOCOLS behaves the same as the  connection parameter.
+  
+ 
+ 
  
   

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
new file mode 100644
index 09410c4..62abd46
*** a/src/backend/libpq/Makefile
--- b/src/backend/libpq/Makefile
*** OBJS = be-fsstubs.o be-secure.o auth.o c
*** 18,24 
 pqformat.o pqmq.o pqsignal.o
  
  ifeq ($(with_openssl),yes)
! OBJS += be-secure-openssl.o
  endif
  
  include $(top_srcdir)/src/backend/common.mk
--- 18,24 
 pqformat.o pqmq.o pqsignal.o
  
  ifeq ($(with_openssl),yes)
! OBJS += be-secure-openssl.o ssloptions.o
  endif
  
  include $(top_srcdir)/src/backend/common.mk
diff --git a/src/backend/libpq/be-secure-openssl.c b/src/backend/libpq/be-secure-openssl.c
new file mode 100644
index b05364c..ad7e0f6
*** a/src/backend/libpq/be-secure-openssl.c
--- b/src/backend/li
