Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-25 Thread Mark Kirkwood

On 25/06/13 15:56, Tom Lane wrote:

Mark Kirkwood mark.kirkw...@catalyst.net.nz writes:

One of the reasons for fewer reviewers than submitters, is that it is a
fundamentally more difficult job. I've submitted a few patches in a few
different areas over the years - however if I grab a patch on the queue
that is not in exactly one of the areas I know about, I'll struggle to
do a good quality review.



Now some might say "any review is better than no review"... I don't
think so - one of my patches a while back was reviewed by someone who didn't
really know the context that well, and it made the whole process grind to a
standstill until a more experienced reviewer took over. I'm quite wary
of doing the same myself - anti-help is not the answer!


FWIW, a large part of the reason for the commitfest structure is that
by reviewing patches, people can educate themselves about parts of the
PG code that they don't know already, and thus become better qualified
to do more stuff later.  So I've got no problem with less-experienced
people doing reviews.

At the same time, it *is* fair to expect someone to phrase their review
as "I don't understand this, could you explain and/or improve the
comments" rather than saying something more negative, if they aren't
clear about what's going on.  Without some specific references it's hard
to be sure if the reviewer you mention was being unreasonable.

Anyway, the point I'm trying to make is that this is a community effort,
and each of us can stand to improve our knowledge of what is fundamentally
a complex system.  Learn something, teach something, it's all good.



Yes - the reason I mentioned this was not to dig into history and bash a 
reviewer (who was not at all unreasonable in my recollection)... but to 
highlight that approaching a review is perhaps a little more complex and 
demanding than was being made out, hence the shortage of volunteers.


However I do completely agree that encouraging reviewers to proceed 
with the approach you've outlined above seems like the way to go. And yes - 
it is going to be a good way to get to know the code better.


Regards

Mark





Re: [HACKERS] fixing pg_ctl with relative paths

2013-06-25 Thread Hari Babu
On January 23, 2013 9:13 AM Josh Kupershmidt wrote:
There have been some complaints[1][2] in the past about pg_ctl not playing
nice with relative path specifications for the datadir. Here's a concise
illustration:

  $ mkdir /tmp/mydata/ && initdb /tmp/mydata/
  $ cd /tmp/
  $ pg_ctl -D ./mydata/ start
  $ cd /
  $ pg_ctl -D /tmp/mydata/ restart

IMO it's pretty hard to defend the behavior of the last step, where pg_ctl
knows exactly which datadir the user has specified, and succeeds in stopping
the server but not starting it.

Digging into this gripe, a related problem I noticed is that `pg_ctl ...
restart` doesn't always preserve the -D $DATADIR argument as the following
comment suggests it should[4]:

  * We could pass PGDATA just in an environment
  * variable but we do -D too for clearer postmaster
  * 'ps' display

Specifically, if one passes in additional -o options, e.g.

  $ pg_ctl -D /tmp/mydata/ -o "-N 10" restart

after which postmaster.opts will be missing the -D ... argument which is
otherwise recorded, and the `ps` output is similarly abridged.
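For reference, postmaster.opts normally records the full postmaster command
line, each argument double-quoted; a hypothetical example with illustrative
paths (not taken from the report above):

  $ cat /tmp/mydata/postmaster.opts
  /usr/local/pgsql/bin/postgres "-D" "/tmp/mydata" "-N" "10"

It is the "-D" "/tmp/mydata" pair that goes missing in this case.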

Anyway, Tom suggested[3] two possible ways of fixing the original gripe,
and I went with his latter suggestion,

| for pg_ctl restart to override that
| option with its freshly-derived idea of where the data directory is

mainly so we don't need to worry about the impact of changing the
appearance of postmaster.opts, plus it seems like this code fits better in
pg_ctl.c rather than the postmaster (where the postmaster.opts file is
actually written). The fix seems to be pretty simple, namely stripping
post_opts of the old -D ...  portion and having the new specification, if
any, be used instead. I believe the attached patch should fix these two
infelicities.
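To sketch the stripping idea (hypothetical code, not the patch's actual
strip_datadirs(); it assumes a single -D option whose path contains no
spaces or quotes):

  /*
   * Remove a "-D <path>" pair from an options string, so that a
   * freshly-derived -D can be substituted on restart.
   */
  static char *
  strip_datadir_option(const char *opts)
  {
      char   *result = pg_strdup(opts);
      char   *dopt = strstr(result, "-D");

      if (dopt)
      {
          char   *end = dopt + 2;

          while (*end == ' ')             /* skip spaces before the path */
              end++;
          while (*end && *end != ' ')     /* skip over the path itself */
              end++;
          memmove(dopt, end, strlen(end) + 1);    /* splice it out */
      }
      return result;
  }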

Full disclosure: the strip_datadirs() function makes no attempt to properly
handle pathnames containing quotes. There seems to be some, uh, confusion in
the existing code about how these should be handled already. For instance,

  $ mkdir /tmp/here's a \ quote
  $ initdb /tmp/here's a \ quote

How to successfully start, restart, and stop the server with pg_ctl is left
as an exercise for the reader. So I tried to avoid that can of worms with
this patch, though I'm happy to robustify strip_datadirs() if we'd like to
properly support such pathnames, and there's consensus on how to standardize
the escaping.

[1]
http://www.postgresql.org/message-id/CAA-aLv72O+NegjAipHORmbqSMZTkZayaTxrd+C
9v60ybmmm...@mail.gmail.com
[2]
http://www.postgresql.org/message-id/CAK3UJRGABxWSOCXnAsSYw5BfR4D9ageXF+6Gts
RVm-LtfWfW=g...@mail.gmail.com
[3] http://www.postgresql.org/message-id/27233.1350234...@sss.pgh.pa.us
[4] Note, ps output and postmaster.opts will not include the datadir
specification if you rely solely on the PGDATA environment variable for
pg_ctl


Please find the review of the patch.

Basic stuff:

- Patch applies OK
- Compiles cleanly with no warnings
- Regression tests pass. 


What it does:
The restart functionality of pg_ctl has problems with relative paths. This
patch fixes the problems arising during restart. It strips the data directory
from the postmaster options and keeps the rest of the options already
provided, in case the user does not provide any options during restart.


Code Review:
 
+if (orig_post_opts) { 
+post_opts = strip_datadirs(orig_post_opts); 
+} 

No need for {} since only one statement is present in the if block. 


+ free(tmp); 

The above statement can be moved inside the if (*(trailing_quote + 1) !=
'\0') block, where it is actually used. 

Testing: 
 
I tested this feature with different postmaster options and database folder
names, and found no problems. 


A database folder with quotes in its name has problems with pg_ctl anyway. 
I feel the strip_datadirs() function header explanation provides a good 
understanding. 
Overall the patch is good. It makes the pg_ctl restart functionality work 
well. 

Regards, 
Hari babu







[HACKERS] converting datum to numeric

2013-06-25 Thread Szymon Guz
Hi,
I've got a couple of questions.

I was using numeric_out like this:

DatumGetCString(DirectFunctionCall1(numeric_out, d));

Why do I have to use DirectFunctionCall1 instead of calling numeric_out?


It was suggested that I use numeric_send instead of numeric_out; however,
when changing the function name in the above example, the whole expression
returns 8. Always 8. I checked with:

char *x = DatumGetCString(DirectFunctionCall1(numeric_send, d));
PLy_elog(NOTICE, x);


And my main question: is there any documentation about those internal
functions, which ones to use and when, etc.?

thanks
Szymon


Re: [HACKERS] [RFC] Minmax indexes

2013-06-25 Thread Simon Riggs
On 25 June 2013 00:51, Bruce Momjian br...@momjian.us wrote:

 On Sat, Jun 15, 2013 at 11:39:23AM -0400, Tom Lane wrote:
  Simon Riggs si...@2ndquadrant.com writes:
   On 15 June 2013 00:01, Josh Berkus j...@agliodbs.com wrote:
   If we're going to start adding reloptions for specific table behavior,
   I'd rather think of all of the optimizations we might have for a
   prospective append-only table and bundle those, rather than tying it
   to whether a certain index exists or not.
 
   I agree that the FSM behaviour shouldn't be linked to index existence.
   IMHO that should be a separate table parameter, WITH (fsm_mode = append).
   Index only scans would also benefit from that.
 
  -1 ... I cannot believe that such a parameter would ever get turned on
  in production by anyone.  If your table has a significant update rate,
  the resulting table bloat would make such behavior completely
  infeasible.  If you have few enough updates to make such a behavior
  practical, then you can live with the expensive index updates instead.

 Can you have pages that are receiving updates _not_ track min/max, until
 the page is nearly full?  This would require full scans of such pages,
 but there might be few of them.  The amount of free space on the page
 as reported by FSM might be useful here.


Yes, that is the proposal. Just like index only scans.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Problem building in a directory shared from Mac to Ubuntu

2013-06-25 Thread Craig Ringer
On 06/25/2013 01:14 PM, Ashutosh Bapat wrote:
 Hi,
 I am observing a strange problem when I build latest PostgreSQL head on
 Ubuntu 12.04. I am running Ubuntu 12.04 as VM on Mac 10.7. The build
 directory points to a sub-directory of host directory shared from Mac to
 Ubuntu 12.04.

shared how?

AFP?

SMB/CIFS? With mount.cifs?

Using NFS?

In general I'd recommend using a local directory for builds. Most
network file protocols have ... interesting ... behaviour when it comes
to file locking, timestamps, etc.

Why are you using this approach rather than just having local build trees?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] converting datum to numeric

2013-06-25 Thread Pavel Stehule
Hello

2013/6/25 Szymon Guz mabew...@gmail.com:
 Hi,
 I've got a couple of questions.

 I was using numeric_out like this:

 DatumGetCString(DirectFunctionCall1(numeric_out, d));

 Why do I have to use DirectFunctionCall1 instead of calling numeric_out?

The numeric_out function doesn't use the C calling convention - it uses its
own convention to support NULL values and some other information.
DirectFunctionCall1 is a simple function that transforms a C-style call into
a PostgreSQL call. You can do it directly, but you have to prepare the
necessary structures.
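A minimal illustration of the difference (a sketch only; the real
numeric_out lives in src/backend/utils/adt/numeric.c):

  #include "postgres.h"
  #include "fmgr.h"
  #include "utils/builtins.h"     /* declares numeric_out() */

  /*
   * SQL-callable functions use the V1 convention: they receive a
   * FunctionCallInfo, not plain C arguments, so you cannot just write
   * numeric_out(d).  DirectFunctionCall1 packs the Datum into a
   * FunctionCallInfo, invokes the function, and errors out if it tries
   * to return NULL.
   */
  static char *
  numeric_datum_to_cstring(Datum d)
  {
      return DatumGetCString(DirectFunctionCall1(numeric_out, d));
  }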



 I was suggested to use numeric_send instead of numeric_out, however when
 changing the function names in the above example, the whole expression
 returns 8. Always 8. I check with:

 char *x = DatumGetCString(DirectFunctionCall1(numeric_send, d));
 PLy_elog(NOTICE, x).


The send functions are used for the binary protocol - so that is nonsense here.


 And my main question: is there any documentation about those internal
 functions, which use and when etc?

source code :( and http://www.postgresql.org/docs/9.2/static/xfunc-c.html

src/backend/utils/adt/* is very useful and contains a lot of material to study

you can find some usage examples in contrib

http://www.postgresql.org/docs/9.2/static/extend.html

Regards

Pavel


 thanks
 Szymon




Re: [HACKERS] Problem building in a directory shared from Mac to Ubuntu

2013-06-25 Thread Ashutosh Bapat
On Tue, Jun 25, 2013 at 12:03 PM, Craig Ringer cr...@2ndquadrant.comwrote:

 On 06/25/2013 01:14 PM, Ashutosh Bapat wrote:
  Hi,
  I am observing a strange problem when I build latest PostgreSQL head on
  Ubuntu 12.04. I am running Ubuntu 12.04 as VM on Mac 10.7. The build
  directory points to a sub-directory of host directory shared from Mac to
  Ubuntu 12.04.

 shared how?


You can share the directories between Host and VM using VMware Fusion.
There is no network involved.


 AFP?

 SMB/CIFS? With mount.cifs?

 Using NFS?

 In general I'd recommend using a local directory for builds. Most
 network file protocols have ... interesting ... behaviour when it comes
 to file locking, timestamps, etc.

 Why are you using this approach rather than just having local build trees?


So that: 1. I can use the data between VM and host seamlessly (the binaries
will be useless, I agree, but the source code will be useful). 2. I can
share the same directory across more than one VM (binaries will be useful
in case two VMs are the same), like building the source once and using it on
multiple VMs. 3. If the VM crashes, the host will have the data intact.


 --
  Craig Ringer   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] converting datum to numeric

2013-06-25 Thread Szymon Guz
On 25 June 2013 08:51, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 2013/6/25 Szymon Guz mabew...@gmail.com:
  Hi,
  I've got a couple of questions.
 
  I was using numeric_out like this:
 
  DatumGetCString(DirectFunctionCall1(numeric_out, d));
 
  Why do I have to use DirectFunctionCall1 instead of calling numeric_out?

 The numeric_out function doesn't use the C calling convention - it uses its
 own convention to support NULL values and some other information.
 DirectFunctionCall1 is a simple function that transforms a C-style call into
 a PostgreSQL call. You can do it directly, but you have to prepare the
 necessary structures.

 
 
  I was suggested to use numeric_send instead of numeric_out, however when
  changing the function names in the above example, the whole expression
  returns 8. Always 8. I check with:
 
  char *x = DatumGetCString(DirectFunctionCall1(numeric_send, d));
  PLy_elog(NOTICE, x).
 

 The send functions are used for the binary protocol - so that is nonsense here.



Hi,
thanks for the information. So I will leave speeding it up for the moment.

thanks,
Szymon


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-25 Thread David Rowley
On Tue, Jun 25, 2013 at 3:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:


 FWIW, a large part of the reason for the commitfest structure is that
 by reviewing patches, people can educate themselves about parts of the
 PG code that they don't know already, and thus become better qualified
 to do more stuff later.  So I've got no problem with less-experienced
 people doing reviews.

 At the same time, it *is* fair to expect someone to phrase their review
 as I don't understand this, could you explain and/or improve the
 comments rather than saying something more negative, if they aren't
 clear about what's going on.  Without some specific references it's hard
 to be sure if the reviewer you mention was being unreasonable.

 Anyway, the point I'm trying to make is that this is a community effort,
 and each of us can stand to improve our knowledge of what is fundamentally
 a complex system.  Learn something, teach something, it's all good.

 regards, tom lane



I just wanted to give this a +1, but also want to add: as a novice back in
the 8.4 cycle I wrote a small patch to implement boyer-moore-horspool text
searches. I didn't have too much experience with the PostgreSQL source
code, so when it came to my review of another patch (which I think was even
the rule back in 8.4, IIRC) I was quite clear on what I could and could not
review. The initial windowing function patch was in the queue at the time,
so I picked that one and reviewed it along with Heikki. As a novice I did
manage to help maintain a bit of concurrency with the progress of the patch,
and the patch went through quite a few revisions from my review even before
Heikki got a good look at it.  I think the most important thing is
maintaining that concurrency during the commitfest; if the author of the
patch is sitting idle waiting for a review the whole time, then that leaves
so much less time to get the patch into shape before the committer comes
along. Even if a novice reviewer can only help a tiny bit, it might still
make the difference between that patch making it to a suitable state in
time or it getting bounced to the next commitfest or even the next release.

So for a person who is a little scared to put their name in the reviewer
section of a patch, I'd recommend being quite open and honest about what you
can and can't review. For me back in 8.4 with the windowing function patch,
I managed to point out a few places where the plans being created were not
quite optimal, and the author of the patch quickly put fixes in and sent
updated patches; there were also a few things around the SQL spec that I
found after grabbing a copy of the spec and reading part of it. It may have
been a small part of the overall review and work to get the patch committed,
but as Tom stated, I did learn quite a bit from that, and I even managed to
send back a delta patch which helped to get the patch more SQL spec
compliant.

I'm not sure if adding a review breakdown list to the commitfest
application would be of any use, to allow breaking down what the reviewer
is actually reviewing. Perhaps people would be quicker to sign up to review
the sections of patches around their area of expertise rather than putting
their name against the whole thing; likely a committer would have a better
idea of whether such a thing is worth the extra overhead.

Regards

David Rowley


Re: [HACKERS] [9.4 CF 1] The Commitfest Slacker List

2013-06-25 Thread Brendan Jurd
On 25 June 2013 04:13, Joshua D. Drake j...@commandprompt.com wrote:
 On 06/24/2013 10:59 AM, Andres Freund wrote:
 On 2013-06-24 10:50:42 -0700, Josh Berkus wrote:
 This project is enormously stingy with giving credit to people.  It's
 not like it costs us money, you know.
 I am all for introducing a "Contributed by reviewing: ..." section in
 the release notes.

 It should be listed with the specific feature.

I don't have a strong opinion about whether the reviewers ought to be
listed all together or with each feature, but I do feel very strongly
that they should be given some kind of credit.

Reviewing is often not all that much fun (compared with authoring) and
as Josh points out, giving proper credit has a "bang for buck"
incentive value that is hard to argue with.

Cheers,
BJ




[HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-25 Thread Misa Simic
Hi,

Where can we find the latest snapshot for the 9.3 version?

We have taken latest snapshot from
http://ftp.postgresql.org/pub/snapshot/dev/

But it seems it is for the 9.4 version...

Thanks,

Misa


Re: [HACKERS] Reduce maximum error in tuples estimation after vacuum.

2013-06-25 Thread Kyotaro HORIGUCHI
Hello, 

 I have tried to reproduce the problem in different m/c's, but couldn't
 reproduce it.
 I ran the tests with the default configuration.

I think you actually did reproduce it.

 Output on Windows:
 ---
 postgres=# create table t (a int, b int); 
(snip)
 postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where
 relname= 
 't'; 
  n_live_tup | n_dead_tup 
 + 
   10001 | 98 
 (1 row)

Yes, this is the same for me. You should've done this instead,

postgres=# select reltuples from pg_class where relname = 't';
 reltuples 
---
 1e+06
(1 row)

This is 100 times larger than n_live_tup, and it is this value
which is used to judge the necessity of autovacuum.

autovacuum.c: 2695
|   reltuples = classForm->reltuples;
|   vactuples = tabentry->n_dead_tuples;
| ...
|   vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
|   anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
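To see the effect with the default settings (autovacuum_vacuum_threshold =
50, autovacuum_vacuum_scale_factor = 0.2): with reltuples = 10001 the vacuum
threshold is 50 + 0.2 * 10001, about 2050 dead tuples, but with the inflated
reltuples = 1e+06 it is 50 + 0.2 * 1000000 = 200050, so autovacuum would not
fire until some 200k dead tuples have accumulated.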

Although..

 Output on Suse
 
 postgres=# drop table if exists t; 
 create table t (a int, b int); 
 insert into t (select a, (random() * 10)::int from
 generate_series((select count(*) from t) + 1, 1000000) a); 
 update t set b = b + 1 where a < (select count(*) from t) * 0.7; 
 vacuum t; 
 delete from t where a < (select count(*) from t) * 0.99; 
 vacuum t; 
 select c.relpages, s.n_live_tup, c.reltuples, (select count(*) from t) as
 tuples, reltuples::float / (select count(*) from t) as ratio  from
 pg_stat_user_tables s, pg_class c where s.relname = 't' and c.relname =
 't';DROP TABLE 
 postgres=# CREATE TABLE 
 postgres=# INSERT 0 1000000 
 postgres=# UPDATE 699999 
 postgres=# VACUUM 
 postgres=# DELETE 989999 
 postgres=# VACUUM 
 postgres=# 
  relpages | n_live_tup | reltuples | tuples | ratio 
 --++---++--- 
  4425 |  10001 | 10001 |  10001 | 1 
 (1 row)

... Mmm.. I have following figures for the same operation.


 relpages | n_live_tup | reltuples | tuples |  ratio   
--++---++--
 4425 | 417670 |417670 |  10001 | 41.7628237176282

I'll consider this for a while...

 When I tried to run vactest.sh, it gives below error:
 linux:~/akapila/vacuum_nlivetup ./vactest.sh 
 ./vactest.sh: line 11: syntax error near unexpected token `' 
 ./vactest.sh: line 11: `psql ${dbname} -c vacuum verbose t |
 egrep INFO: *\t\: found  | sed -e 's/^.* versions in \([0-9]*\)
 .*$/\1/''
 
 
 Can you help me in reproducing the problem by letting me know if I am doing
 something wrong or results of test are not predictable?

Could you let me know the PostgreSQL version you're running?  And it
would be appreciated if you could kindly show me the vacuum logs while
testing.

# I found a silly bug in the patch, but I put it off.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] [Review] Add SPI_gettypmod() to return a field's typemod from a TupleDesc

2013-06-25 Thread Jeevan Chalke
Hi Mark,

Is this the latest patch you are targeting for 9.4 CF1?

I am going to review it.

From the comment, here is one issue you need to resolve first:

*** exec_eval_datum(PLpgSQL_execstate *estat
*** 4386,4396 ****
           errmsg("record \"%s\" has no field \"%s\"",
                  rec->refname, recfield->fieldname)));
      *typeid = SPI_gettypeid(rec->tupdesc, fno);
!     /* XXX there's no SPI_gettypmod, for some reason */
!     if (fno > 0)
!         *typetypmod = rec->tupdesc->attrs[fno - 1]->atttypmod;
!     else
!         *typetypmod = -1;
      *value = SPI_getbinval(rec->tup, rec->tupdesc, fno, &isnull);
      break;
  }
--- 4386,4392 ----
           errmsg("record \"%s\" has no field \"%s\"",
                  rec->refname, recfield->fieldname)));
      *typeid = SPI_gettypeid(rec->tupdesc, fno);
!     *typetypmod = SPI_gettypeid(rec->tupdesc, fno);
      *value = SPI_getbinval(rec->tup, rec->tupdesc, fno, &isnull);
      break;
  }
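The fix is presumably to call a real SPI_gettypmod() instead of the second
SPI_gettypeid(). A hypothetical sketch of such a function, modeled on the
existing SPI_gettypeid() in src/backend/executor/spi.c (the actual version
is in the patch):

  int32
  SPI_gettypmod(TupleDesc tupdesc, int fnumber)
  {
      SPI_result = 0;

      if (fnumber > tupdesc->natts || fnumber == 0 ||
          fnumber <= FirstLowInvalidHeapAttributeNumber)
      {
          SPI_result = SPI_ERROR_NOATTRIBUTE;
          return -1;
      }

      if (fnumber > 0)
          return tupdesc->attrs[fnumber - 1]->atttypmod;
      else
          return -1;      /* system columns have no typmod */
  }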


Once you confirm, I will go ahead and review it.

Thanks


On Sat, Feb 9, 2013 at 10:37 PM, Mark Wong mark...@gmail.com wrote:

 On Tue, Jul 3, 2012 at 8:33 AM, Robert Haas robertmh...@gmail.com wrote:
  On Thu, Jun 28, 2012 at 9:49 AM, Robert Haas robertmh...@gmail.com
 wrote:
  On Mon, Jun 18, 2012 at 3:29 PM, Amit Kapila amit.kap...@huawei.com
 wrote:
  [ review ]
 
  Chetan, this patch is waiting for an update from you.  If you'd like
  this to get committed this CommitFest, we'll need an updated patch
  soon.
 
  Hearing no response, I've marked this patch Returned with Feedback.

 Hello everyone,

 I thought I'd take a stab at helping finish this patch.  I have made
 an attempt at adding documentation and replacing the couple of XXX
 comments.  I'll add it to the next commitfest.

 Regards,
 Mark






-- 
Jeevan B Chalke
Senior Software Engineer, R&D
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: +91 20 30589500

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb



Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Rushabh Lathia
Hi Pavel,

I went through the discussion over here and found that with this patch we
enable the new error fields in plpgsql. It's a simple patch to expose the new
error fields in plpgsql.

Patch applies cleanly. make and make install went smoothly. make check
was smooth too. The patch also includes test coverage.

I tested the functionality with manual tests and it's working as expected.

BTW, in the patch I found one additional new line in read_raise_options():

@@ -3631,7 +3661,23 @@ read_raise_options(void)
         else if (tok_is_keyword(tok, &yylval,
                                 K_HINT, "hint"))
             opt->opt_type = PLPGSQL_RAISEOPTION_HINT;
+        else if (tok_is_keyword(tok, &yylval,
+                                K_COLUMN_NAME, "column_name"))
+            opt->opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
+        else if (tok_is_keyword(tok, &yylval,
+                                K_CONSTRAINT_NAME, "constraint_name"))
+            opt->opt_type = PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
+        else if (tok_is_keyword(tok, &yylval,
+                                K_DATATYPE_NAME, "datatype_name"))
+            opt->opt_type = PLPGSQL_RAISEOPTION_DATATYPE_NAME;
+        else if (tok_is_keyword(tok, &yylval,
+                                K_TABLE_NAME, "table_name"))
+            opt->opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
+        else if (tok_is_keyword(tok, &yylval,
+                                K_SCHEMA_NAME, "schema_name"))
+            opt->opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
         else
+
             yyerror("unrecognized RAISE statement option");

can you please remove that.

Apart from that patch looks good to me.

Thanks,
Rushabh


On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
  2013/2/1 Peter Eisentraut pete...@gmx.net:
  On 2/1/13 8:00 AM, Pavel Stehule wrote:
  2013/2/1 Marko Tiikkaja pgm...@joh.to:
  On 2/1/13 1:47 PM, Pavel Stehule wrote:
 
  now a most hard work is done and I would to enable access to new
  error fields from plpgsql.
 
 
  Is there a compelling reason why we wouldn't provide these already in
 9.3?
 
  a time for assign to last commitfest is out.
 
  this patch is relative simple and really close to enhanced error
  fields feature - but depends if some from commiters will have a time
  for commit to 9.3 - so I am expecting primary target 9.4, but I am not
  be angry if it will be commited early.
 
  If we don't have access to those fields on PL/pgSQL, what was the point
  of the patch to begin with?  Surely, accessing them from C wasn't the
  main use case?
 
 
  These fields are available for application developers now. But is a
  true, so without this patch, GET STACKED DIAGNOSTICS statement will
  not be fully consistent, because some fields are accessible and others
  not

 there is one stronger argument for commit this patch now. With this
 patch, we are able to wrote regression tests for new fields via
 plpgsql.

 Regards

 Pavel

 
  Pavel






-- 
Rushabh Lathia


Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Pavel Stehule
2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
 Hi Pavel,

 I went through the discussion over here and found that with this patch we
 enable the new error fields in plpgsql. It's a simple patch to expose the new
 error fields in plpgsql.

 Patch applies cleanly. make and make install went smoothly. make check
 was smooth too. The patch also includes test coverage.

 I tested the functionality with manual tests and it's working as expected.

 BTW, in the patch I found one additional new line in read_raise_options():

 @@ -3631,7 +3661,23 @@ read_raise_options(void)
          else if (tok_is_keyword(tok, &yylval,
                                  K_HINT, "hint"))
              opt->opt_type = PLPGSQL_RAISEOPTION_HINT;
 +        else if (tok_is_keyword(tok, &yylval,
 +                                K_COLUMN_NAME, "column_name"))
 +            opt->opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
 +        else if (tok_is_keyword(tok, &yylval,
 +                                K_CONSTRAINT_NAME, "constraint_name"))
 +            opt->opt_type = PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
 +        else if (tok_is_keyword(tok, &yylval,
 +                                K_DATATYPE_NAME, "datatype_name"))
 +            opt->opt_type = PLPGSQL_RAISEOPTION_DATATYPE_NAME;
 +        else if (tok_is_keyword(tok, &yylval,
 +                                K_TABLE_NAME, "table_name"))
 +            opt->opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
 +        else if (tok_is_keyword(tok, &yylval,
 +                                K_SCHEMA_NAME, "schema_name"))
 +            opt->opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
          else
 +
              yyerror("unrecognized RAISE statement option");

 can you please remove that.

No, these fields are there as was proposed - they enhance the possibilities
for PLpgSQL developers - they can use these fields for custom
exceptions. It is the same as the possibility to specify SQLCODE, MESSAGE,
HINT in the current RAISE statement implementation.
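For illustration, a hypothetical PL/pgSQL fragment using the option names
from the diff above (variable declarations omitted):

  -- raise a custom exception carrying the new fields
  RAISE EXCEPTION 'bad row'
        USING TABLE_NAME = 'my_table', COLUMN_NAME = 'my_col';

  -- and read them back in an exception handler
  EXCEPTION WHEN OTHERS THEN
      GET STACKED DIAGNOSTICS v_table = TABLE_NAME,
                              v_column = COLUMN_NAME;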

Why do you dislike it?

Regards

Pavel


 Apart from that patch looks good to me.

 Thanks,
 Rushabh


 On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
  2013/2/1 Peter Eisentraut pete...@gmx.net:
  On 2/1/13 8:00 AM, Pavel Stehule wrote:
  2013/2/1 Marko Tiikkaja pgm...@joh.to:
  On 2/1/13 1:47 PM, Pavel Stehule wrote:
 
  now a most hard work is done and I would to enable access to new
  error fields from plpgsql.
 
 
  Is there a compelling reason why we wouldn't provide these already in
  9.3?
 
  a time for assign to last commitfest is out.
 
  this patch is relative simple and really close to enhanced error
  fields feature - but depends if some from commiters will have a time
  for commit to 9.3 - so I am expecting primary target 9.4, but I am not
  be angry if it will be commited early.
 
  If we don't have access to those fields on PL/pgSQL, what was the point
  of the patch to begin with?  Surely, accessing them from C wasn't the
  main use case?
 
 
  These fields are available for application developers now. But is a
  true, so without this patch, GET STACKED DIAGNOSTICS statement will
  not be fully consistent, because some fields are accessible and others
  not

 there is one stronger argument for commit this patch now. With this
 patch, we are able to wrote regression tests for new fields via
 plpgsql.

 Regards

 Pavel

 
  Pavel






 --
 Rushabh Lathia




Re: [HACKERS] Add more regression tests for CREATE OPERATOR

2013-06-25 Thread Robins Tharakan
Thanks a ton, Szymon (for the reminder on this one).

As a coincidental turn of events, I have had to travel halfway across the
world and am without my personal laptop (without a linux distro etc.), and am
just recovering from jet lag now.

I'll try to install a VM on a makeshift laptop and get something going so I
can respond as soon as possible.

Thanks
--
Robins Tharakan



On 17 June 2013 05:19, Szymon Guz mabew...@gmail.com wrote:

 On 23 May 2013 00:34, Robins Tharakan thara...@gmail.com wrote:

 Hi,

 Please find attached a patch to take code-coverage of CREATE OPERATOR
 (src/backend/commands/operatorcmds.c) from 56% to 91%.

 Any and all feedback is welcome.
 --
 Robins Tharakan




 Hi,
 there is one commented-out test. I think it should either be run or deleted.
 There is no use in commented-out SQL code which is not run.

 What do you think?

 regards,
 Szymon



Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Rushabh Lathia
On Tue, Jun 25, 2013 at 2:41 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
  Hi Pavel,
 
  I went through the discussion over here and found that with this patch we
  enable the new error fields in plpgsql. It's a simple patch to expose the
  new error fields in plpgsql.

  Patch applies cleanly. make and make install went smoothly. make check
  was smooth too. The patch also includes test coverage.

  I tested the functionality with manual tests and it's working as expected.

  BTW, in the patch I found one additional new line in read_raise_options():
 
  @@ -3631,7 +3661,23 @@ read_raise_options(void)
           else if (tok_is_keyword(tok, &yylval,
                                   K_HINT, "hint"))
               opt->opt_type = PLPGSQL_RAISEOPTION_HINT;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_COLUMN_NAME, "column_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_CONSTRAINT_NAME, "constraint_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_DATATYPE_NAME, "datatype_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_DATATYPE_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_TABLE_NAME, "table_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_SCHEMA_NAME, "schema_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
           else
  +
               yyerror("unrecognized RAISE statement option");
 
  can you please remove that.

 No, these fields are there as was proposed - they enhance the possibilities
 for PLpgSQL developers - they can use these fields for custom
 exceptions. It is the same as the possibility to specify SQLCODE, MESSAGE,
 HINT in the current RAISE statement implementation.

 Why do you dislike it?


Seems like there was some confusion.

I noticed an additional new line which has been added by your patch in the
function read_raise_options() in pl_gram.y @ line number 3680, and in the
earlier mail that's what I asked you to remove.




 Regards

 Pavel

 
  Apart from that patch looks good to me.
 
  Thanks,
  Rushabh
 
 
  On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.com
  wrote:
 
  2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
   2013/2/1 Peter Eisentraut pete...@gmx.net:
   On 2/1/13 8:00 AM, Pavel Stehule wrote:
   2013/2/1 Marko Tiikkaja pgm...@joh.to:
   On 2/1/13 1:47 PM, Pavel Stehule wrote:
  
   now a most hard work is done and I would to enable access to new
   error fields from plpgsql.
  
  
   Is there a compelling reason why we wouldn't provide these already
 in
   9.3?
  
   a time for assign to last commitfest is out.
  
   this patch is relative simple and really close to enhanced error
   fields feature - but depends if some from commiters will have a time
   for commit to 9.3 - so I am expecting primary target 9.4, but I am
 not
   be angry if it will be commited early.
  
   If we don't have access to those fields on PL/pgSQL, what was the
 point
   of the patch to begin with?  Surely, accessing them from C wasn't the
   main use case?
  
  
   These fields are available for application developers now. But is a
   true, so without this patch, GET STACKED DIAGNOSTICS statement will
   not be fully consistent, because some fields are accessible and others
   not
 
  there is one stronger argument for commit this patch now. With this
  patch, we are able to wrote regression tests for new fields via
  plpgsql.
 
  Regards
 
  Pavel
 
  
   Pavel
 
 
 
 
 
 
  --
  Rushabh Lathia




-- 
Rushabh Lathia


Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division

2013-06-25 Thread Jeevan Chalke
On Tue, Jun 25, 2013 at 11:11 AM, Jeevan Chalke 
jeevan.cha...@enterprisedb.com wrote:

 Hi David,

 I hope this is the latest patch to review, right?

 I am going to review it.

 I have gone through the discussion on this thread and I agree with Stephen
 Frost that it doesn't add much improvement as such, but it is definitely
 going to make things easier for contributors in this area, as they don't
 need to go all over the place to add any extra parameter they need. This
 patch simplifies that well enough.

 Will post my review soon.


Assuming *makeFuncArgs_002.diff* is the latest patch, here are my review
points.

About this patch and feature:
===
This patch tries to reduce redundancy when we need a FuncCall expression.
With this patch it becomes easier to add a new parameter if needed: we just
need to put its default value in a centralized location (I mean in this new
function) so that no other places require any changes, and the new parameter
is handled by the feature that demanded it, keeping the others untouched.
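For reference, a sketch of the shape such a constructor takes (field list as
of the 9.3-era FuncCall node; the patch's actual version is in
makeFuncArgs_002.diff):

  FuncCall *
  makeFuncCall(List *name, List *args, int location)
  {
      FuncCall   *n = makeNode(FuncCall);

      n->funcname = name;
      n->args = args;
      n->location = location;
      /* explicitly give the remaining fields their defaults */
      n->agg_order = NIL;
      n->agg_star = false;
      n->agg_distinct = false;
      n->func_variadic = false;
      n->over = NULL;
      return n;
  }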

Review comments on patch:
===
1. Can't apply with the git apply command, but able to do it with patch -p1.
So no issues.
2. Adds one whitespace error; hopefully it will get corrected once it goes
through pgindent.
3. There is absolutely NO user visibility and thus I guess we don't need any
test cases. Also no documentation is needed.
4. Also make check went smoothly, and thus I assume that there is no issue as
such. Even I couldn't find any issue with my testing other than the
regression suite.
5. I had a code walk-through over the patch and it looks good to me. However,
the following line change seems unrelated (line 186 in your patch):

! $$ = (Node *) makeSimpleA_Expr(AEXPR_OP, "~~", $1,
(Node *) n, @2);
!

Changes required from author:
===
It will be good if you remove the unrelated changes from the patch and
possibly all white-space errors.

Thanks


 Thanks.

 On Mon, Jun 17, 2013 at 11:13 AM, David Fetter da...@fetter.org wrote:

 On Mon, Feb 11, 2013 at 10:48:38AM -0800, David Fetter wrote:
  On Sun, Feb 10, 2013 at 10:09:19AM -0500, Tom Lane wrote:
   David Fetter da...@fetter.org writes:
Per suggestions and lots of help from Andrew Gierth, please find
attached a patch to clean up the call sites for FuncCall nodes,
 which
I'd like to expand centrally rather than in each of the 37 (or 38,
 but
I only redid 37) places where it's called.  The remaining one is in
src/backend/nodes/copyfuncs.c, which has to be modified for any
changes in the that struct anyhow.
  
   TBH, I don't think this is an improvement.
  
   The problem with adding a new field to any struct is that you have to
   run around and examine (and, usually, modify) every place that
   manufactures that type of struct.  With a makeFuncCall defined like
   this, you'd still have to do that; it would just become a lot easier
   to forget to do so.
  
   If the subroutine were defined like most other makeXXX subroutines,
   ie you have to supply *all* the fields, that argument would go away,
   but the notational advantage is then dubious.
  
   The bigger-picture point is that you're proposing to make the coding
   conventions for building FuncCalls different from what they are for
   any other grammar node.  I don't think that's a great idea; it will
   mostly foster confusion.
 
  The major difference between FuncCalls and others is that while most
  raw-parsetree nodes are constructed only in their own syntax
  productions, FuncCall is constructed in many places unrelated to
  actual function call syntax.
 
  This really will make things a good bit easier on our successors.

 Here's a rebased patch with comments illustrating how best to employ it.

 In my previous message, I characterized the difference between
 FuncCalls and other raw-parsetree nodes.  Is there some flaw in that
 logic? If there isn't, is there some reason not to treat them in a
 less redundant, more uniform manner as this patch does?

 Cheers,
 David.
 --
 David Fetter da...@fetter.org http://fetter.org/
 Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
 Skype: davidfetter  XMPP: david.fet...@gmail.com
 iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

 Remember to vote!
 Consider donating to Postgres: http://www.postgresql.org/about/donate






 --
 Jeevan B Chalke
 Senior Software Engineer, R&D
 EnterpriseDB Corporation
 The Enterprise PostgreSQL Company

 Phone: +91 20 30589500

 Website: www.enterprisedb.com
 EnterpriseDB Blog: http://blogs.enterprisedb.com/
 Follow us on Twitter: http://www.twitter.com/enterprisedb


Re: [HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-25 Thread Michael Paquier
On Tue, Jun 25, 2013 at 5:33 PM, Misa Simic misa.si...@gmail.com wrote:
 Hi,

 Where we can find latest snapshot for 9.3 version?

 We have taken latest snapshot from
 http://ftp.postgresql.org/pub/snapshot/dev/

 But it seems it is for 9.4 version...
9.3 moved to the branch REL9_3_STABLE a couple of days ago. It looks
like its snapshot repository is missing.
Regards,
--
Michael




Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-06-25 Thread Szymon Guz
On 25 June 2013 05:16, Steve Singer st...@ssinger.info wrote:


 One concern I have is that this patch makes pl/python functions involving
 numerics more than 3 times as slow as before.


 create temp table b(a numeric);
 insert into b select generate_series(1,10000);

 create or replace function x(a numeric,b numeric) returns numeric as $$
 if a==None:
   return b
 return a+b
 $$ language plpythonu;
 create aggregate sm(basetype=numeric, sfunc=x,stype=numeric);


 test=# select sm(a) from b;
 sm
 --
  50005000
 (1 row)

 Time: 565.650 ms

 versus before the patch this was taking in the range of 80ms.

 Would it be faster to call numeric_send instead of numeric_out and then
 convert the sequence of Int16's to a tuple of digits that can be passed
 into the Decimal constructor? I think this is worth trying and testing,


Hi,
thanks for all the remarks.

I think I cannot do anything about speeding up the code. What I've found so
far:

I cannot use simple fields from NumericVar in my code, so in order not to
waste time on something not sensible, I've tried to find out if using the
tuple constructor for decimal.Decimal would be faster. I've changed the
function to something like this:

static PyObject *
PLyDecimal_FromNumeric(PLyDatumToOb *arg, Datum d)
{
    PyObject *digits = PyTuple_New(4);
    PyTuple_SetItem(digits, 0, PyInt_FromLong(1));
    PyTuple_SetItem(digits, 1, PyInt_FromLong(4));
    PyTuple_SetItem(digits, 2, PyInt_FromLong(1));
    PyTuple_SetItem(digits, 3, PyInt_FromLong(4));

    PyObject *tuple = PyTuple_New(3);
    PyTuple_SetItem(tuple, 0, PyInt_FromLong(0));   /* sign: 0 = positive */
    PyTuple_SetItem(tuple, 1, digits);
    PyTuple_SetItem(tuple, 2, PyInt_FromLong(-3));

    PyObject *value = PyObject_CallFunctionObjArgs(PLy_decimal_ctor_global,
                                                   tuple, NULL);

    return value;
}

Yes, it returns the same value regardless of the params. The idea is to call
Python code like:

Decimal((0, (1, 4, 1, 4), -3))

which is simply:

Decimal('1.414')

Unfortunately this is not faster. It is as slow as it was with the string
constructor.

I've checked the speed of decimal.Decimal using pure Python. For this I
used a simple function, similar to yours:


def x(a, b):
    if a is None:
        return b
    return a + b

I've run the tests using simple ints:


def test():
    a = 0
    for i in xrange(0, 10000):
        a += x(a, i)


for a in xrange(1, 100):
    test()


And later I've run the same function, but with converting the arguments to
Decimals:

from decimal import Decimal


def x(a, b):
    if a is None:
        return b
    return a + b


def test():
    a = 0
    for i in xrange(0, 10000):
        a += x(Decimal(a), Decimal(i))


for a in xrange(1, 100):
    test()

It was run 100 times to decrease the impact of test initialization.

The results for both files are:
int:  0.697s
decimal: 38.859s

That gives an average time for one test run of:
int: ~7ms
decimal: ~389ms


For me the problem is the slow code on Python's side: the Decimal
constructor is pretty slow, and there is nothing I can do about that on
the Postgres side.


I will send a patch with fixes later.


thanks,
Szymon


Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Pavel Stehule
2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:



 On Tue, Jun 25, 2013 at 2:41 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
  Hi Pavel,
 
  I went through the discussion over here and found that with this patch
  we enable the new error fields in plpgsql. It's a simple patch to expose
  the new error fields in plpgsql.

  Patch applies cleanly. make and make install went smoothly. make check
  was smooth too. The patch also includes test coverage.

  I tested the functionality with manual tests and it's working as expected.

  BTW, in the patch I found one additional new line in
  read_raise_options():
 
  @@ -3631,7 +3661,23 @@ read_raise_options(void)
           else if (tok_is_keyword(tok, &yylval,
                                   K_HINT, "hint"))
               opt->opt_type = PLPGSQL_RAISEOPTION_HINT;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_COLUMN_NAME, "column_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_CONSTRAINT_NAME, "constraint_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_DATATYPE_NAME, "datatype_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_DATATYPE_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_TABLE_NAME, "table_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
  +        else if (tok_is_keyword(tok, &yylval,
  +                                K_SCHEMA_NAME, "schema_name"))
  +            opt->opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
           else
  +
               yyerror("unrecognized RAISE statement option");
 
  can you please remove that.

 No, these fields are there as was proposed - they enhance the possibilities
 for PLpgSQL developers - they can use these fields for custom
 exceptions. It is the same as the possibility to specify SQLCODE, MESSAGE,
 HINT in the current RAISE statement implementation.

 Why do you dislike it?


 Seems like there was some confusion.

 I noticed an additional new line which has been added by your patch in the
 function read_raise_options() in pl_gram.y @ line number 3680, and in the
 earlier mail that's what I asked you to remove.

I am sorry.

I have removed these new lines.

Regards

Pavel



 Regards

 Pavel

 
  Apart from that patch looks good to me.
 
  Thanks,
  Rushabh
 
 
  On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.com
  wrote:
 
  2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
   2013/2/1 Peter Eisentraut pete...@gmx.net:
   On 2/1/13 8:00 AM, Pavel Stehule wrote:
   2013/2/1 Marko Tiikkaja pgm...@joh.to:
   On 2/1/13 1:47 PM, Pavel Stehule wrote:
  
   now a most hard work is done and I would to enable access to
   new
   error fields from plpgsql.
  
  
   Is there a compelling reason why we wouldn't provide these already
   in
   9.3?
  
   a time for assign to last commitfest is out.
  
   this patch is relative simple and really close to enhanced error
   fields feature - but depends if some from commiters will have a
   time
   for commit to 9.3 - so I am expecting primary target 9.4, but I am
   not
   be angry if it will be commited early.
  
   If we don't have access to those fields on PL/pgSQL, what was the
   point
   of the patch to begin with?  Surely, accessing them from C wasn't
   the
   main use case?
  
  
   These fields are available for application developers now. But is a
   true, so without this patch, GET STACKED DIAGNOSTICS statement will
   not be fully consistent, because some fields are accessible and
   others
   not
 
  there is one stronger argument for commit this patch now. With this
  patch, we are able to wrote regression tests for new fields via
  plpgsql.
 
  Regards
 
  Pavel
 
  
   Pavel
 
 
 
 
 
 
  --
  Rushabh Lathia




 --
 Rushabh Lathia




Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-06-25 Thread Szymon Guz
On 25 June 2013 05:16, Steve Singer st...@ssinger.info wrote:

 On 05/28/2013 04:41 PM, Szymon Guz wrote:

 Hi,
 I've got a patch.

 This is for a plpython enhancement.

 There is an item on the TODO list
 http://wiki.postgresql.org/wiki/Todo#Server-Side_Languages
 Fix loss of information during conversion of numeric type to Python
 float

 This patch uses a decimal.Decimal type from Python standard library for
 the plpthon function numeric argument instead of float.

 Patch contains changes in code, documentation and tests.

 Most probably there is something wrong, as this is my first Postgres
 patch :)


 Thanks for contributing.

 This patch applies cleanly against master and compiles with warnings

 plpy_main.c: In function ‘PLy_init_interp’:
 plpy_main.c:157:2: warning: ISO C90 forbids mixed declarations and code
 [-Wdeclaration-after-statement]
 plpy_main.c:161:2: warning: ISO C90 forbids mixed declarations and code
 [-Wdeclaration-after-statement]

 You can avoid this by moving the declaration of decimal and decimal_dict
 to be at the top of the function where mainmod is declared.

 Also in this function you've introduced places where it returns with an
 error (the PLy_elog(ERROR...) calls) before decrementing the reference to
 mainmod. I think you can decrement the mainmod reference after the call to
 SetItemString, before your changes that import the Decimal module.


 The patch works as expected, I am able to write python functions that take
 numerics as arguments and work with them.  I can adjust the decimal context
 precision inside of  my function.

 One concern I have is that this patch makes pl/python functions involving
 numerics more than 3 times as slow as before.


 create temp table b(a numeric);
 insert into b select generate_series(1,10000);

 create or replace function x(a numeric,b numeric) returns numeric as $$
 if a==None:
   return b
 return a+b
 $$ language plpythonu;
 create aggregate sm(basetype=numeric, sfunc=x,stype=numeric);


 test=# select sm(a) from b;
 sm
 --
  50005000
 (1 row)

 Time: 565.650 ms

 versus before the patch this was taking in the range of 80ms.

 Would it be faster to call numeric_send instead of numeric_out and then
 convert the sequence of Int16's to a tuple of digits that can be passed
 into the Decimal constructor? I think this is worth trying and testing,


 Documentation
 =============
 Your patched version of the docs says

   PostgreSQL <type>real</type>, <type>double</type>, and
   <type>numeric</type> are converted to
   Python <type>Decimal</type>. This type is imported
   from <literal>decimal.Decimal</literal>.


 I don't think this is correct; as far as I can tell you're not changing the
 behaviour for the PostgreSQL real and double types, they continue to use
 floating point.



 <listitem>
  <para>
   PostgreSQL <type>real</type> and <type>double</type> are converted to
   Python <type>float</type>.
  </para>
 </listitem>

 <listitem>
  <para>
   PostgreSQL <type>numeric</type> is converted to
   Python <type>Decimal</type>. This type is imported from
   <literal>decimal.Decimal</literal>.
  </para>
 </listitem>


Hi,
I've attached a new patch. I've fixed all the problems you found, except
for the efficiency problem, which was described in the previous email.

thanks,
Szymon


plpython_decimal_v2.patch
Description: Binary data



Re: [HACKERS] patch submission: truncate trailing nulls from heap rows to reduce the size of the null bitmap [Review]

2013-06-25 Thread Amit Kapila
On Monday, June 24, 2013 8:20 PM Tom Lane wrote:
 Amit Kapila amit.kap...@huawei.com writes:
  I will summarize the results, and if most of us feel that they are
 not good
  enough, then we can return this patch.
 
 Aside from the question of whether there's really any generally-useful
 performance improvement from this patch, it strikes me that this change
 forecloses other games that we might want to play with interpretation
 of
 the value of a tuple's natts.
 
 In particular, when I was visiting Salesforce last week, the point came
 up that they'd really like ALTER TABLE ADD COLUMN to be "free" even
 when
 the column had a non-null DEFAULT.  It's not too difficult to imagine
 how we might support that, at least when the default expression is a
 constant: decree that if the requested attribute number is beyond
 natts,
 look aside at the tuple descriptor to see if the column is marked as
 having a magic default value, and if so, substitute that rather than
 just returning NULL.  (This has to be a magic value separate from
 the current default, else a subsequent DROP or SET DEFAULT would do
 the wrong thing.)
 
 However, this idea conflicts with an optimization that supposes it can
 reduce natts to suppress null columns: if the column was actually
 stored
 as null, you'd lose that information, and would incorrectly return the
 magic default on subsequent reads.
 
 I think it might be possible to make both ideas play together, by
 not reducing natts further than the last column with a magic default.
 However, that means extra complexity in heap_form_tuple, which would
 invalidate the performance measurements done in support of this patch.

It can have a slight impact on normal scenarios, but I am not sure how much,
because the change will be very small (maybe one extra if check and one
assignment).

For this patch's scenario, I think the major performance benefit is in
heap_fill_tuple(), where the for loop is reduced. However, adding some logic
to heap_form_tuple() can reduce the performance improvement, but there can
still be a space-saving benefit.

With Regards,
Amit Kapila.





Re: [HACKERS] Naming of ORDINALITY column

2013-06-25 Thread Dean Rasheed
On 24 June 2013 04:29, Josh Berkus j...@agliodbs.com wrote:
 On 06/23/2013 08:00 PM, Andrew Gierth wrote:
 OK, let's try to cover all the bases here in one go.

 1. Stick with ?column? as a warning flag that you're not supposed to
 be using this without aliasing it to something.

 How do I actually supply an alias which covers both columns?  What does
 that look like, syntactically?


There are a number of possible syntaxes:

SELECT unnest, "?column?" FROM unnest(ARRAY['x','y']) WITH ORDINALITY;
or
SELECT unnest.unnest, unnest."?column?" FROM unnest(ARRAY['x','y'])
WITH ORDINALITY;
 unnest | ?column?
+--
 x  |1
 y  |2
(2 rows)


SELECT t, "?column?" FROM unnest(ARRAY['x','y']) WITH ORDINALITY AS t;
or
SELECT t.t, t."?column?" FROM unnest(ARRAY['x','y']) WITH ORDINALITY AS t;
 t | ?column?
---+--
 x |1
 y |2
(2 rows)


SELECT val, "?column?" FROM unnest(ARRAY['x','y']) WITH ORDINALITY AS t(val);
or
SELECT t.val, t."?column?" FROM unnest(ARRAY['x','y']) WITH ORDINALITY
AS t(val);
 val | ?column?
-+--
 x   |1
 y   |2
(2 rows)


SELECT val, ord FROM unnest(ARRAY['x','y']) WITH ORDINALITY AS t(val, ord);
or
SELECT t.val, t.ord FROM unnest(ARRAY['x','y']) WITH ORDINALITY AS t(val, ord);
 val | ord
-+-
 x   |   1
 y   |   2
(2 rows)

My suggestion was to replace "?column?" with "ordinality" wherever it
appears above, for the user's convenience, but so far more people
prefer "?column?" as a way of indicating that you're supposed to
provide an alias for the column.

If that's what people prefer, I don't mind --- it's still going to be
a very handy new feature.

Regards,
Dean




Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-25 Thread Dean Rasheed
On 24 June 2013 03:50, Tom Lane t...@sss.pgh.pa.us wrote:
 David Fetter da...@fetter.org writes:
 On Sun, Jun 23, 2013 at 07:44:26AM -0700, Kevin Grittner wrote:
 I think it is OK if that gets a syntax error.  If that's the worst
 case I like this approach.

 I think reducing the usefulness of error messages is something we need
 to think extremely hard about before we do.  Is there maybe a way to
 keep the error messages even if by some magic we manage to unreserve
 the words?


The flip side is that the error message you get if you don't realise a
word is now reserved is not exactly useful: "syntax error at or near
xxx". I don't know of any way to improve that, though.


 Of the alternatives discussed so far, I don't really like anything
 better than adding another special case to base_yylex().  Robert opined
 in the other thread about RESPECT NULLS that the potential failure cases
 with that approach are harder to reason about, which is true, but that
 doesn't mean that we should accept failures we know of because there
 might be failures we don't know of.

 One thing that struck me while thinking about this is that it seems
 like we've been going about it wrong in base_yylex() in any case.
 For example, because we fold WITH followed by TIME into a special token
 WITH_TIME, we will fail on a statement beginning

 WITH time AS ...

 even though time is a legal ColId.  But suppose that instead of
 merging the two tokens into one, we just changed the first token into
 something special; that is, base_yylex would return a special token
 WITH_FOLLOWED_BY_TIME and then TIME.  We could then fix the above
 problem by allowing either WITH or WITH_FOLLOWED_BY_TIME as the leading
 keyword of a statement; and similarly for the few other places where
 WITH can be followed by an arbitrary identifier.

 Going on the same principle, we could probably let FILTER be an
 unreserved keyword while FILTER_FOLLOWED_BY_PAREN could be a
 type_func_name_keyword.  (I've not tried this though.)


I've not tried either, but wouldn't that mean that SELECT * FROM
list_filters() filter would be legal, whereas SELECT * FROM
list_filters() filter(id, val) would be a syntax error? If so, I
don't think that would be an improvement.

Regards,
Dean


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Index on regexes

2013-06-25 Thread Heikki Linnakangas

On 13.06.2013 23:19, Alexander Korotkov wrote:

Hackers,

Attached patch contains an opclass which demonstrates the advantages of
GIN storing additional information, on its own, without the other GIN
improvements. It implements the inverse task of regex indexing. It works
like this: you create an index on regexes and search for the regexes that
match a query string. It introduces two additional operators, |~ and *~,
for case-sensitive and case-insensitive regex-to-string matching, and a
gin_regexp_trgm_ops opclass.
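
For concreteness, usage would presumably look like this (a sketch based on
the description above, with the operator and opclass names from the patch;
untested):

  CREATE TABLE patterns (re text);
  CREATE INDEX patterns_re_idx ON patterns USING gin (re gin_regexp_trgm_ops);

  -- which of the stored regexes match this string? (case-sensitive)
  SELECT re FROM patterns WHERE re |~ 'some sample query string';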


I'm going to step back from this patch and trigrams for a while, and 
ponder in general, how one would do indexing of regexps.


The best approach surely depends a lot on the kinds of regexps and query 
strings involved. For example, how much common structure do the regexps 
share, how easily they can be decomposed into common and differing 
parts. Also, how long are the query strings being used, and how many 
regexps does it need to work efficiently with.


The first thought that comes to my mind is to build a GiST index, where 
the leaf items are the regexps, in a precompiled form. The upper nodes 
are also pre-compiled regexps, which match everything that the child 
regexps contain. For example, if you have the regexps .*foobar.* and 
.*foofoo.* on a leaf page, the upper node might be .*foo.*. In other 
words, the union function forms a regular expression that matches all 
the strings that any of the member regexps match, and may match more. 
Implementing the union function is probably the most difficult part of 
this approach.


In the literature, there is the Aho-Corasick algorithm, which can be used to 
efficiently search for several substrings in a string in one go. I 
believe it can be extended to handle regexps. That does not fit nicely 
into existing GIN or GiST indexes, however, so it would need to be a 
whole new indexam. Also, I'm not sure how it scales as the number of 
regexps searched increases, and incremental updates might be tricky.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-06-25 Thread Ronan Dunklau
Concerning the efficiency problem, it should be noted that the latest 3.3
release of cpython introduces an accelerator for decimal data types, as a
C-module.  This module was previously available from the Python package
index at: https://pypi.python.org/pypi/cdecimal/2.2

It may be overkill to try to include such a dependency, but the performance
overhead from using decimal is really mitigated by this implementation.


2013/6/25 Szymon Guz mabew...@gmail.com

 On 25 June 2013 05:16, Steve Singer st...@ssinger.info wrote:

 On 05/28/2013 04:41 PM, Szymon Guz wrote:

 Hi,
 I've got a patch.

 This is for a plpython enhancement.

  There is an item at the TODO list:
  http://wiki.postgresql.org/wiki/Todo#Server-Side_Languages
 Fix loss of information during conversion of numeric type to Python
 float

  This patch uses the decimal.Decimal type from the Python standard library
  for the plpython function's numeric arguments instead of float.

 Patch contains changes in code, documentation and tests.

 Most probably there is something wrong, as this is my first Postgres
 patch :)


 Thanks for contributing.

  This patch applies cleanly against master and compiles, with warnings:

  plpy_main.c: In function ‘PLy_init_interp’:
  plpy_main.c:157:2: warning: ISO C90 forbids mixed declarations and code
  [-Wdeclaration-after-statement]
  plpy_main.c:161:2: warning: ISO C90 forbids mixed declarations and code
  [-Wdeclaration-after-statement]

 You can avoid this by moving the declaration of decimal and decimal_dict
 to be at the top of the function where mainmod is declared.

  Also in this function you've introduced places where it returns with an
  error (the PLy_elog(ERROR...) calls) before decrementing the reference to
  mainmod. I think you can decrement the mainmod reference after the call to
  SetItemString, before your changes that import the Decimal module.


 The patch works as expected, I am able to write python functions that
 take numerics as arguments and work with them.  I can adjust the decimal
 context precision inside of  my function.

 One concern I have is that this patch makes pl/python functions involving
 numerics more than 3 times as slow as before.


 create temp table b(a numeric);
  insert into b select generate_series(1,10000);

 create or replace function x(a numeric,b numeric) returns numeric as $$
 if a==None:
   return b
 return a+b
 $$ language plpythonu;
 create aggregate sm(basetype=numeric, sfunc=x,stype=numeric);


 test=# select sm(a) from b;
     sm
 ----------
  50005000
 (1 row)

 Time: 565.650 ms

 versus before the patch this was taking in the range of 80ms.

  Would it be faster to call numeric_send instead of numeric_out and then
  convert the sequence of Int16's to a tuple of digits that can be passed
  into the Decimal constructor? I think this is worth trying and testing.


  Documentation
  =============
  Your patched version of the docs says:

    PostgreSQL <type>real</type>, <type>double</type>, and
    <type>numeric</type> are converted to
    Python <type>Decimal</type>. This type is imported
    from <literal>decimal.Decimal</literal>.


  I don't think this is correct; as far as I can tell you're not changing the
  behaviour for the PostgreSQL real and double types, they continue to use
  floating point.



  <listitem>
   <para>
    PostgreSQL <type>real</type> and <type>double</type> are converted to
    Python <type>float</type>.
   </para>
  </listitem>

  <listitem>
   <para>
    PostgreSQL <type>numeric</type> is converted to
    Python <type>Decimal</type>. This type is imported from
    <literal>decimal.Decimal</literal>.
   </para>
  </listitem>


 Hi,
 I've attached a new patch. I've fixed all the problems you've found,
 except for the efficiency problem, which was described in a previous
 email.

 thanks,
 Szymon






Re: [HACKERS] [PATCH] Fix conversion for Decimal arguments in plpython functions

2013-06-25 Thread Szymon Guz
Well, I really don't like the idea of such a dependency.

However, it could be added as a configuration option, so you could compile
postgres with e.g. --with-cdecimal, and then it would be user dependent.
Maybe it is a good idea for another patch.

On 25 June 2013 14:23, Ronan Dunklau rdunk...@gmail.com wrote:

 Concerning the efficiency problem, it should be noted that the latest 3.3
 release of cpython introduces an accelerator for decimal data types, as a
 C-module.  This module was previously available from the Python package
 index at: https://pypi.python.org/pypi/cdecimal/2.2

 It may be overkill to try to include such a dependency, but the
 performance overhead from using decimal is really mitigated by this
 implementation.


 [...]





[HACKERS] Hash partitioning.

2013-06-25 Thread Yuri Levinsky
Hi,

Do we have any plans to implement Hash Partitioning, or am I missing
this feature?

 

Sincerely yours,

Yuri Levinsky, DBA

Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel

Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222

 


Re: backend hangs at immediate shutdown (Re: [HACKERS] Back-branch update releases coming in a couple weeks)

2013-06-25 Thread MauMau

From: Alvaro Herrera alvhe...@2ndquadrant.com

Yeah, I see that --- after removing that early exit, there are unwanted
messages.  And in fact there are some signals sent that weren't
previously sent.  Clearly we need something here: if we're in immediate
shutdown handler, don't signal anyone (because they have already been
signalled) and don't log any more messages; but the cleaning up of
postmaster's process list must still be carried out.

Would you please add that on top of the attached cleaned up version of
your patch?


Thanks.  I'll do that tomorrow at the earliest.


Noah Misch wrote:

On Sun, Jun 23, 2013 at 01:55:19PM +0900, MauMau wrote:



 the clients at the immediate shutdown.  The code gets much simpler.  In
 addition, it may be better that we similarly send SIGKILL in backend
 crash (FatalError) case, eliminate the use of SIGQUIT and remove
 quickdie() and other SIGQUIT handlers.

My take is that the client message has some value, and we shouldn't just
discard it to simplify the code slightly.  Finishing the shutdown quickly
has value, of course.  The relative importance of those things should
guide the choice of a timeout under method #2, but I don't see a rigorous
way to draw the line.  I feel five seconds is, if anything, too high.


There's obviously a lot of disagreement on 5 seconds being too high or
too low.  We have just followed SysV init's precedent of waiting 5
seconds by default between sending signals TERM and QUIT during a
shutdown.  I will note that during a normal shutdown, services are
entitled to do much more work than just signal all their children to
exit immediately; and yet I don't find much evidence that this period is
inordinately short.  I don't feel strongly that it couldn't be shorter,
though.

What about using deadlock_timeout?  It constitutes a rough barometer on the
system's tolerance for failure detection latency, and we already overload it
by having it guide log_lock_waits.  The default of 1s makes sense to me for
this purpose, too.  We can always add a separate immediate_shutdown_timeout
if there's demand, but I don't expect such demand.  (If we did add such a
GUC, setting it to zero could be the way to request method 1.  If some folks
strongly desire method 1, that's probably the way to offer it.)


I dunno.  Having this be configurable seems overkill to me.  But perhaps
that's the way to satisfy most people: some people could set it very
high so that they could have postmaster wait longer if they believe
their server is going to be overloaded; people wishing immediate SIGKILL
could get that too, as you describe.

I think this should be a separate patch, however.


I think so, too.  We can add a parameter later if we find it highly 
necessary after some experience in the field.



Regards
MauMau




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Possible bug in CASE evaluation

2013-06-25 Thread Andres Freund
On 2013-06-24 21:35:53 -0400, Noah Misch wrote:
 On Sat, Jun 22, 2013 at 04:54:50PM +0200, Andres Freund wrote:
  On 2013-06-21 16:45:28 +0200, Andres Freund wrote:
   On 2013-06-21 09:51:05 -0400, Noah Misch wrote:
That being said, if we discover a simple-enough fix that performs well, 
we may
as well incorporate it.
  
   What about passing another parameter down eval_const_expressions_mutator
   (which is static, so changing the API isn't a problem) that basically
   tells us whether we actually should evaluate expressions or just perform
   the transformation?
   There's seems to be basically a couple of places where we call dangerous
   things:
   * simplify_function (via -evaluate_function-evaluate_expr) which is
 called in a bunch of places
   * evaluate_expr which is directly called in T_ArrayExpr
 T_ArrayCoerceExpr
  
   All places I've inspected so far need to deal with simplify_function
   returning NULL anyway, so that seems like a viable fix.
 
  *Prototype* patch - that seems simple enough - attached. Opinions?

 Simple enough, yes.  The other point still stands.

You mean performance? Primarily I still think we should worry about
correctness first and then about performance. And CASE is the
documented (and really the only, without writing procedural code) solution
to use for the cases where evaluation order actually *is* important.

But anyway, the question is how to find realistic cases to measure the
performance of. Obviously you can just make arbitrarily expensive
expressions that can be computed fully during constant folding. Which I
don't find very interesting, do you?

So, what I've done is to measure the performance difference when doing
full-table queries of some CASE-containing system views.

best of 5 every time:
SELECT * FROM pg_stats;
master: 28.287 patched: 28.565

SELECT * FROM information_schema.applicable_roles;
master: 0.757 patched: 0.755

SELECT * FROM information_schema.attributes;
master: 8.392 patched: 8.555

SELECT * FROM information_schema.column_privileges;
master: 90.853 patched: 88.551

SELECT * FROM information_schema.columns;
master: 259.436 patched: 274.145

SELECT * FROM information_schema.constraint_column_usage ;
master: 14.736 patched 15.005

SELECT * FROM information_schema.parameters;
master: 76.173 patched: 79.850

SELECT * FROM information_schema.routines;
master: 45.102 patched: 46.517 ms

...

So, on those queries there's some difference (I've left out the ones
which are too short), but it's not big.

Now, for the other extreme, the following completely random query I just
typed out:
SELECT f FROM (SELECT (CASE g.i WHEN -1 THEN 0 WHEN 1 THEN 3.0/1 WHEN g.i THEN 
2.0/3 END) f FROM generate_series(1, 100) g(i)) s WHERE f = 0;
master: 491.931 patched: 943.629

suffers far worse because the division is so expensive...

Greetings,

Andres Freund

--
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Possible bug in CASE evaluation

2013-06-25 Thread Pavel Stehule
2013/6/25 Andres Freund and...@2ndquadrant.com:
 [...]
 Now, for the other extreme, the following completely random query I just
 typed out:
 SELECT f FROM (SELECT (CASE g.i WHEN -1 THEN 0 WHEN 1 THEN 3.0/1 WHEN g.i 
 THEN 2.0/3 END) f FROM generate_series(1, 100) g(i)) s WHERE f = 0;
 master: 491.931 patched: 943.629

 suffers far worse because the division is so expensive...

:-(

that is too high a price

Pavel



 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Bruce Momjian
On Tue, Jun 25, 2013 at 03:48:19PM +0300, Yuri Levinsky wrote:
 Hi,
 
 Do we have any plans to implement Hash Partitioning, or am I missing this
 feature?

You can do it by writing your own constraint and trigger functions that
control the hashing.
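
For example, a minimal sketch of that approach (the table, column, and
partition names here are made up for illustration; hashtext() is the
built-in text hashing function):

  CREATE TABLE users (user_id int, name text);
  CREATE TABLE users_p0 (CHECK (abs(hashtext(user_id::text)) % 4 = 0)) INHERITS (users);
  CREATE TABLE users_p1 (CHECK (abs(hashtext(user_id::text)) % 4 = 1)) INHERITS (users);
  CREATE TABLE users_p2 (CHECK (abs(hashtext(user_id::text)) % 4 = 2)) INHERITS (users);
  CREATE TABLE users_p3 (CHECK (abs(hashtext(user_id::text)) % 4 = 3)) INHERITS (users);

  CREATE FUNCTION users_insert() RETURNS trigger AS $$
  BEGIN
    -- route each row to the partition its hash maps to
    EXECUTE 'INSERT INTO users_p' || abs(hashtext(NEW.user_id::text)) % 4
            || ' SELECT ($1).*' USING NEW;
    RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER users_insert_trg BEFORE INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE users_insert();

(As discussed later in this thread, constraint exclusion will not prune
these partitions for a query like WHERE user_id = 42, so selects have to
target the right partition explicitly.)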

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-25 Thread Fujii Masao
On Tue, Jun 25, 2013 at 6:33 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 On Tue, Jun 25, 2013 at 5:33 PM, Misa Simic misa.si...@gmail.com wrote:
 Hi,

 Where can we find the latest snapshot for the 9.3 version?

 We have taken latest snapshot from
 http://ftp.postgresql.org/pub/snapshot/dev/

 But it seems it is for 9.4 version...
 9.3 has moved to branch REL9_3_STABLE a couple of days ago.

Yes. We can find the snapshot from REL9_3_STABLE git branch.
http://git.postgresql.org/gitweb/?p=postgresql.git;a=shortlog;h=refs/heads/REL9_3_STABLE

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch for fail-back without fresh backup

2013-06-25 Thread Sawada Masahiko
On Tue, Jun 25, 2013 at 12:19 PM, Pavan Deolasee
pavan.deola...@gmail.com wrote:



 On Mon, Jun 24, 2013 at 7:17 PM, Sawada Masahiko sawada.m...@gmail.com
 wrote:



 
 [Server]
 standby_name = 'slave1'
 synchronous_transfer = commit
 wal_sender_timeout = 30
 [Server]
 standby_name = 'slave2'
 synchronous_transfer = all
 wal_sender_timeout = 50
 ---


 What different values/modes are you thinking of for synchronous_transfer? IMHO
 only commit and all may not be enough. As I suggested upthread, we may
 need an additional mode, say data, which will ensure synchronous WAL
 transfer before making any file system changes. We need this separate mode
 because the failback safe (or whatever we call it) standby need not wait on
 the commits and it's important to avoid that wait since it comes in a direct
 path of client transactions.

 If we are doing it, I wonder if an additional mode none also makes sense
 so that users can also control asynchronous standbys via the same mechanism.

I made a mistake with the parameter names, mixing up
synchronous_transfer and failback_safe_standby_mode.
It means that we control file system changes using
failback_safe_standby_mode. If failback_safe_standby_mode is set to
'remote_flush', the master server waits for all data pages in the
standby server (e.g., CLOG, pg_control) to be flushed.
Right?
for example:
--
[server]
standby_name = 'slave1'
failback_safe_standby_mode = remote_flush
wal_sender_timeout = 50
--

In this case, we should also set synchronous_commit and
synchronous_level for each standby server. That is, do we need to set the
following 3 parameters to support cases 3 and 4 as I said?
- synchronous_commit = on/off/local/remote_write
- failback_safe_standby_mode = off/remote_write/remote_flush
- synchronous_level = sync/async (this parameter indicates whether the
standby server is connected in sync or async mode)

please give me your feedback.

Regards,
---
Sawada Masahiko


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL 9.3 latest dev snapshot

2013-06-25 Thread Michael Paquier

On 2013/06/25, at 22:23, Fujii Masao masao.fu...@gmail.com wrote:

 On Tue, Jun 25, 2013 at 6:33 PM, Michael Paquier
 michael.paqu...@gmail.com wrote:
 On Tue, Jun 25, 2013 at 5:33 PM, Misa Simic misa.si...@gmail.com wrote:
 Hi,
 
  Where can we find the latest snapshot for the 9.3 version?
 
 We have taken latest snapshot from
 http://ftp.postgresql.org/pub/snapshot/dev/
 
 But it seems it is for 9.4 version...
 9.3 has moved to branch REL9_3_STABLE a couple of days ago.
 
 Yes. We can find the snapshot from REL9_3_STABLE git branch.
 http://git.postgresql.org/gitweb/?p=postgresql.git;a=shortlog;h=refs/heads/REL9_3_STABLE
Indeed, I completely forgot that you can download snapshots from 
postgresql.org's git. Simply use that instead of the FTP server for now, as 
long as 9.3 snapshots are not generated there.
--
Michael
(Sent from my mobile phone)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-25 Thread Tom Lane
Dean Rasheed dean.a.rash...@gmail.com writes:
 On 24 June 2013 03:50, Tom Lane t...@sss.pgh.pa.us wrote:
 Going on the same principle, we could probably let FILTER be an
 unreserved keyword while FILTER_FOLLOWED_BY_PAREN could be a
 type_func_name_keyword.  (I've not tried this though.)

 I've not tried either, but wouldn't that mean that SELECT * FROM
 list_filters() filter would be legal, whereas SELECT * FROM
 list_filters() filter(id, val) would be a syntax error? If so, I
 don't think that would be an improvement.

Hm, good point.  The SQL committee really managed to choose some
unfortunate syntax here, didn't they.

I know it's heresy in these parts, but maybe we should consider
adopting a non-spec syntax for this feature?  In particular, it's
really un-obvious why the FILTER clause shouldn't be inside rather
than outside the aggregate's parens, like ORDER BY.
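
For comparison, the two shapes side by side (table and column names are
placeholders, and the second statement is hypothetical, purely to
illustrate the idea):

  -- SQL-spec syntax, with FILTER outside the parens:
  SELECT count(*) FILTER (WHERE x > 1) FROM t;

  -- a non-spec alternative, with FILTER inside the parens like ORDER BY:
  SELECT count(* FILTER WHERE x > 1) FROM t;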

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-25 Thread Pavel Stehule
2013/6/25 Tom Lane t...@sss.pgh.pa.us:
 Dean Rasheed dean.a.rash...@gmail.com writes:
 On 24 June 2013 03:50, Tom Lane t...@sss.pgh.pa.us wrote:
 Going on the same principle, we could probably let FILTER be an
 unreserved keyword while FILTER_FOLLOWED_BY_PAREN could be a
 type_func_name_keyword.  (I've not tried this though.)

 I've not tried either, but wouldn't that mean that SELECT * FROM
 list_filters() filter would be legal, whereas SELECT * FROM
 list_filters() filter(id, val) would be a syntax error? If so, I
 don't think that would be an improvement.

 Hm, good point.  The SQL committee really managed to choose some
 unfortunate syntax here, didn't they.

 I know it's heresy in these parts, but maybe we should consider
 adopting a non-spec syntax for this feature?  In particular, it's
 really un-obvious why the FILTER clause shouldn't be inside rather
 than outside the aggregate's parens, like ORDER BY.

the grammar can be more permissive and the final decision can be made in
later stages ???

This technique was enough when I wrote the prototype for CUBE ROLLUP
without CUBE and ROLLUP being reserved keywords.

Regards

Pavel


 regards, tom lane




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Pavel Stehule
Hello



2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
 Hi Pavel,

 I have gone through the discussion here and found that with this patch we
 enable the new error fields in plpgsql. It is a simple patch to expose the
 new error fields in plpgsql.

 The patch applies cleanly. make and make install went smoothly. make check
 was smooth too. The patch also includes test coverage.

 I tested the functionality with manual tests and it is working as expected.

 BTW in the patch I found one additional new line in read_raise_options():

 @@ -3631,7 +3661,23 @@ read_raise_options(void)
         else if (tok_is_keyword(tok, &yylval,
                                 K_HINT, "hint"))
             opt->opt_type = PLPGSQL_RAISEOPTION_HINT;
 +       else if (tok_is_keyword(tok, &yylval,
 +                               K_COLUMN_NAME, "column_name"))
 +           opt->opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
 +       else if (tok_is_keyword(tok, &yylval,
 +                               K_CONSTRAINT_NAME, "constraint_name"))
 +           opt->opt_type = PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
 +       else if (tok_is_keyword(tok, &yylval,
 +                               K_DATATYPE_NAME, "datatype_name"))
 +           opt->opt_type = PLPGSQL_RAISEOPTION_DATATYPE_NAME;
 +       else if (tok_is_keyword(tok, &yylval,
 +                               K_TABLE_NAME, "table_name"))
 +           opt->opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
 +       else if (tok_is_keyword(tok, &yylval,
 +                               K_SCHEMA_NAME, "schema_name"))
 +           opt->opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
         else
 +
             yyerror("unrecognized RAISE statement option");

 Can you please remove that?

the cleaned-up patch is attached


 Apart from that patch looks good to me.

:) thank you for review

Regards

Pavel Stehule


 Thanks,
 Rushabh


 On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
  2013/2/1 Peter Eisentraut pete...@gmx.net:
  On 2/1/13 8:00 AM, Pavel Stehule wrote:
  2013/2/1 Marko Tiikkaja pgm...@joh.to:
  On 2/1/13 1:47 PM, Pavel Stehule wrote:
 
  now a most hard work is done and I would to enable access to new
  error fields from plpgsql.
 
 
  Is there a compelling reason why we wouldn't provide these already in
  9.3?
 
  the time for assigning to the last commitfest is over.
 
  this patch is relatively simple and really close to the enhanced error
  fields feature - but it depends on whether some committer will have time
  to commit it to 9.3 - so I am expecting the primary target to be 9.4, but
  I will not be angry if it is committed earlier.
 
  If we don't have access to those fields on PL/pgSQL, what was the point
  of the patch to begin with?  Surely, accessing them from C wasn't the
  main use case?
 
 
  These fields are available to application developers now. But it is true
  that without this patch the GET STACKED DIAGNOSTICS statement will
  not be fully consistent, because some fields are accessible and others
  are not.

 there is one stronger argument for committing this patch now. With this
 patch, we are able to write regression tests for the new fields via
 plpgsql.

 Regards

 Pavel

 
  Pavel






 --
 Rushabh Lathia


enhanced_error_fields_plpgsql.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Yuri Levinsky
Bruce,
Many thanks. According to the PostgreSQL documentation, only range and
list partitioning are supported. My question is: if I follow your
advice, will PostgreSQL do partition pruning on select? My
expectation is:
I divided my table into 128 hash partitions according to, let's say, user_id.
When I do select * from users where user_id=?, I expect the
engine to select from the particular partition determined by my function.
The issue is critical when you are working with big tables that you can't
sensibly partition by range/list. The feature allows parallel selects from
such a table: each thread might select from its own dedicated partition.
The feature also (mainly) allows decreasing the b-tree depth of an index on
the partition key column by dividing the index into smaller parts.

Sincerely yours,


Yuri Levinsky, DBA
Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel
Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222


-Original Message-
From: Bruce Momjian [mailto:br...@momjian.us] 
Sent: Tuesday, June 25, 2013 4:21 PM
To: Yuri Levinsky
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Hash partitioning.

On Tue, Jun 25, 2013 at 03:48:19PM +0300, Yuri Levinsky wrote:
 Hi,
 
 Do we have any plans to implement Hash Partitioning, or am I missing
 this feature?

You can do it by writing your own constraint and trigger functions that
control the hashing.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

This mail was received via Mail-SeCure System.




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Bruce Momjian
On Tue, Jun 25, 2013 at 05:19:47PM +0300, Yuri Levinsky wrote:
 Bruce,
 Many thanks. According to the PostgreSQL documentation, only range and
 list partitioning are supported. My question is: if I follow your
 advice, will PostgreSQL do partition pruning on select? My
 expectation is:
 I divided my table into 128 hash partitions according to, let's say, user_id.
 When I do select * from users where user_id=?, I expect the
 engine to select from the particular partition determined by my function.
 The issue is critical when you are working with big tables that you can't
 sensibly partition by range/list. The feature allows parallel selects from
 such a table: each thread might select from its own dedicated partition.
 The feature also (mainly) allows decreasing the b-tree depth of an index on
 the partition key column by dividing the index into smaller parts.

Uh, where do you see that we only support range and list?  You aren't
using an EnterpriseDB closed-source product, are you?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-06-25 Thread Robert Haas
On Sun, Jun 23, 2013 at 10:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 David Fetter da...@fetter.org writes:
 On Sun, Jun 23, 2013 at 07:44:26AM -0700, Kevin Grittner wrote:
 I think it is OK if that gets a syntax error.  If that's the worst
 case I like this approach.

 I think reducing the usefulness of error messages is something we need
 to think extremely hard about before we do.  Is there maybe a way to
 keep the error messages even if by some magic we manage to unreserve
 the words?

 Of the alternatives discussed so far, I don't really like anything
 better than adding another special case to base_yylex().  Robert opined
 in the other thread about RESPECT NULLS that the potential failure cases
 with that approach are harder to reason about, which is true, but that
 doesn't mean that we should accept failures we know of because there
 might be failures we don't know of.

Sure, that's true; but the proposal on the other thread is just to
disallow invalid syntax early enough that it benefits the parser.  The
error message is different, but I don't think it's a BAD error
message.

 One thing that struck me while thinking about this is that it seems
 like we've been going about it wrong in base_yylex() in any case.
 For example, because we fold WITH followed by TIME into a special token
 WITH_TIME, we will fail on a statement beginning

 WITH time AS ...

 even though time is a legal ColId.  But suppose that instead of
 merging the two tokens into one, we just changed the first token into
 something special; that is, base_yylex would return a special token
 WITH_FOLLOWED_BY_TIME and then TIME.  We could then fix the above
 problem by allowing either WITH or WITH_FOLLOWED_BY_TIME as the leading
 keyword of a statement; and similarly for the few other places where
 WITH can be followed by an arbitrary identifier.

 Going on the same principle, we could probably let FILTER be an
 unreserved keyword while FILTER_FOLLOWED_BY_PAREN could be a
 type_func_name_keyword.  (I've not tried this though.)

I think this whole direction is going to collapse under its own weight
VERY quickly.  The problems you're describing are essentially
shift/reduce conflicts that are invisible because they're hidden
behind lexer magic.  Part of the value of using a parser generator is
that it TELLS you when you've added ambiguous syntax.  But it doesn't
know about lexer hacks, so stuff will just silently break.  I think
this type of lexer hack works reasonably well for keyword-like things
that are used in just one place in the grammar.  As soon as you get up
to two, the wheels come off - as with RESPECT NULLS vs. NULLS FIRST.

 This idea doesn't help much for OVER because one of the alternatives for
 over_clause is OVER ColId, and I doubt we want to have base_yylex know
 all the alternatives for ColId.  I also had no great success with the
 NULLS FIRST/LAST case: AFAICT the substitute token for NULLS still has
 to be fully reserved, meaning that something like select nulls last
 still doesn't work without quoting.  We could maybe fix that with enough
 denormalization of the index_elem productions, but it'd be ugly.

I don't think that particular example is very compelling - there's a
general rule that column aliases can't be keywords of any type.
That's not wonderful, and EnterpriseDB has had bug reports filed about
it, but the real-world impact is pretty minimal, certainly compared to
what we used to do, which was to not allow column aliases AT ALL.
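
Concretely, the rule is: with AS, any keyword can serve as a column
alias; without AS, only a plain identifier can. So, assuming FILTER is
made a keyword as proposed:

  SELECT 1 AS filter;   -- still accepted
  SELECT 1 filter;      -- rejected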

 It'd sure be interesting to know what the SQL committee's target parsing
 algorithm is.  I find it hard to believe they're uninformed enough to
 not know that these random syntaxes they keep inventing are hard to deal
 with in LALR(1).  Or maybe they really don't give a damn about breaking
 applications every time they invent a new reserved word?

Does the SQL committee contemplate that SELECT * FROM somefunc()
filter (id, val) should act as a table alias and that SELECT * FROM
somefunc() filter (where x > 1) is an aggregate filter?  This all gets
much easier to understand if one of those constructs isn't allowed in
that particular context.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Robert Haas
On Tue, Jun 25, 2013 at 9:21 AM, Bruce Momjian br...@momjian.us wrote:
 On Tue, Jun 25, 2013 at 03:48:19PM +0300, Yuri Levinsky wrote:
 Hi,

 Do we have any plans to implement Hash Partitioning, or am I missing this
 feature?

 You can do it by writing your own constraint and trigger functions that
 control the hashing.

Not really.  Constraint exclusion won't kick in for a constraint like
CHECK (hashme(a) % 16 = 3) and a WHERE clause of the form a = 42.

Of course, since partitioning generally doesn't improve performance in
PostgreSQL anyway, it's not clear why you'd want to do this in the
first place.  But the fact that constraint exclusion won't work if you
do is kind of a knockout blow.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Bruce Momjian
On Tue, Jun 25, 2013 at 11:02:40AM -0400, Robert Haas wrote:
 On Tue, Jun 25, 2013 at 9:21 AM, Bruce Momjian br...@momjian.us wrote:
  On Tue, Jun 25, 2013 at 03:48:19PM +0300, Yuri Levinsky wrote:
  Hi,
 
  Do we have any plans to implement Hash Partitioning, or am I missing this
  feature?
 
  You can do it by writing your own constraint and trigger functions that
  control the hashing.
 
 Not really.  Constraint exclusion won't kick in for a constraint like
 CHECK (hashme(a) % 16 = 3) and a WHERE clause of the form a = 42.

Uh, I thought we checked the constant against every CHECK constraint and
only scanned partitions that matched.  Why does this not work?

 Of course, since partitioning generally doesn't improve performance in
 PostgreSQL anyway, it's not clear why you'd want to do this in the

I think partitioning does improve performance by reducing index depth.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-25 Thread Andrew Dunstan


On 06/24/2013 07:24 PM, Cédric Villemain wrote:

On Tuesday 25 June 2013 00:18:26, Andrew Dunstan wrote:

On 06/24/2013 04:02 PM, Cédric Villemain wrote:

WIth extension, we do have to set VPATH explicitely if we want to use
VPATH (note that contribs/extensions must not care that postgresql has
been built with or without VPATH set). My patches try to fix that.

No, this is exactly what I'm objecting to. I want to be able to do:

 invoke_vpath_magic
 standard_make_commands_as_for_non_vpath

Your patches have been designed to overcome your particular issues, but
they don't meet the general case, IMNSHO. This is why I want to have
more discussion about how vpath builds could work for PGXS modules.

The patch does not restrict anything, it is not supposed to lead to
regression.
The assignment of VPATH and srcdir are wrong in the PGXS case, the patch
correct them. You're still free to do make VPATH=/mypath ... the VPATH
provided won't be erased by the pgxs.mk logic.




I still think this is premature.  I have spent some more time trying to 
make it work the way I think it should, so far without success. I think 
we need more discussion about how we'd like VPATH to work for PGXS. 
There's no urgency about this - we've lived with the current 
situation for quite a while.


When I have more time I will work on it some more.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bugfix and new feature for PGXS

2013-06-25 Thread Cédric Villemain
On Tuesday 25 June 2013 17:18:51, Andrew Dunstan wrote:
 On 06/24/2013 07:24 PM, Cédric Villemain wrote:
  On Tuesday 25 June 2013 00:18:26, Andrew Dunstan wrote:
  On 06/24/2013 04:02 PM, Cédric Villemain wrote:
  WIth extension, we do have to set VPATH explicitely if we want to use
  VPATH (note that contribs/extensions must not care that postgresql has
  been built with or without VPATH set). My patches try to fix that.
  
  No, this is exactly what I'm objecting to. I want to be able to do:
   invoke_vpath_magic
   standard_make_commands_as_for_non_vpath
  
  Your patches have been designed to overcome your particular issues, but
  they don't meet the general case, IMNSHO. This is why I want to have
  more discussion about how vpath builds could work for PGXS modules.
  
  The patch does not restrict anything, it is not supposed to lead to
  regression.
  The assignment of VPATH and srcdir are wrong in the PGXS case, the patch
  correct them. You're still free to do make VPATH=/mypath ... the VPATH
  provided won't be erased by the pgxs.mk logic.
 
  I still think this is premature.  I have spent some more time trying to
  make it work the way I think it should, so far without success. I think
  we need more discussion about how we'd like VPATH to work for PGXS.
  There's no urgency about this - we've lived with the current
  situation for quite a while.

Sure...
I did a quick and dirty patch (I just validated it, without a lot of testing),
attached to this email, to fix json_build (at least for make and make install).
As you're constructing targets based on commands to execute in the srcdir
directory, and because srcdir is only set in pgxs.mk, it is possible to define
srcdir early in the json_build/Makefile and use it.

 When I have more time I will work on it some more.

Thank you

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Development, Expertise and Training
diff --git a/Makefile b/Makefile
index ce10008..87582d1 100644
--- a/Makefile
+++ b/Makefile
@@ -1,24 +1,28 @@
 EXTENSION= json_build
-EXTVERSION   = $(shell grep default_version $(EXTENSION).control | sed -e "s/default_version[[:space:]]*=[[:space:]]*'\([^']*\)'/\1/")
 
-DATA = $(filter-out $(wildcard sql/*--*.sql),$(wildcard sql/*.sql))
-DOCS = $(wildcard doc/*.md)
+srcdir = $(dir $(firstword $(MAKEFILE_LIST)))
+EXTVERSION = $(shell grep default_version $(srcdir)/$(EXTENSION).control | sed -e "s/default_version[[:space:]]*=[[:space:]]*'\([^']*\)'/\1/")
+
+DATA = $(filter-out $(wildcard $(srcdir)/sql/*--*.sql),$(wildcard $(srcdir)/sql/*.sql))
+DOCS = $(wildcard $(srcdir)/doc/*.md)
 USE_MODULE_DB = 1
-TESTS= $(wildcard test/sql/*.sql)
+TESTS= $(wildcard $(srcdir)/test/sql/*.sql)
 REGRESS_OPTS = --inputdir=test --outputdir=test \
 	--load-extension=$(EXTENSION)
-REGRESS  = $(patsubst test/sql/%.sql,%,$(TESTS))
+REGRESS  = $(patsubst $(srcdir)/test/sql/%.sql,%,$(TESTS))
 MODULE_big  = $(EXTENSION)
-OBJS = $(patsubst src/%.c,src/%.o,$(wildcard src/*.c))
+OBJS = $(patsubst $(srcdir)/src/%.c,$(srcdir)/src/%.o,$(wildcard $(srcdir)/src/*.c))
 PG_CONFIG= pg_config
 
 all: sql/$(EXTENSION)--$(EXTVERSION).sql
 
 sql/$(EXTENSION)--$(EXTVERSION).sql: sql/$(EXTENSION).sql
+	$(MKDIR_P) sql
 	cp $< $@
 
+
 DATA_built = sql/$(EXTENSION)--$(EXTVERSION).sql
-DATA = $(filter-out sql/$(EXTENSION)--$(EXTVERSION).sql, $(wildcard sql/*--*.sql))
+DATA = $(filter-out $(srcdir)/sql/$(EXTENSION)--$(EXTVERSION).sql, $(wildcard $(srcdir)/sql/*--*.sql))
 EXTRA_CLEAN = sql/$(EXTENSION)--$(EXTVERSION).sql
 
 PGXS := $(shell $(PG_CONFIG) --pgxs)




Re: [HACKERS] GIN improvements part 3: ordering in index

2013-06-25 Thread Heikki Linnakangas

On 25.06.2013 01:24, Alexander Korotkov wrote:

On Wed, Jun 19, 2013 at 1:21 AM, Alexander Korotkov aekorot...@gmail.com wrote:


On Mon, Jun 17, 2013 at 10:27 PM, Heikki Linnakangas
hlinnakan...@vmware.com  wrote:

That has some obvious limitations. First of all, you can run out of
memory.


Yes, it is so. qsort should be replaced with tuplesort.


In the attached patch, qsort is replaced with tuplesort. As expected, it
leads to some performance drawback, but it's not dramatic.
Also, some documentation is added for the new distance method of GIN.


Thanks. But the fact remains that with this patch, you fetch all the 
tuples and then you sort them, which is exactly the same thing you do 
without the patch. The way it happens without the patch just seems to be 
slower. Time to break out the profiler..


I downloaded what I believe to be the same DBLP titles dataset that you 
tested with. Without the patch:


postgres=# explain analyze select * from dblp_titles where tsvector @@
to_tsquery('english', 'statistics') order by ts_rank(tsvector,
to_tsquery('english', 'statistics')) limit 10;
                                   QUERY PLAN
------------------------------------------------------------------------------------------
 Limit  (cost=42935.81..42935.84 rows=10 width=64) (actual time=57.945..57.948 rows=10 loops=1)
   ->  Sort  (cost=42935.81..42980.86 rows=18020 width=64) (actual time=57.943..57.944 rows=10 loops=1)
         Sort Key: (ts_rank(tsvector, '''statist'''::tsquery))
         Sort Method: top-N heapsort  Memory: 28kB
         ->  Bitmap Heap Scan on dblp_titles  (cost=211.66..42546.41 rows=18020 width=64) (actual time=13.061..46.358 rows=15142 loops=1)
               Recheck Cond: (tsvector @@ '''statist'''::tsquery)
               ->  Bitmap Index Scan on gin_title  (cost=0.00..207.15 rows=18020 width=0) (actual time=8.339..8.339 rows=15142 loops=1)
                     Index Cond: (tsvector @@ '''statist'''::tsquery)
 Total runtime: 57.999 ms
(9 rows)

And the profile looks like this:

 6,94%  postgres      tbm_iterate
 6,12%  postgres      hash_search_with_hash_value
 4,40%  postgres      tbm_comparator
 3,79%  libc-2.17.so  __memcpy_ssse3_back
 3,68%  postgres      heap_hot_search_buffer
 2,62%  postgres      slot_deform_tuple
 2,47%  postgres      nocachegetattr
 2,37%  postgres      heap_page_prune_opt
 2,27%  libc-2.17.so  __memcmp_sse4_1
 2,21%  postgres      heap_fill_tuple
 2,18%  postgres      pg_qsort
 1,96%  postgres      tas
 1,88%  postgres      palloc0
 1,83%  postgres      calc_rank_or

Drilling into that, the tbm_iterate, tbm_comparator and pg_qsort calls come 
from the Tidbitmap code, as well as about 1/3 of the 
hash_search_with_hash_value calls. There seems to be a fair amount of 
overhead in building and iterating the tid bitmap. Is that what's 
killing us?


For comparison, this is the same with your patch:

postgres=# explain analyze select * from dblp_titles where tsvector @@
to_tsquery('english', 'statistics') order by tsvector 
to_tsquery('english', 'statistics') limit 10;
                                   QUERY PLAN
------------------------------------------------------------------------------------------
 Limit  (cost=16.00..52.81 rows=10 width=136) (actual time=9.957..9.980 rows=10 loops=1)
   ->  Index Scan using gin_title on dblp_titles  (cost=16.00..52198.94 rows=14175 width=136) (actual time=9.955..9.977 rows=10 loops=1)
         Index Cond: (tsvector @@ '''statist'''::tsquery)
         Order By: (tsvector  '''statist'''::tsquery)
 Total runtime: 10.084 ms
(5 rows)

 9,57%  postgres  scanGetItemFast
 7,02%  postgres  calc_rank_or
 5,71%  postgres  FunctionCall10Coll
 5,59%  postgres  entryGetNextItem
 5,19%  postgres  keyGetOrdering
 5,13%  postgres  ginDataPageLeafReadItemPointer
 4,89%  postgres  entryShift
 4,85%  postgres  ginCompareItemPointers
 3,44%  postgres  ginDataPageLeafRead
 3,28%  postgres  AllocSetAlloc
 3,27%  postgres  insertScanItem
 3,18%  postgres  gin_tsquery_distance
 2,38%  postgres  puttuple_common
 2,26%  postgres  checkcondition_gin
 2,20%  postgres  cmpEntries
 2,17%  postgres  AllocSetFreeIndex
 2,11%  postgres  calc_rank

Unsurprisingly, the tidbitmap overhead is not visible in the profile 
with your patch.


To see how much of the difference is caused by the tidbitmap overhead, I 
wrote the attached quick & dirty patch (it will crash and burn with 
queries that require tidbitmap unions or intersects etc.). When there 
are fewer than 10 items on a page, the tidbitmap keeps the offsets of 
those items in an ordered array of offsets, instead of setting the bits 
in the 

Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Robert Haas
On Tue, Jun 25, 2013 at 11:06 AM, Bruce Momjian br...@momjian.us wrote:
 Not really.  Constraint exclusion won't kick in for a constraint like
 CHECK (hashme(a) % 16 = 3) and a WHERE clause of the form a = 42.

 Uh, I thought we checked the constant against every CHECK constraint and
 only scanned partitions that matched.  Why does this not work?

That's a pretty fuzzy description of what we do.  For this to work,
we'd have to be able to use the predicate a = 42 to prove that
hashme(a) % 16 = 3 is false.  But we can't actually substitute 42 in
for a and then evaluate hashme(42) % 16 = 3, because we don't know
that the a = 42 in the WHERE clause means exact equality for all
purposes, only that it means "has the numerically same value".  For
integers, equality under = is sufficient to prove equivalence.

But for numeric values, for example, it is not.  The values
'42'::numeric and '42.0'::numeric are equal according to =(numeric,
numeric), but they are not the same.  If the hashme() function did
something like length($1::text), it would get different answers for
those two values.  IOW, the theorem prover has no way of knowing that
the hash function provided has semantics that are compatible with the
opclass of the operator used in the query.
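
A quick way to see the hazard (any function that is sensitive to the text
form of the value will do):

  SELECT '42'::numeric = '42.0'::numeric AS equal_under_eq,  -- t
         length('42'::numeric::text)     AS len_a,           -- 2
         length('42.0'::numeric::text)   AS len_b;           -- 4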

 Of course, since partitioning generally doesn't improve performance in
 PostgreSQL anyway, it's not clear why you'd want to do this in the

 I think partitioning does improve performance by reducing index depth.

Generally, I think traversing an extra level of the index is cheaper
than opening extra relations and going through the theorem-prover
machinery.  There are benefits to partitioning, but they have to do
with management - e.g. each partition can be vacuumed independently;
old partitions can be dropped more efficiently than you can
bulk-delete rows spread throughout a table - rather than performance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fix pgstattuple/pgstatindex to use regclass-type as the argument

2013-06-25 Thread Robert Haas
On Thu, Jun 20, 2013 at 2:32 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Thu, Jun 20, 2013 at 11:43 AM, Satoshi Nagayasu sn...@uptime.jp wrote:
 (2013/06/17 4:02), Fujii Masao wrote:

 On Sat, Mar 9, 2013 at 3:23 PM, Satoshi Nagayasu sn...@uptime.jp wrote:

 It is obviously easy to keep two types of function interfaces,
 one with regclass-type and another with text-type, in the next
 release for backward-compatibility like below:

 pgstattuple(regclass)  -- safer interface.
 pgstattuple(text)  -- will be depreciated in the future release.


 So you're thinking to remove pgstattuple(oid) soon?


 AFAIK, a regclass-type argument would accept an OID value,
 which means the regclass type is upward-compatible with the
 oid type.

 So, even if the declaration is changed, compatibility can
 be kept in practice. This test case (in sql/pgstattuple.sql)
 confirms that.

 
 select * from pgstatindex('myschema.test_pkey'::regclass::oid);
  version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
 ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
        2 |          0 |       8192 |             1 |              0 |          1 |           0 |             0 |             0.79 |                  0
 (1 row)
 


 Having both interfaces for a while would allow users to have enough
 time to rewrite their applications.

 Then, we will be able to obsolete (or just drop) old interfaces
 in the future release, maybe 9.4 or 9.5. I think this approach
 would minimize the impact of such an interface change.

 So, I think we can clean up function arguments in the pgstattuple
 module, and also we can have two interfaces, both regclass and text,
 for the next release.

 Any comments?


 In the document, you should mark old functions as deprecated.


 I'm still considering changing the function name as Tom pointed
 out. How about pgstatbtindex?

 Or just adding pgstatindex(regclass)?

 In fact, pgstatindex does support only BTree index.
 So, pgstatbtindex seems to be more appropriate for this function.

 Can most ordinary users realize "bt" means btree?

 We can keep having both (old) pgstatindex and (new) pgstatbtindex
 during next 2-3 major releases, and the old one will be deprecated
 after that.

 Since pg_relpages(oid) doesn't exist, pg_relpages() is in the same
 situation as pgstatindex(), i.e., we cannot just replace pg_relpages(text)
 with pg_relpages(regclass) for the backward-compatibility. How do you
 think we should solve the pg_relpages() problem? Rename? Just
 add pg_relpages(regclass)?

Adding a function with a new name seems likely to be smoother, since
that way you don't have to worry about problems with function calls
being thought ambiguous.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Bruce Momjian
On Tue, Jun 25, 2013 at 11:15:24AM -0400, Robert Haas wrote:
 On Tue, Jun 25, 2013 at 11:06 AM, Bruce Momjian br...@momjian.us wrote:
  Not really.  Constraint exclusion won't kick in for a constraint like
  CHECK (hashme(a) % 16 == 3) and a WHERE clause of the form  a = 42.
 
  Uh, I thought we checked the constant against every CHECK constraint and
  only scanned partitions that matched.  Why does this not work?
 
 That's a pretty fuzzy description of what we do.  For this to work,
 we'd have to be able to use the predicate a = 42 to prove that
 hashme(a) % 16 = 3 is false.  But we can't actually substitute 42 in
 for a and then evaluate hashme(42) % 16  = 3, because we don't know
 that the a = 42 in the WHERE clause means exact equality for all
 purposes, only that it means has the numerically same value.  For
 integers, equality under = is sufficient to prove equivalence.
 
 But for numeric values, for example, it is not.  The values
 '42'::numeric and '42.0'::numeric are equal according to =(numeric,
 numeric), but they are not the same.  If the hashme() function did
 something like length($1::text), it would get different answers for
 those two values.  IOW, the theorem prover has no way of knowing that
 the hash function provided has semantics that are compatible with the
 opclass of the operator used in the query.

I looked at predtest.c but I can't see how we accept >= and <= ranges,
but not CHECK (a % 16 == 3).  Is it the '%' operator?  I am not sure why
the hashme() function is there.  Wouldn't it work if hashme() was an
immutable function?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] C++ compiler

2013-06-25 Thread Joshua D. Drake


On 06/24/2013 09:16 PM, Tom Lane wrote:

Bruce Momjian br...@momjian.us writes:

Right. I don't think there are any C features we want to avoid;  are
there any?


We're avoiding C99-and-later features that are not in C89, such as //
for comments, as well as more useful things.  It might be time to
reconsider whether we should move the baseline portability requirement
up to C99.


The problem here is that we lose the MS compilers, which are not being
updated for C99.


JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Robert Haas
On Tue, Jun 25, 2013 at 11:45 AM, Bruce Momjian br...@momjian.us wrote:
 On Tue, Jun 25, 2013 at 11:15:24AM -0400, Robert Haas wrote:
 On Tue, Jun 25, 2013 at 11:06 AM, Bruce Momjian br...@momjian.us wrote:
  Not really.  Constraint exclusion won't kick in for a constraint like
  CHECK (hashme(a) % 16 == 3) and a WHERE clause of the form  a = 42.
 
  Uh, I thought we checked the constant against every CHECK constraint and
  only scanned partitions that matched.  Why does this not work?

 That's a pretty fuzzy description of what we do.  For this to work,
 we'd have to be able to use the predicate a = 42 to prove that
 hashme(a) % 16 = 3 is false.  But we can't actually substitute 42 in
 for a and then evaluate hashme(42) % 16  = 3, because we don't know
 that the a = 42 in the WHERE clause means exact equality for all
 purposes, only that it means has the numerically same value.  For
 integers, equality under = is sufficient to prove equivalence.

 But for numeric values, for example, it is not.  The values
 '42'::numeric and '42.0'::numeric are equal according to =(numeric,
 numeric), but they are not the same.  If the hashme() function did
 something like length($1::text), it would get different answers for
 those two values.  IOW, the theorem prover has no way of knowing that
 the hash function provided has semantics that are compatible with the
 opclass of the operator used in the query.

 I looked at predtest.c but I can't see how we accept >= and <= ranges,
 but not CHECK (a % 16 == 3).  Is it the '%' operator?  I am not sure why
 the hashme() function is there.  Wouldn't it work if hashme() was an
 immutable function?

Let me back up a minute.  You told the OP that he could make hash
partitioning by writing his own constraint and trigger functions.  I
think that won't work.  But I'm happy to be proven wrong.  Do you have
an example showing how to do it?

Here's why I think it WON'T work:

rhaas=# create table foo (a int, b text);
CREATE TABLE
rhaas=# create table foo0 (check ((a % 16) = 0)) inherits (foo);
CREATE TABLE
rhaas=# create table foo1 (check ((a % 16) = 1)) inherits (foo);
CREATE TABLE
rhaas=# create table foo2 (check ((a % 16) = 2)) inherits (foo);
CREATE TABLE
rhaas=# create table foo3 (check ((a % 16) = 3)) inherits (foo);
CREATE TABLE
rhaas=# explain select * from foo where a = 1;
 QUERY PLAN
------------------------------------------------------------
 Append  (cost=0.00..101.50 rows=25 width=36)
   -  Seq Scan on foo  (cost=0.00..0.00 rows=1 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo0  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo1  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo2  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo3  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
(11 rows)

Notice we get a scan on every partition.  Now let's try it with no
modulo arithmetic, just a straightforward one-partition-per-value:

rhaas=# create table foo (a int, b text);
CREATE TABLE
rhaas=# create table foo0 (check (a = 0)) inherits (foo);
CREATE TABLE
rhaas=# create table foo1 (check (a = 1)) inherits (foo);
CREATE TABLE
rhaas=# create table foo2 (check (a = 2)) inherits (foo);
CREATE TABLE
rhaas=# create table foo3 (check (a = 3)) inherits (foo);
CREATE TABLE
rhaas=# explain select * from foo where a = 1;
 QUERY PLAN
------------------------------------------------------------
 Append  (cost=0.00..25.38 rows=7 width=36)
   -  Seq Scan on foo  (cost=0.00..0.00 rows=1 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo1  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
(5 rows)

Voila, now constraint exclusion is working.

I confess that I'm not entirely clear about the details either, but
the above tests speak for themselves.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench --startup option

2013-06-25 Thread Robert Haas
On Thu, Jun 20, 2013 at 1:46 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 I've fixed a conflict, and I've removed extraneous semicolons from the C.

 I've left in the fixing of some existing bad indenting in the existing code,
 which is not strictly related to my change.

OK, I like this idea a lot, but I have a question.  Right now, to use
this, you have to supply the startup SQL on the command line.  And
that could definitely be useful.  But ISTM that you might also want to
take the startup SQL from a file, and indeed you might well want to
include metacommands in there.  Maybe that's getting greedy, but the
rate at which people are adding features to pgbench suggests to me
that it won't be long before this isn't enough.

Thoughts?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-06-25 Thread Fujii Masao
On Tue, Jun 25, 2013 at 8:15 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
 Patch updated according to comments.

Thanks for updating the patch!

When I ran VACUUM FULL, I got the following error.

ERROR:  attempt to apply a mapping to unmapped relation 16404
STATEMENT:  vacuum full;

Could you clarify why toast_save_datum needs to update even an invalid toast
index? Is it required only for REINDEX CONCURRENTLY?

@@ -1573,7 +1648,7 @@ toastid_valueid_exists(Oid toastrelid, Oid valueid)

toastrel = heap_open(toastrelid, AccessShareLock);

-   result = toastrel_valueid_exists(toastrel, valueid);
+   result = toastrel_valueid_exists(toastrel, valueid, AccessShareLock);

toastid_valueid_exists() is used only in toast_save_datum(). So we should use
RowExclusiveLock here rather than AccessShareLock?

+ * toast_open_indexes
+ *
+ * Get an array of index relations associated to the given toast relation
+ * and return as well the position of the valid index used by the toast
+ * relation in this array. It is the responsability of the caller of this

Typo: responsibility

toast_open_indexes(Relation toastrel,
+  LOCKMODE lock,
+  Relation **toastidxs,
+  int *num_indexes)
+{
+   int i = 0;
+   int res = 0;
+   bool    found = false;
+   List   *indexlist;
+   ListCell   *lc;
+
+   /* Get index list of relation */
+   indexlist = RelationGetIndexList(toastrel);

What about adding the assertion which checks that the return value
of RelationGetIndexList() is not NIL?

When I ran pg_upgrade for the upgrade from 9.2 to HEAD (with patch),
I got the following error. Without the patch, that succeeded.

command: /dav/reindex/bin/pg_dump --host /dav/reindex --port 50432
--username postgres --schema-only --quote-all-identifiers
--binary-upgrade --format=custom
--file=pg_upgrade_dump_12270.custom postgres >>
pg_upgrade_dump_12270.log 2>&1
pg_dump: query returned 0 rows instead of one: SELECT c.reltoastrelid,
t.indexrelid FROM pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_index
t ON (c.reltoastrelid = t.indrelid) WHERE c.oid =
'16390'::pg_catalog.oid AND t.indisvalid;

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] refresh materialized view concurrently

2013-06-25 Thread Robert Haas
On Fri, Jun 21, 2013 at 5:20 AM, Hitoshi Harada umi.tan...@gmail.com wrote:
 If I don't miss something, the requirement for the CONCURRENTLY option is to
 allow simple SELECT reader to read the matview concurrently while the view
 is populating the new data, and INSERT/UPDATE/DELETE and SELECT FOR
 UPDATE/SHARE are still blocked.  So, I wonder why it is not possible just to
 acquire ExclusiveLock on the matview while populating the data and swap the
 relfile by taking small AccessExclusiveLock.  This lock escalation is no
 dead lock hazard, I suppose, because concurrent operation would block the
 other at the point ExclusiveLock is acquired, and ExclusiveLock conflicts
 AccessExclusiveLock.  Then you don't need the complicated SPI logic or
 unique key index dependency.

This is no good.  One, all lock upgrades are deadlock hazards.  In
this case, that plays out as follows: suppose that the session running
REFRESH MATERIALIZED VIEW CONCURRENTLY also holds a lock on something
else.  Some other process takes an AccessShareLock on the materialized
view and then tries to take a conflicting lock on the other object.
Kaboom, deadlock.  Granted, the chances of that happening in practice
are small, but it IS the reason why we typically try to avoid having
long-running operations perform lock upgrades.  Users get really
annoyed when their DDL runs for an hour and then rolls back.
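
A hedged two-session sketch of that deadlock under the proposed lock
upgrade (object names are made up):

-- S1: refresh materialized view concurrently mv;      -- ExclusiveLock on mv,
--     and assume S1 also holds some lock on other_tab
-- S2: select * from mv;                               -- AccessShareLock on mv
-- S2: lock table other_tab in access exclusive mode;  -- blocks behind S1
-- S1: relfile swap needs AccessExclusiveLock on mv    -- blocks behind S2:
--     deadlock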

Two, until we get MVCC catalog scans, it's not safe to update any
system catalog tuple without an AccessExclusiveLock on some locktag
that will prevent concurrent catalog scans for that tuple.  Under
SnapshotNow semantics, concurrent readers can fail to see that the
object is present at all, leading to mysterious failures - especially
if some of the object's catalog scans are seen and others are missed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 I looked at predtest.c but I can't see how we accept >= and <= ranges,
 but not CHECK (a % 16 == 3).  Is it the '%' operator?  I am not sure why
 the hashme() function is there.  Wouldn't it work if hashme() was an
 immutable function?

No.  Robert's description is exactly correct: it's a question of whether
we can know that the semantics of function X have anything to do with
the behavior of operator Y.  In the case of something like CHECK (X <= 16)
combined with WHERE X = 10, if the given <= and = operators belong to
the same btree opclass family then we can assume that their semantics
are compatible and then apply reasoning to show that these two clauses
can't both be true for the same value of X.  We can *not* use X = 10
to reason about the behavior of anything that isn't in the = operator's
btree opclass, because we don't assume that = means absolutely
identical for every purpose.  And in fact it does not mean that for
several pretty common datatypes (float being another example besides
numeric).
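
The operator families Tom describes are visible from SQL; a hedged sketch
(the catalog names are stable, exact output varies by version):

select amopopr::regoperator as op, amopstrategy
from pg_amop
where amopfamily = (select opf.oid
                    from pg_opfamily opf
                    join pg_am am on am.oid = opf.opfmethod
                    where am.amname = 'btree' and opf.opfname = 'integer_ops')
order by amoplefttype, amoprighttype, amopstrategy;
-- lists <, <=, =, >=, > across int2/int4/int8 in one family, which is what
-- licenses the prover to combine them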

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Yuri Levinsky
Guys,
I am sorry for taking your time. The reason for my question is:
As a former Oracle DBA and now simple beginner PostgreSQL DBA I would like
to say: the current partitioning mechanism might be improved. Sorry, it
seems to me far behind yesterday's requirements. As a model for improvement,
Oracle might be taken as an example. Unfortunately I am not writing
C code and see my benefit to the PostgreSQL community only in raising this
issue. I'll be very happy to be helpful in something else, but...

Sincerely yours,


Yuri Levinsky, DBA
Celltick Technologies Ltd., 32 Maskit St., Herzliya 46733, Israel
Mobile: +972 54 6107703, Office: +972 9 9710239; Fax: +972 9 9710222


-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com] 
Sent: Tuesday, June 25, 2013 6:55 PM
To: Bruce Momjian
Cc: Yuri Levinsky; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Hash partitioning.

On Tue, Jun 25, 2013 at 11:45 AM, Bruce Momjian br...@momjian.us
wrote:
 On Tue, Jun 25, 2013 at 11:15:24AM -0400, Robert Haas wrote:
 On Tue, Jun 25, 2013 at 11:06 AM, Bruce Momjian br...@momjian.us
wrote:
  Not really.  Constraint exclusion won't kick in for a constraint 
  like CHECK (hashme(a) % 16 == 3) and a WHERE clause of the form  a
= 42.
 
  Uh, I thought we checked the constant against every CHECK 
  constraint and only scanned partitions that matched.  Why does this
not work?

 That's a pretty fuzzy description of what we do.  For this to work, 
 we'd have to be able to use the predicate a = 42 to prove that
 hashme(a) % 16 = 3 is false.  But we can't actually substitute 42 in 
 for a and then evaluate hashme(42) % 16  = 3, because we don't know 
 that the a = 42 in the WHERE clause means exact equality for all 
 purposes, only that it means has the numerically same value.  For 
 integers, equality under = is sufficient to prove equivalence.

 But for numeric values, for example, it is not.  The values 
 '42'::numeric and '42.0'::numeric are equal according to =(numeric, 
 numeric), but they are not the same.  If the hashme() function did 
 something like length($1::text), it would get different answers for 
 those two values.  IOW, the theorem prover has no way of knowing that

 the hash function provided has semantics that are compatible with the

 opclass of the operator used in the query.

 I looked at predtest.c but I can't see how we accept >= and <= ranges,
 but not CHECK (a % 16 == 3).  Is it the '%' operator?  I am not sure 
 why the hashme() function is there.  Wouldn't it work if hashme() was 
 an immutable function?

Let me back up a minute.  You told the OP that he could make hash
partitioning by writing his own constraint and trigger functions.  I
think that won't work.  But I'm happy to be proven wrong.  Do you have
an example showing how to do it?

Here's why I think it WON'T work:

rhaas=# create table foo (a int, b text); CREATE TABLE rhaas=# create
table foo0 (check ((a % 16) = 0)) inherits (foo); CREATE TABLE rhaas=#
create table foo1 (check ((a % 16) = 1)) inherits (foo); CREATE TABLE
rhaas=# create table foo2 (check ((a % 16) = 2)) inherits (foo); CREATE
TABLE rhaas=# create table foo3 (check ((a % 16) = 3)) inherits (foo);
CREATE TABLE rhaas=# explain select * from foo where a = 1;
 QUERY PLAN
------------------------------------------------------------
 Append  (cost=0.00..101.50 rows=25 width=36)
   -  Seq Scan on foo  (cost=0.00..0.00 rows=1 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo0  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo1  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo2  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo3  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
(11 rows)

Notice we get a scan on every partition.  Now let's try it with no
modulo arithmetic, just a straightforward one-partition-per-value:

rhaas=# create table foo (a int, b text); CREATE TABLE rhaas=# create
table foo0 (check (a = 0)) inherits (foo); CREATE TABLE rhaas=# create
table foo1 (check (a = 1)) inherits (foo); CREATE TABLE rhaas=# create
table foo2 (check (a = 2)) inherits (foo); CREATE TABLE rhaas=# create
table foo3 (check (a = 3)) inherits (foo); CREATE TABLE rhaas=# explain
select * from foo where a = 1;
 QUERY PLAN
------------------------------------------------------------
 Append  (cost=0.00..25.38 rows=7 width=36)
   -  Seq Scan on foo  (cost=0.00..0.00 rows=1 width=36)
 Filter: (a = 1)
   -  Seq Scan on foo1  (cost=0.00..25.38 rows=6 width=36)
 Filter: (a = 1)
(5 rows)

Voila, now constraint exclusion is working.

I confess that I'm not entirely clear about the details either, but the
above tests speak for themselves.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL
Company





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] [PATCH] add long options to pgbench (submission 1)

2013-06-25 Thread Robert Haas
On Thu, Jun 20, 2013 at 9:17 AM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:
 On Thu, Jun 20, 2013 at 9:59 AM, Fabien COELHO coe...@cri.ensmp.fr wrote:
 Please fix that and re-send the patch.
 Find attached diff wrt current master.
 Thanks.

I would like to solicit opinions on whether this is a good idea.  I
understand that the patch author thinks it's a good idea, and I don't
have a strong position either way.  But I want to hear what other
people think.

Anyone have an opinion?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] add long options to pgbench (submission 1)

2013-06-25 Thread Andres Freund
On 2013-06-25 12:11:06 -0400, Robert Haas wrote:
 On Thu, Jun 20, 2013 at 9:17 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
  On Thu, Jun 20, 2013 at 9:59 AM, Fabien COELHO coe...@cri.ensmp.fr wrote:
  Please fix that and re-send the patch.
  Find attached diff wrt current master.
  Thanks.
 
 I would like to solicit opinions on whether this is a good idea.  I
 understand that the patch author thinks it's a good idea, and I don't
 have a strong position either way.  But I want to hear what other
 people think.
 
 Anyone have an opinion?

+1. When writing scripts I like to use the long options because that
makes it easier to understand some time later. I don't really see a
reason not to do this.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] isolationtester and 'specs' subdirectory

2013-06-25 Thread Robert Haas
On Thu, Jun 20, 2013 at 7:10 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 I eventually tracked down the cause of this failure to a trailing ':'
 in my $LIBRARY_PATH, which causes gcc to look inside the current
 directory for a 'specs' file [1] among other things. Although I
 probably don't need that trailing ':', it seems like we should avoid
 naming this directory 'specs' nonetheless to avoid confusion with gcc.

 Renaming the 'specs' directory to something like 'isolation_specs' and
 adjusting isolation_main.c accordingly lets me pass `make
 check-world`. Proposed patch attached.

This seems like pretty stupid behavior on the part of gcc.  And, we're
generally reluctant to rename things too blithely because it
complicates back-patching.  But on the flip side, back-patching
changes to the isolation specs is probably a rare event.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] add long options to pgbench (submission 1)

2013-06-25 Thread Alvaro Herrera
Robert Haas escribió:
 On Thu, Jun 20, 2013 at 9:17 AM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
  On Thu, Jun 20, 2013 at 9:59 AM, Fabien COELHO coe...@cri.ensmp.fr wrote:
  Please fix that and re-send the patch.
  Find attached diff wrt current master.
  Thanks.
 
 I would like to solicit opinions on whether this is a good idea.  I
 understand that the patch author thinks it's a good idea, and I don't
 have a strong position either way.  But I want to hear what other
 people think.

+1 from me on the general idea.  Didn't check the actual patch.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] add long options to pgbench (submission 1)

2013-06-25 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I would like to solicit opinions on whether this is a good idea.  I
 understand that the patch author thinks it's a good idea, and I don't
 have a strong position either way.  But I want to hear what other
 people think.

If it makes pgbench more consistent with psql's command line options,
it seems reasonable to me.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Bruce Momjian
On Tue, Jun 25, 2013 at 12:08:34PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  I looked at predtest.c but I can't see how we accept >= and <= ranges,
  but not CHECK (a % 16 == 3).  Is it the '%' operator?  I am not sure why
  the hashme() function is there.  Wouldn't it work if hashme() was an
  immutable function?
 
 No.  Robert's description is exactly correct: it's a question of whether
 we can know that the semantics of function X have anything to do with
 the behavior of operator Y.  In the case of something like CHECK (X <= 16)
 combined with WHERE X = 10, if the given <= and = operators belong to
 the same btree opclass family then we can assume that their semantics
 are compatible and then apply reasoning to show that these two clauses
 can't both be true for the same value of X.  We can *not* use X = 10
 to reason about the behavior of anything that isn't in the = operator's
 btree opclass, because we don't assume that = means absolutely
 identical for every purpose.  And in fact it does not mean that for
 several pretty common datatypes (float being another example besides
 numeric).

OK, so it is really the index comparisons that we are using;  makes
sense.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pluggable compression support

2013-06-25 Thread Robert Haas
On Thu, Jun 20, 2013 at 8:09 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-06-15 12:20:28 +0200, Andres Freund wrote:
 On 2013-06-14 21:56:52 -0400, Robert Haas wrote:
  I don't think we need it.  I think what we need to decide is which
  algorithm is legally OK to use.  And then put it in.
 
  In the past, we've had a great deal of speculation about that legal
  question from people who are not lawyers.  Maybe it would be valuable
  to get some opinions from people who ARE lawyers.  Tom and Heikki both
  work for real big companies which, I'm guessing, have substantial
  legal departments; perhaps they could pursue getting the algorithms of
  possible interest vetted.  Or, I could try to find out whether it's
  possible do something similar through EnterpriseDB.

 I personally don't think the legal argument holds all that much water
 for snappy and lz4. But then the opinion of a european non-lawyer doesn't
 hold much either.
 Both are widely used by a large number open and closed projects, some of
 which have patent grant clauses in their licenses. E.g. hadoop,
 cassandra use lz4, and I'd be surprised if the companies behind those
 have opened themselves to litigation.

 I think we should preliminarily decide which algorithm to use before we
 get lawyers involved. I'd surprised if they can make such a analysis
 faster than we can rule out one of them via benchmarks.

 Will post an updated patch that includes lz4 as well.

 Attached.

 Changes:
 * add lz4 compression algorithm (2 clause bsd)
 * move compression algorithms into own subdirectory
 * clean up compression/decompression functions
 * allow 258 compression algorithms, uses 1 byte extra for any but the
   first three
 * don't pass a varlena to pg_lzcompress.c anymore, but data directly
 * add pglz_long as a test fourth compression method that uses the +1
   byte encoding
 * use postgres' endian detection in snappy for compatibility with osx

 Based on the benchmarks I think we should go with lz4 only for now. The
 patch provides the infrastructure should somebody else want to add more
 or even proper configurability.

 Todo:
 * windows build support
 * remove toast_compression_algo guc
 * remove either snappy or lz4 support
 * remove pglz_long support (just there for testing)

 New benchmarks:

 Table size:
   List of relations
  Schema |Name| Type  | Owner  |  Size  | Description
 ++---+++-
  public | messages_pglz  | table | andres | 526 MB |
  public | messages_snappy| table | andres | 523 MB |
  public | messages_lz4   | table | andres | 522 MB |
  public | messages_pglz_long | table | andres | 527 MB |
 (4 rows)

 Workstation (2xE5520, enough s_b for everything):

 Data load:
 pglz:   36643.384 ms
 snappy: 24626.894 ms
 lz4:23871.421 ms
 pglz_long:  37097.681 ms

 COPY messages_* TO '/dev/null' WITH BINARY;
 pglz:   3116.083 ms
 snappy: 2524.388 ms
 lz4:2349.396 ms
 pglz_long:  3104.134 ms

 COPY (SELECT rawtxt FROM messages_*) TO '/dev/null' WITH BINARY;
 pglz:   1609.969 ms
 snappy: 1031.696 ms
 lz4: 886.782 ms
 pglz_long:  1606.803 ms


 On my elderly laptop (core 2 duo), too little s_b to hold everything:

 Data load:
 pglz:   39968.381 ms
 snappy: 26952.330 ms
 lz4:29225.472 ms
 pglz_long:  39929.568 ms

 COPY messages_* TO '/dev/null' WITH BINARY;
 pglz:   3920.588 ms
 snappy: 3421.938 ms
 lz4:3311.540 ms
 pglz_long:  3885.920 ms

 COPY (SELECT rawtxt FROM messages_*) TO '/dev/null' WITH BINARY;
 pglz:   2238.145 ms
 snappy: 1753.403 ms
 lz4:1638.092 ms
 pglz_long:  2227.804 ms

Well, the performance of both snappy and lz4 seems to be significantly
better than pglz.  On these tests lz4 has a small edge but that might
not be true on other data sets.  I still think the main issue is legal
review: are there any license or patent concerns about including
either of these algorithms in PG?  If neither of them have issues, we
might need to experiment a little more before picking between them.
If one does and the other does not, well, then it's a short
conversation.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] add long options to pgbench (submission 1)

2013-06-25 Thread Robert Haas
On Tue, Jun 25, 2013 at 12:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I would like to solicit opinions on whether this is a good idea.  I
 understand that the patch author thinks it's a good idea, and I don't
 have a strong position either way.  But I want to hear what other
 people think.

 If it makes pgbench more consistent with psql's command line options,
 it seems reasonable to me.

OK, I think it does that.  So that's three votes in favor and none
opposed.  I think I'd like to quibble with some of the names a bit,
though.  The patch adds --fill-factor, but I think we should spell it
without the hyphen: --fillfactor.  I think --quiet-log should be
spelled --quiet.  I think --connect for each connection is not very
descriptive; maybe --connection-per-transaction or something, although
that's kind of long.  I think -M should be aliased to --protocol, not
--query-mode.  --skip-some-update is incorrectly pluralized; if that's
what we're going to use, it should be --skip-some-updates.
Alternatively, we could use --update-large-tables-only, which might
make the intent more clear.

On another note, it doesn't look like this updates the output of
pgbench --help, which seems important.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_filedump 9.3: checksums (and a few other fixes)

2013-06-25 Thread Jeff Davis
On Mon, 2013-06-24 at 20:34 -0400, Josh Kupershmidt wrote:
 This patch is in the current CommitFest, does it still need to be
 reviewed? If so, I notice that the version in pgfoundry's CVS is
 rather different than the version the patch seems to have been built
 against (presumably the pg_filedump-9.2.0.tar.gz release), and
 conflicts in several places with cvs tip.

Oh, thank you. After browsing the CVS repo failed, I just made the diff
against 9.2.0. I'll rebase it.

Regards,
Jeff Davis




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-06-25 Thread Heikki Linnakangas

On 21.06.2013 11:29, KONDO Mitsumasa wrote:

I took results of my separate patches and original PG.

* Result of DBT-2
              | TPS     | 90%tile   | Average | Maximum
--------------+---------+-----------+---------+-----------
original_0.7  | 3474.62 | 18.348328 | 5.739   | 36.977713
original_1.0  | 3469.03 | 18.637865 | 5.842   | 41.754421
fsync         | 3525.03 | 13.872711 | 5.382   | 28.062947
write         | 3465.96 | 19.653667 | 5.804   | 40.664066
fsync + write | 3564.94 | 16.31922  | 5.1     | 34.530766

- 'original_*' indicates the checkpoint_completion_target value in PG 9.2.4.
- In the other patched postgres runs, checkpoint_completion_target is set to 0.7.
- 'write' has the write patch applied, and 'fsync' has the fsync patch applied.
- 'fsync + write' has both patches applied.


* Investigation of result
- A large value of checkpoint_completion_target, in original PG and in the
write patch, leads to slow latency in benchmark transactions, because slowly
written pages cause long fsync IO in the final checkpoint stage.
- The fsync patch affects the latency of each file fsync. Consecutive
fsyncs on each file cause slow latency. Therefore, it is good for
latency if the fsync stage of the checkpoint sleeps after a slow
fsync IO.
- The fsync + write patches together seemed to improve TPS. I think the
write patch disturbs transactions doing full-page-write WAL writes less
than original (plain) PG.


Hmm, so the write patch doesn't do much, but the fsync patch makes the 
response times somewhat smoother. I'd suggest that we drop the write 
patch for now, and focus on the fsyncs.


What checkpointer_fsync_delay_ratio and 
checkpointer_fsync_delay_threshold settings did you use with the fsync 
patch? It's disabled by default.


This is the interesting part of the patch:


@@ -1171,6 +1174,20 @@ mdsync(void)
 							FilePathName(seg->mdfd_vfd),
 							(double) elapsed / 1000);
 
+				/*
+				 * If this fsync has long time, we sleep 'fsync-time *
+				 * checkpoint_fsync_delay_ratio' for giving priority to
+				 * executing transaction.
+				 */
+				if (CheckPointerFsyncDelayThreshold >= 0 &&
+					!shutdown_requested &&
+					!ImmediateCheckpointRequested() &&
+					(elapsed / 1000 > CheckPointerFsyncDelayThreshold))
+				{
+					pg_usleep((elapsed / 1000) * CheckPointerFsyncDelayRatio * 1000L);
+					if (log_checkpoints)
+						elog(DEBUG1, "checkpoint sync sleep: time=%.3f msec",
+							 (double) (elapsed / 1000) * CheckPointerFsyncDelayRatio);
+				}
 				break;			/* out of retry loop */
 			}


I'm not sure it's a good idea to sleep proportionally to the time it 
took to complete the previous fsync. If you have a 1GB cache in the RAID 
controller, fsyncing a 1GB segment will fill it up. But since it 
fits in cache, it will return immediately. So we proceed fsyncing other 
files, until the cache is full and the fsync blocks. But once we fill up 
the cache, it's likely that we're hurting concurrent queries. ISTM it 
would be better to stay under that threshold, keeping the I/O system 
busy, but never fill up the cache completely.


This is just a theory, though. I don't have a good grasp on how the OS 
and a typical RAID controller behaves under these conditions.


I'd suggest that we just sleep for a small fixed amount of time between 
every fsync, unless we're running behind the checkpoint schedule. And 
for a first approximation, let's just assume that the fsync phase is e.g 
10% of the whole checkpoint work.



I will send you a more detailed investigation and results next week. And I
will also take results with pgbench. If you are interested in other benchmark
results or postgres parameters, please tell me.


Attached is a quick patch to implement a fixed, 100ms delay between 
fsyncs, and the assumption that fsync phase is 10% of the total 
checkpoint duration. I suspect 100ms is too small to have much effect, 
but that happens to be what we have currently in CheckpointWriteDelay(). 
Could you test this patch along with yours? If you can, test with 
different delays (e.g. 100ms, 500ms and 1000ms) and different ratios 
between the write and fsync phase (e.g. 0.5, 0.7, 0.9), to get an idea of 
how sensitive the test case is to those settings.


- Heikki
diff --git 

[HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Josh Berkus
Hackers,

I'd like to take a straw poll here on how we should acknowledge
reviewers.  Please answer the below with your thoughts, either on-list
or via private email.

How should reviewers get credited in the release notes?

a) not at all
b) in a single block titled Reviewers for this version at the bottom.
c) on the patch they reviewed, for each patch

Should there be a criteria for a creditable review?

a) no, all reviews are worthwhile
b) yes, they have to do more than "it compiles"
c) yes, only code reviews should count

Should reviewers for 9.4 get a prize, such as a t-shirt, as a
promotion to increase the number of non-submitter reviewers?

a) yes
b) no
c) yes, but submitters and committers should get it too

Thanks for your feedback!

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Clean switchover

2013-06-25 Thread Fujii Masao
On Mon, Jun 24, 2013 at 3:41 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-06-14 04:56:15 +0900, Fujii Masao wrote:
 On Wed, Jun 12, 2013 at 9:48 PM, Stephen Frost sfr...@snowman.net wrote:
  * Magnus Hagander (mag...@hagander.net) wrote:
  On Wed, Jun 12, 2013 at 1:48 PM, Andres Freund and...@2ndquadrant.com 
  wrote:
   On 2013-06-12 07:53:29 +0900, Fujii Masao wrote:
   The attached patch fixes this problem. It just changes walsender so 
   that it
   waits for all the outstanding WAL records to be replicated to the 
   standby
   before closing the replication connection.
  
   Imo this is a fix that needs to get backpatched... The code tried to do
   this but failed, I don't think it really gives grounds for valid *new*
   concerns.
 
  +1 (without having looked at the code itself, it's definitely a
  behaviour that needs to be fixed)
 
  Yea, I was also thinking it would be reasonable to backpatch this; it
  really looks like a bug that we're allowing this to happen today.
 
  So, +1 on a backpatch for me.

 +1. I think that we can backpatch to 9.1, 9.2 and 9.3.

 I marked the patch as ready for committer.

Committed. Thanks a lot!

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Heikki Linnakangas

On 25.06.2013 20:17, Josh Berkus wrote:

Hackers,

I'd like to take a straw poll here on how we should acknowledge
reviewers.  Please answer the below with your thoughts, either on-list
or via private email.

How should reviewers get credited in the release notes?

a) not at all
b) in a single block titled Reviewers for this version at the bottom.
c) on the patch they reviewed, for each patch


a)

Sometimes a reviewer contributes greatly to the patch, revising it and 
rewriting parts of it. At that point, it's not just a review anymore, 
and he/she should be mentioned in the release notes as a co-author.



Should there be a criteria for a creditable review?

a) no, all reviews are worthwhile
b) yes, they have to do more than it compiles
c) yes, only code reviews should count


This is one reason why I answered a) above. I don't want to set a criteria.


Should reviewers for 9.4 get a prize, such as a t-shirt, as a
promotion to increase the number of non-submitter reviewers?

a) yes
b) no
c) yes, but submitters and committers should get it too


a).

I don't think we should make any promises, though. Just arbitrarily send 
a t-shirt when you feel that someone has done a good job reviewing other 
people's patches. And I'm not sure it's really worth the trouble, to 
arrange the logistics etc.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Erik Rijkers
On Tue, June 25, 2013 19:17, Josh Berkus wrote:

 How should reviewers get credited in the release notes?
b) in a single block titled Reviewers for this version at the bottom.


  Should there be a criteria for a creditable review?
b) yes, they have to do more than "it compiles"


 Should reviewers for 9.4 get a prize, such as a t-shirt, as a
 promotion to increase the number of non-submitter reviewers?
b) no



Erik Rijkers



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] request a new feature in fuzzystrmatch

2013-06-25 Thread Liming Hu
On Mon, Jun 24, 2013 at 6:02 PM, Joe Conway m...@joeconway.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 06/14/2013 12:08 PM, Liming Hu wrote:
 I have implemented the code according to Joe's
 suggestion, and put the code at:
 https://github.com/liminghu/fuzzystrmatch/tree/fuzzystrmatchv1.1



 Please submit a proper patch so it can be seen on our mailing list
 archives.

 Hi Alvaro,

 I am kind of new to the PostgreSQL hacker community. Can
 you please help me submit the patch?

 I have not done much in the way of real review yet, but here at least
 is a context diff against git master.

 Joe

Hi Joe,

Thanks a lot.

please remove dameraulevenshteinnocompatible related stuff, I just
followed the template you created.
dameraulevenshteinnocompatible was used in my first testing.

diff -cNr /opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--1.0.sql
fuzzystrmatch/fuzzystrmatch--1.0.sql
*** /opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--1.0.sql
2012-02-25 20:24:23.0 -0600
--- fuzzystrmatch/fuzzystrmatch--1.0.sql 2013-06-11 04:09:01.0 -0500
CREATE FUNCTION dameraulevenshteinnocompatible
(text,text,int,int,int,int) RETURNS int
+ AS 'MODULE_PATHNAME','dameraulevenshtein_with_costs_noncompatible'
+ LANGUAGE C IMMUTABLE STRICT;


please remove dameraulevenshteinnocompatible related stuff, I just
followed the template you created.
dameraulevenshteinnocompatible was used in my first testing.

diff -cNr 
/opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--unpackaged--1.0.sql
fuzzystrmatch/fuzzystrmatch--unpackaged--1.0.sql
*** 
/opt/src/pgsql-git/master/contrib/fuzzystrmatch/fuzzystrmatch--unpackaged--1.0.sql
2012-02-25 20:24:23.0 -0600
--- fuzzystrmatch/fuzzystrmatch--unpackaged--1.0.sql 2013-06-11
04:09:01.0 -0500

+ ALTER EXTENSION fuzzystrmatch ADD function
dameraulevenshteinnoncompatible(text,text,integer,integer,integer,integer);

+CREATE FUNCTION dameraulevenshtein_less_equal_noncompatible
(text,text,int,int,int,int,int) RETURNS int
+ AS 'MODULE_PATHNAME','dameraulevenshtein_less_equal_with_costs_noncompatible'
+ LANGUAGE C IMMUTABLE STRICT;


I updated:

https://github.com/liminghu/fuzzystrmatch/blob/fuzzystrmatchv1.1/fuzzystrmatch--1.0.sql

https://github.com/liminghu/fuzzystrmatch/blob/fuzzystrmatchv1.1/fuzzystrmatch--unpackaged--1.0.sql

correspondingly.

Sorry about that.

Thanks and nice day.

Liming



 - --
 Joe Conway
 credativ LLC: http://www.credativ.us
 Linux, PostgreSQL, and general Open Source
 Training, Service, Consulting, & 24x7 Support
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with undefined - http://www.enigmail.net/

 iQIcBAEBAgAGBQJRyOw2AAoJEDfy90M199hlFhEP/2o/d08Fq4CHA9iI05PxDwQM
 AHF6PWS5gJToNLtiTevyEamWyzlULjX3yJ9gTAhxc+ltxE9Xuf3/bfMcJQSbJ5c9
 6GSH7CWpaY1DpfAwvYiwTJ9EPBZ11u8VZaMrb1VU2DS8rioOOL3llzpk+D6/1no5
 y327vlH1aOuqk3Hwu0c/WN8RAcf1HLbhafph2KruUfr/ba3uILBQZtzpyK4uCDew
 OJA+cktXHv3ho7w4N//xVQs3sZ/EoLahOt/y4syd3dN+Cq/8kj/uJaVgT2QY8rDQ
 HIBpYvm+b3DsYpjw2qrSVBriIupF2a0UPkLfC5cu3om8VvBilShydE6uTI+f+55p
 a12NuzepwgDnpZ1jOZkkmnBslpfoZI9TEo1UDeZ8x/TSZDW72FhwRtWq9kO9CFIK
 cJvG9B9E5zhyZx9V1C2S0qpvIJAj/Gns4ymvYU1lm5UUnpPSpTQoUK8ybrfnHoTE
 B0DEVjqxbrk9SSJ5LI3KojAaSMUIQDcjMQFCsghR1s5/yRhpZ7xEPvcL63ARwDcv
 ZeeL6r5G+nmKAfRAjGbmWi/Y1ANI0ucO5XxnfhSDb+TCQow4U6IgNvkYjN1hTNEI
 //9mQHDHi5qsuGcRcgvFLLeR4eVSGewseYLOR9XELMYTam4IUClwPr6WHuMqE/aE
 jvMZafqZ/1EhQ2SeqCo4
 =wcGM
 -END PGP SIGNATURE-



--
Liming Hu
cell: (435)-512-4190
Seattle Washington


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Patch to compute Max LSN of Data Pages

2013-06-25 Thread Andres Freund
Hi,

On 2013-06-16 17:19:49 -0700, Josh Berkus wrote:
 Amit posted a new version of this patch on January 23rd.  But last
 comment on it by Tom is not sure everyone wants this.
 
 https://commitfest.postgresql.org/action/patch_view?id=905

 ... so, what's the status of this patch?

That comment was referencing a mail of mine - so perhaps I better
explain:

I think the use case for this utility isn't big enough to be included in
postgres since it really can only help in very limited
circumstances. And I think it's too likely to be misused for stuff it's
not usable for (e.g. remastering).

The only scenario I see is that somebody deleted/corrupted
pg_controldata. In that scenario the tool is supposed to be used to find
the biggest lsn used so far so the user then can use pg_resetxlog to set
that as the wal starting point.
But that can be solved much more easily by just setting the LSN to
something very, very high. The database cannot be used for anything
reliable afterwards anyway.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Andres Freund
On 2013-06-25 10:17:07 -0700, Josh Berkus wrote:
 How should reviewers get credited in the release notes?
 
 a) not at all
 b) in a single block titled Reviewers for this version at the bottom.
 c) on the patch they reviewed, for each patch

b).

If the review was substantial enough the reviewer gets bumped to a
secondary author, just as it already happens.

 Should there be a criteria for a creditable review?
 
 a) no, all reviews are worthwhile
 b) yes, they have to do more than "it compiles"
 c) yes, only code reviews should count

b). Surely performance reviews should also count, they can be at least
as time consuming as a code review, so c) doesn't seem to make sense.

 Should reviewers for 9.4 get a prize, such as a t-shirt, as a
 promotion to increase the number of non-submitter reviewers?
 
 a) yes
 b) no
 c) yes, but submitters and committers should get it too

Not sure. Seems like it might be a way to spend a lot of effort without
achieving all that much. But I can also imagine that it feels nice and
encourages a casual reviewer/contributor.

So it's either b) or c). Although I'd perhaps exclude regular
contributors to keep the list reasonable?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Josh Berkus
On 06/25/2013 10:46 AM, Andres Freund wrote:
 Not sure. Seems like it might be a way to spend a lot of effort without
 achieving all that much. But I can also imagine that it feels nice and
 encourages a casual reviewer/contributor.
 
 So it's either b) or c). Although I'd perhaps exclude regular
 contributors to keep the list reasonable?

Well, one of the other prizes which occurred to me today would be a
pgCon lottery.  That is, each review posted by a non-committer would go
in a hat, and in February we would draw one who would get a free
registration and airfare to pgCon.

Seems apropos and without the horrible logistics issues of mailing
tshirts to 15 countries.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] INTERVAL overflow detection is terribly broken

2013-06-25 Thread Rok Kralj
So, any insights on these problems?

They might not be critical, but might be silently corrupting someone's data.


2013/6/23 Rok Kralj rok.kr...@gmail.com

 Hi, after studying INTERVAL and having a long chat with RhodiumToad and
 StuckMojo on #postgresql, I am presenting you 3 bugs regarding INTERVAL.

 As far as I understand, the Interval struct (binary internal
 representation) consists of:

 int32 months
 int32 days
 int64 microseconds

 1. OUTPUT ERRORS: Since 2^63 microseconds equals 2,562,047,788 hours,
 which is greater than 2^31, the conversion to pg_tm when displaying the
 value overflows. The value of the Interval struct is actually correct;
 the error happens only on displaying it.

 SELECT INTERVAL '2147483647 hours' + INTERVAL '5 hours'
 -2147483644:00:00

 Even weirder:

 SELECT INTERVAL '2147483647 hours' + '1 hour'
 --2147483648:00:00

 notice the double minus? Don't ask how I came across this two bugs.

 2. OPERATION ERRORS: When summing two intervals, the user is not notified
 when overflow occurs:

 SELECT INT '2147483647' + INT '1'
 ERROR: integer out of range

 SELECT INTERVAL '2147483647 days' + INTERVAL '1 day'
 -2147483648 days

 This should be analogous.

 3. PARSER / INPUT ERRORS:

 This is perhaps the hardest one to explain, since this is an architectural
 flaw. You are checking the overflows when parsing string -> pg_tm struct.
 However, at this point, the parser doesn't know, that weeks and days are
 going to get combined, or years are going to get converted to months, for
 example.

 Unawareness of the underlying Interval struct causes two types of suberrors:

 a) False positive

 SELECT INTERVAL '2147483648 microseconds'
 ERROR:  interval field value out of range: 2147483648 microseconds

 This is not right. Microseconds are internally stored as a 64 bit signed
 integer. The catch is: this amount of microseconds is representable in the
 Interval data structure, so this shouldn't be an error.

 b) False negative

 SELECT INTERVAL '1000000000 years'
 -73741824 years

 We don't catch errors like this, because the parser only checks for overflow
 in pg_tm. If overflow later happens in Interval, we don't seem to care.
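
 The wrapped value is reproducible with plain integer arithmetic (a hedged
 check, assuming the int32 months field above):

 select ((12 * 1000000000::bigint) % 4294967296) - 4294967296 as months_wrapped,
        (((12 * 1000000000::bigint) % 4294967296) - 4294967296) / 12 as shown_years;
 --  months_wrapped | shown_years
 --      -884901888 |   -73741824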

 4. POSSIBLE SOLUTIONS:

 a) Move the overflow checking just after the conversion of pg_tm ->
 Interval is made. This way, you can accurately predict if the result is
 really not storable.

 b) Because of 1), you have to declare tm_hour as int64, if you want to use
 that for the output. But, why not use Interval struct for printing
 directly, without intermediate pg_tm?

 5. Offtopic: Correct the documentation, INTERVAL data size is 16 bytes,
 not 12.

 Rok Kralj

 --
 eMail: rok.kr...@gmail.com




-- 
eMail: rok.kr...@gmail.com


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Joshua D. Drake


On 06/25/2013 10:17 AM, Josh Berkus wrote:


Hackers,

I'd like to take a straw poll here on how we should acknowledge
reviewers.  Please answer the below with your thoughts, either on-list
or via private email.

How should reviewers get credited in the release notes?

a) not at all
b) in a single block titled Reviewers for this version at the bottom.
c) on the patch they reviewed, for each patch


C. The idea that reviewers are somehow less than authors is rather 
disheartening.




Should there be a criteria for a creditable review?

a) no, all reviews are worthwhile
b) yes, they have to do more than "it compiles"
c) yes, only code reviews should count


B. I think "it compiles, and I tested it via X" should be the minimum. 
Here is a case: I was considering taking on a review of the new Gin Cache 
patch. I can't really do a code review, but I can certainly run 
benchmarking tests before/after, apply the patch, etc.




Should reviewers for 9.4 get a prize, such as a t-shirt, as a
promotion to increase the number of non-submitter reviewers?

a) yes
b) no
c) yes, but submitters and committers should get it too

Thanks for your feedback!



B. We already give them a multi-million dollar piece of software for free.

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Alvaro Herrera
Yuri Levinsky escribió:

 As former Oracle DBA and now simple beginner PostgreSQL DBA I would like
 to say: the current partitioning mechanism might be improved. Sorry, it
  seems to me far behind yesterday's requirements.

I don't think you'll find anybody that disagrees with this.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Claudio Freire
On Tue, Jun 25, 2013 at 12:55 PM, Robert Haas robertmh...@gmail.com wrote:
 Let me back up a minute.  You told the OP that he could make hash
 partitioning by writing his own constraint and trigger functions.  I
 think that won't work.  But I'm happy to be proven wrong.  Do you have
 an example showing how to do it?

 Here's why I think it WON'T work:

 rhaas=# create table foo (a int, b text);
 CREATE TABLE
 rhaas=# create table foo0 (check ((a % 16) = 0)) inherits (foo);
 CREATE TABLE
 rhaas=# create table foo1 (check ((a % 16) = 1)) inherits (foo);
 CREATE TABLE
 rhaas=# create table foo2 (check ((a % 16) = 2)) inherits (foo);
 CREATE TABLE
 rhaas=# create table foo3 (check ((a % 16) = 3)) inherits (foo);
 CREATE TABLE
 rhaas=# explain select * from foo where a = 1;
  QUERY PLAN
 ------------------------------------------------------------
  Append  (cost=0.00..101.50 rows=25 width=36)
-  Seq Scan on foo  (cost=0.00..0.00 rows=1 width=36)
  Filter: (a = 1)
-  Seq Scan on foo0  (cost=0.00..25.38 rows=6 width=36)
  Filter: (a = 1)
-  Seq Scan on foo1  (cost=0.00..25.38 rows=6 width=36)
  Filter: (a = 1)
-  Seq Scan on foo2  (cost=0.00..25.38 rows=6 width=36)
  Filter: (a = 1)
-  Seq Scan on foo3  (cost=0.00..25.38 rows=6 width=36)
  Filter: (a = 1)
 (11 rows)

 Notice we get a scan on every partition.  Now let's try it with no
 modulo arithmetic, just a straightforward one-partition-per-value:

 rhaas=# create table foo (a int, b text);
 CREATE TABLE
 rhaas=# create table foo0 (check (a = 0)) inherits (foo);
 CREATE TABLE
 rhaas=# create table foo1 (check (a = 1)) inherits (foo);
 CREATE TABLE
 rhaas=# create table foo2 (check (a = 2)) inherits (foo);
 CREATE TABLE
 rhaas=# create table foo3 (check (a = 3)) inherits (foo);
 CREATE TABLE
 rhaas=# explain select * from foo where a = 1;


Did you try select * from foo where (a % 16) = (1::int % 16)?

For a few views I have that span multiple "partitions" (in quotes since
they're not exactly partitions, but close), I can make constraint
exclusion work if I match the expression EXACTLY, including types
(I've posted a few questions about this to pgsql-performance).
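
For illustration, here is a minimal, untested sketch of that approach,
mirroring the foo example upthread (only partitions 0-3 shown; the
routing trigger and the exact-match qual are illustrative, not from the
original mails):

  create table foo (a int, b text);
  create table foo0 (check ((a % 16) = 0)) inherits (foo);
  create table foo1 (check ((a % 16) = 1)) inherits (foo);
  -- ... further partitions likewise ...

  -- route inserts on the parent to the matching child
  create or replace function foo_insert_trigger() returns trigger as $$
  begin
    if (new.a % 16) = 0 then
      insert into foo0 values (new.*);
    elsif (new.a % 16) = 1 then
      insert into foo1 values (new.*);
    -- ... remaining partitions ...
    end if;
    return null;
  end;
  $$ language plpgsql;

  create trigger foo_insert before insert on foo
    for each row execute procedure foo_insert_trigger();

  -- constraint exclusion only kicks in when the qual repeats the CHECK
  -- expression exactly, so spell out the modulo on the query side too:
  set constraint_exclusion = on;
  explain select * from foo where a = 1 and (a % 16) = (1 % 16);
  -- expected, if the expressions match: only the (empty) parent foo
  -- and foo1 are scanned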


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN improvements part 3: ordering in index

2013-06-25 Thread Alexander Korotkov
On Tue, Jun 25, 2013 at 7:31 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 25.06.2013 01:24, Alexander Korotkov wrote:

 On Wed, Jun 19, 2013 at 1:21 AM, Alexander Korotkov aekorot...@gmail.com
 wrote:

  On Mon, Jun 17, 2013 at 10:27 PM, Heikki Linnakangas
 hlinnakan...@vmware.com  wrote:

  That has some obvious limitations. First of all, you can run out of
  memory.


 Yes, it is so. qsort should be replaced with tuplesort.


 In attached patch qsort is replaced with tuplesort. As expected, it leads
 to some performance drawback, but it's not dramatic.
 Also, some doc is added for new distance method of GIN.


 Thanks. But the fact remains that with this patch, you fetch all the
 tuples and then you sort them, which is exactly the same thing you do
 without the patch. The way it happens without the patch just seems to be
 slower. Time to break out the profiler...


Actually, it's not exactly so. gingettuple doesn't touch any heap tuple. It
gets all the information required to calculate relevance directly from the
index itself. That's possible thanks to the additional information the
index stores, which for FTS is the word positions. So the patch does
something similar, but with a significant difference: the source of data
for ranking changes from heap to index. The information required for
ranking is much better localized in the index than it is in the heap.
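
For reference, the additional information in question is the word
positions that a tsvector (and hence the index) already carries, e.g.:

  select to_tsvector('english', 'statistics and more statistics');
       to_tsvector
  ------------------------
   'more':3 'statist':1,4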


 I downloaded what I believe to be the same DBLP titles dataset that you
 tested with. Without the patch:

 postgres=# explain analyze select * from dblp_titles where tsvector @@
 to_tsquery('english', 'statistics') order by ts_rank(tsvector,
 to_tsquery('english', 'statistics')) limit 10;
                            QUERY PLAN
 ---------------------------------------------------------------------------
  Limit  (cost=42935.81..42935.84 rows=10 width=64) (actual time=57.945..57.948 rows=10 loops=1)
    ->  Sort  (cost=42935.81..42980.86 rows=18020 width=64) (actual time=57.943..57.944 rows=10 loops=1)
          Sort Key: (ts_rank(tsvector, '''statist'''::tsquery))
          Sort Method: top-N heapsort  Memory: 28kB
          ->  Bitmap Heap Scan on dblp_titles  (cost=211.66..42546.41 rows=18020 width=64) (actual time=13.061..46.358 rows=15142 loops=1)
                Recheck Cond: (tsvector @@ '''statist'''::tsquery)
                ->  Bitmap Index Scan on gin_title  (cost=0.00..207.15 rows=18020 width=0) (actual time=8.339..8.339 rows=15142 loops=1)
                      Index Cond: (tsvector @@ '''statist'''::tsquery)
  Total runtime: 57.999 ms
 (9 rows)

 And the profile looks like this:

 6,94%  postgres        tbm_iterate
 6,12%  postgres        hash_search_with_hash_value
 4,40%  postgres        tbm_comparator
 3,79%  libc-2.17.so    __memcpy_ssse3_back
 3,68%  postgres        heap_hot_search_buffer
 2,62%  postgres        slot_deform_tuple
 2,47%  postgres        nocachegetattr
 2,37%  postgres        heap_page_prune_opt
 2,27%  libc-2.17.so    __memcmp_sse4_1
 2,21%  postgres        heap_fill_tuple
 2,18%  postgres        pg_qsort
 1,96%  postgres        tas
 1,88%  postgres        palloc0
 1,83%  postgres        calc_rank_or

 Drilling into that, the tbm_iterate, tbm_comparator and pg_qsort calls come
 from the Tidbitmap code, as well as about 1/3 of the
 hash_search_with_hash_value calls. There seems to be a fair amount of
 overhead in building and iterating the tid bitmap. Is that what's killing
 us?

 For comparison, this is the same with your patch:

 postgres=# explain analyze select * from dblp_titles where tsvector @@
 to_tsquery('english', 'statistics') order by tsvector ><
 to_tsquery('english', 'statistics') limit 10;
                            QUERY PLAN
 ---------------------------------------------------------------------------
  Limit  (cost=16.00..52.81 rows=10 width=136) (actual time=9.957..9.980 rows=10 loops=1)
    ->  Index Scan using gin_title on dblp_titles  (cost=16.00..52198.94 rows=14175 width=136) (actual time=9.955..9.977 rows=10 loops=1)
          Index Cond: (tsvector @@ '''statist'''::tsquery)
          Order By: (tsvector >< '''statist'''::tsquery)
  Total runtime: 10.084 ms
 (5 rows)

 9,57%  postgres        scanGetItemFast
 7,02%  postgres        calc_rank_or
 5,71%  postgres        FunctionCall10Coll
 5,59%  postgres        entryGetNextItem
 5,19%  postgres        keyGetOrdering
 5,13%  postgres        ginDataPageLeafReadItemPointer
 4,89%  postgres        entryShift
 4,85%  postgres        ginCompareItemPointers
 3,44%  postgres        ginDataPageLeafRead
 3,28%  postgres        AllocSetAlloc
 3,27%  postgres        insertScanItem
 3,18%  postgres        gin_tsquery_distance
 2,38%  postgres

Re: [HACKERS] pgbench --startup option

2013-06-25 Thread Fabien COELHO



OK, I like this idea a lot, but I have a question.  Right now, to use
this, you have to supply the startup SQL on the command line.  And
that could definitely be useful.  But ISTM that you might also want to
take the startup SQL from a file, and indeed you might well want to
include metacommands in there.  Maybe that's getting greedy, but the
rate at which people are adding features to pgbench suggests to me
that it won't be long before this isn't enough.

Thoughts?


My 0.02€:

I thought about this point while reviewing the patch, but I did not add it 
to the review because ISTM that it would require a lot of changes, and 
pgbench is already kind of a mess, IMHO. Also, you may well want to run 
the same script under different assumptions to test them, which would 
require the startup string to live outside the script so that it can be 
changed easily. So taking the startup SQL from the script file would not 
be that useful.


--
Fabien.
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Andres Freund
On 2013-06-25 11:04:38 -0700, Joshua D. Drake wrote:
 a) not at all
 b) in a single block titled "Reviewers for this version" at the bottom.
 c) on the patch they reviewed, for each patch
 
 C. The idea that reviewers are somehow less than authors is rather
 disheartening.

It's not about the reviewers being less. It's a comparison of
effort. The effort for a casual review simply isn't comparable with the
effort spent on developing a nontrivial patch.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Andrew Dunstan


On 06/25/2013 01:17 PM, Josh Berkus wrote:

Hackers,

I'd like to take a straw poll here on how we should acknowledge
reviewers.  Please answer the below with your thoughts, either on-list
or via private email.

How should reviewers get credited in the release notes?

a) not at all
b) in a single block titled "Reviewers for this version" at the bottom.
c) on the patch they reviewed, for each patch


b) seems about right.



Should there be a criteria for a creditable review?

a) no, all reviews are worthwhile
b) yes, they have to do more than "it compiles"
c) yes, only code reviews should count



c). Compilation, functionality and performance tests are useful, but 
what we desperately need are in-depth code reviews of large patches. If 
we debase the currency by rewarding things less than that, then any 
incentive effect of kudos in encouraging more reviews is lost.





Should reviewers for 9.4 get a prize, such as a t-shirt, as a
promotion to increase the number of non-submitter reviewers?

a) yes
b) no
c) yes, but submitters and committers should get it too



I'd like to see prizes each release for "best contribution" and "best 
reviewer" - I've thought for years something like this would be worth 
trying. Committers and core members should not be eligible - this is 
about encouraging new people.


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pluggable compression support

2013-06-25 Thread Andres Freund
On 2013-06-25 12:22:31 -0400, Robert Haas wrote:
 On Thu, Jun 20, 2013 at 8:09 PM, Andres Freund and...@2ndquadrant.com wrote:
  On 2013-06-15 12:20:28 +0200, Andres Freund wrote:
  On 2013-06-14 21:56:52 -0400, Robert Haas wrote:
   I don't think we need it.  I think what we need to decide is which
   algorithm is legally OK to use.  And then put it in.
  
   In the past, we've had a great deal of speculation about that legal
   question from people who are not lawyers.  Maybe it would be valuable
   to get some opinions from people who ARE lawyers.  Tom and Heikki both
   work for real big companies which, I'm guessing, have substantial
   legal departments; perhaps they could pursue getting the algorithms of
   possible interest vetted.  Or, I could try to find out whether it's
   possible do something similar through EnterpriseDB.
 
  I personally don't think the legal argument holds all that much water
  for snappy and lz4. But then the opinion of a European non-lawyer doesn't
  hold much either.
  Both are widely used by a large number of open and closed projects, some of
  which have patent grant clauses in their licenses. E.g. Hadoop and
  Cassandra use lz4, and I'd be surprised if the companies behind those
  have opened themselves to litigation.
 
  I think we should preliminarily decide which algorithm to use before we
  get lawyers involved. I'd be surprised if they can make such an analysis
  faster than we can rule out one of them via benchmarks.
 
  Will post an updated patch that includes lz4 as well.
 
  Attached.
 
 Well, the performance of both snappy and lz4 seems to be significantly
 better than pglz.  On these tests lz4 has a small edge but that might
 not be true on other data sets.

From what I've seen of independent benchmarks on more varied datasets,
and from what I tested (without pg in between), lz4 usually has a bigger
margin than this, especially on decompression.
The implementation also seems to be better prepared to run on more
platforms, e.g. it didn't require any fiddling with endian.h in contrast
to snappy.
But yes, even snappy would be a big improvement should lz4 turn out to
be problematic, and the performance difference isn't big enough to rule
one out, as I'd hoped.

 I still think the main issue is legal
 review: are there any license or patent concerns about including
 either of these algorithms in PG?  If neither of them have issues, we
 might need to experiment a little more before picking between them.
 If one does and the other does not, well, then it's a short
 conversation.

True. So, how do we proceed on that?

The ASF decided it was safe to use lz4 in cassandra. Does anybody have
contacts over there?

Btw, I have the feeling we hold this topic to a higher standard wrt
patent issues than other work in postgres...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Claudio Freire
On Tue, Jun 25, 2013 at 2:17 PM, Josh Berkus j...@agliodbs.com wrote:
 How should reviewers get credited in the release notes?
 c) on the patch they reviewed, for each patch

This not only makes sense, it also lets people reading release notes
know there's been a review, and how thorough it was. I know all
changes in PG get reviewed, but having that explicit in the release
notes would, I imagine, be useful. Especially if I know the reviewers.

The co-author part also makes a lot of sense. When a reviewer
introduces changes directly, and they get committed, I think it should
automatically be considered co-authoring the patch.

 Should there be a criteria for a creditable review?

 a) no, all reviews are worthwhile

It's not that they're all worthwhile, but arbitrary decisions lend
themselves to arbitrary criticism. Whatever the criteria are, they should
be straightforward, unambiguous and unbiased, and it's hard to get all
three with any criterion other than: all are worthwhile.

 b) yes, they have to do more than "it compiles"

This one's better than nothing, if you must have a minimum criterion.
But then people will just point out some trivial stuff and you'd be
tempted to say that trivialities don't count... and you get a snowball
going. IMHO, it's all or nothing.

 Should reviewers for 9.4 get a prize, such as a t-shirt, as a
 promotion to increase the number of non-submitter reviewers?

 b) no

Yeah, while a fun idea, I don't think the logistics of it make it
worth the effort. Too much effort for too little benefit.

And I think recognition is a far better incentive than T-shirts
anyway. I know I'd be encouraged to review patches for the recognition
alone, a lot more than for a T-shirt I might not get. Contributing is
nice, but feeling appreciated while doing so is better.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Josh Berkus
On 06/25/2013 11:26 AM, Andres Freund wrote:
 On 2013-06-25 11:04:38 -0700, Joshua D. Drake wrote:
 a) not at all
 b) in a single block titled "Reviewers for this version" at the bottom.
 c) on the patch they reviewed, for each patch

 C. The idea that reviewers are somehow less than authors is rather
 disheartening.
 
 It's not about the reviewers being less. It's a comparison of
 effort. The effort for a casual review simply isn't comparable with the
 effort spent on developing a nontrivial patch.

On the other hand, I will point out that we currently have a shortage of
reviewers, and we do NOT have a shortage of patch submitters.  Seems
like we should apply incentives where we need help, no?

Mind you, my votes are (B), (A), and (A).

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Joshua D. Drake


On 06/25/2013 11:26 AM, Andres Freund wrote:


On 2013-06-25 11:04:38 -0700, Joshua D. Drake wrote:

a) not at all
b) in a single block titled "Reviewers for this version" at the bottom.
c) on the patch they reviewed, for each patch


C. The idea that reviewers are somehow less than authors is rather
disheartening.


It's not about the reviewers being less. It's a comparison of
effort. The effort for a casual review simply isn't comparable with the
effort spent on developing a nontrivial patch.


I think this is a backwards way to look at it.

The effort may not be comparable but the drudgery certainly is.

Reviewing patches sucks. Writing patches (for the most part) is fun.

Should the patch submitter get first billing? Yes.
Should the reviewer who makes sure, to a reasonable level, that the patch 
is sane also get billing? Absolutely.

Should the reviewer get billing on the specific patch they reviewed? Yes.

As I mentioned before, in the release notes something like:

Author: Tom Lane
Reviewer(s): Greg Stark, Andrew Dunstan

I think that is perfectly reasonable.

JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  509-416-6579
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC, @cmdpromptinc
For my dreams of your image that blossoms
   a rose in the deeps of my heart. - W.B. Yeats


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Christopher Browne
On Tue, Jun 25, 2013 at 12:08 PM, Yuri Levinsky yu...@celltick.com wrote:

 Guys,
 I am sorry for taking your time. The reason for my question is:
 As former Oracle DBA and now simple beginner PostgreSQL DBA I would like
 to say: the current partitioning mechanism might be improved. Sorry, it
  seems to me far behind yesterday's requirements. As a model for improvement
  Oracle might be taken as an example. Unfortunately I am not writing
  C code and see my benefit to the PostgreSQL community only in raising this
  issue. I'll be very happy to be helpful in something else, but...


Please don't flee over this...

As I think you can see, now, the partitioning problem is tougher than it
may at first seem to be.  It's quite useful to quickly get to the point of
understanding that.

There would indeed be merit in improving the partitioning apparatus,
and actually, I think it's been a couple of years since there has been
serious discussion of this.

The discussion tends to head into the rabbit hole of disputing
whether one mechanism or another is ideal.  That's the wrong starting
point - we shouldn't start with what's easiest to make ideal, we
should start by determining what is required/desirable, without too
much reference, at least initially, to how to achieve it.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"


Re: [HACKERS] pluggable compression support

2013-06-25 Thread Josh Berkus
On 06/25/2013 11:42 AM, Andres Freund wrote:
 True. So, how do we proceed on that?
 
 The ASF decided it was safe to use lz4 in cassandra. Does anybody have
 contacts over there?
 
 Btw, I have the feeling we hold this topic to a higher standard wrt
 patent issues than other work in postgres...

We have access to attorneys, both in the US and Canada.  I will proceed
to ask for real legal advice.

However, can you tell me what exactly you are concerned about? lz4 is
under the BSD license, and released by Google.  Why are we worried, exactly?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] add long options to pgbench (submission 1)

2013-06-25 Thread Fabien COELHO



I think I'd like to quibble with some of the names a bit, though.


That is a good idea, because I'm not a native English speaker and I was 
not so sure about many of the options.



The patch adds --fill-factor, but I think we should spell it
without the hyphen: --fillfactor.


Fine with me.


I think --quiet-log should be spelled --quiet.


ISTM that --quiet usually means not verbose on stdout, so I added "log" 
because this was specific to the log output, and there may be a need 
for a --quiet option for stdout at some time.


I think --connect for each connection is not very descriptive; maybe 
--connection-per-transaction or something, although that's kind of long.


Yes, I think that it is too long. You have to type them!

What about '--reconnect'?


 I think -M should be aliased to --protocol, not --query-mode.


Fine with me.


--skip-some-update is incorrectly pluralized; if that's
what we're going to use, it should be --skip-some-updates.


Indeed.


Alternatively, we could use --update-large-tables-only, which might
make the intent more clear.


Yep, but quite long.


On another note, it doesn't look like this updates the output of
pgbench --help, which seems important.


Indeed, it should.

Please find attached a v4 which takes into account most of your comments, 
but avoids the very, very long option names.


--
Fabien.

diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 1303217..e8126f5 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -329,11 +329,16 @@ usage(void)
 		   Usage:\n
 		 %s [OPTION]... [DBNAME]\n
 		   \nInitialization options:\n
-		 -i   invokes initialization mode\n
-		 -F NUM   fill factor\n
-		 -n   do not run VACUUM after initialization\n
-		 -q   quiet logging (one message each 5 seconds)\n
-		 -s NUM   scaling factor\n
+		 -i, --initialize\n
+		  invokes initialization mode\n
+		 -F NUM, --fillfactor NUM\n
+		  set fill factor\n
+		 -n, --no-vacuum\n
+		  do not run VACUUM after initialization\n
+		 -q, --quiet-log\n
+		  quiet logging (one message each 5 seconds)\n
+		 -s NUM, --scale NUM\n
+		  scaling factor\n
 		 --foreign-keys\n
 		  create foreign key constraints between tables\n
 		 --index-tablespace=TABLESPACE\n
@@ -343,32 +348,50 @@ usage(void)
 		 --unlogged-tables\n
 		  create tables as unlogged tables\n
 		   \nBenchmarking options:\n
-		  -c NUM   number of concurrent database clients (default: 1)\n
-		 -C   establish new connection for each transaction\n
-		 -D VARNAME=VALUE\n
+		 -c NUM, --client NUM\n
+		  number of concurrent database clients (default: 1)\n
+		 -C, --connect\n
+		  establish new connection for each transaction\n
+		 -D VARNAME=VALUE, --define VARNAME=VALUE\n
 		  define variable for use by custom script\n
-		 -f FILENAME  read transaction script from FILENAME\n
-		 -j NUM   number of threads (default: 1)\n
-		 -l   write transaction times to log file\n
-		 -M simple|extended|prepared\n
-		  protocol for submitting queries to server (default: simple)\n
-		 -n   do not run VACUUM before tests\n
-		 -N   do not update tables \pgbench_tellers\ and \pgbench_branches\\n
-		 -r   report average latency per command\n
-		 -s NUM   report this scale factor in output\n
-		 -S   perform SELECT-only transactions\n
-	   -t NUM   number of transactions each client runs (default: 10)\n
-		 -T NUM   duration of benchmark test in seconds\n
-		 -v   vacuum all four standard tables before tests\n
+		 -f FILENAME, --file FILENAME\n
+		  read transaction script from FILENAME\n
+		 -j NUM, --jobs NUM\n
+		  number of threads (default: 1)\n
+		 -l, --log    write transaction times to log file\n
+		 -M simple|extended|prepared, --protocol ...\n
+		  protocol for submitting queries to server 
+		   (default: simple)\n
+		 -n, --no-vacuum\n
+		  do not run VACUUM before tests\n
+		 -N, --skip-some-updates\n
+		  do not update tables \pgbench_tellers\
+		  and \pgbench_branches\\n
+		 -r, --report-latencies\n
+		  report average latency per command\n
+		 -s NUM, --scale NUM\n
+		  report this scale factor in output\n
+		 -S, --select-only\n
+		  perform SELECT-only transactions\n
+		 -t NUM, --transactions NUM\n
+		  number of transactions each client runs 
+		 (default: 10)\n
+		 -T NUM, --time NUM\n
+		  duration of 

Re: [HACKERS] pluggable compression support

2013-06-25 Thread Andres Freund
On 2013-06-25 12:08:22 -0700, Josh Berkus wrote:
 On 06/25/2013 11:42 AM, Andres Freund wrote:
  True. So, how do we proceed on that?
  
  The ASF decided it was safe to use lz4 in cassandra. Does anybody have
  contacts over there?
  
  Btw, I have the feeling we hold this topic to a higher standard wrt
  patent issues than other work in postgres...
 
 We have access to attorneys, both in the US and Canada.  I will proceed
 to ask for real legal advice.

Cool!

 However, can you tell me what exactly you are concerned about? lz4 is
 under the BSD license, and released by Google.

Snappy is released/copyrighted by Google; lz4 by Yann Collet.

Both are under BSD licenses (3 and 2 clause variants respectively). I
don't think we need to worry about the license.

 Why are we worried, exactly?

The concerns I heard about were all about patent issues.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pluggable compression support

2013-06-25 Thread Claudio Freire
On Tue, Jun 25, 2013 at 4:15 PM, Andres Freund and...@2ndquadrant.com wrote:
 However, can you tell me what exactly you are concerned about? lz4 is
 under the BSD license, and released by Google.

 Snappy is released/copyrighted by google. lz4 by Yann Collet.

 Both are under BSD licenses (3 and 2 clause variants respectively). I
 don't think we need to worry about the license.

 Why are we worried, exactly?

 The concerns I heard about were all about patent issues.


IANAL, but patents have nothing to do with licenses. Licenses apply to
specific implementations, whereas software patents apply to processes
(ironically, since patents on processes are forbidden in general, but
they get granted anyway for software). Two implementations may differ,
and be under different licenses, but the same patent may apply to both.

It's a sick state of affairs when you can implement lz4 completely
yourself and still be unclear about whether your own, clean-room
implementation is encumbered by patents or not.

A lawyer has to sift through patent applications to know whether this
particular implementation is encumbered, and that's a very
time-consuming process (and thus very expensive).

If you can take the effort, it would be greatly beneficial I imagine.
But I think you're underestimating what those lawyers will ask for it.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pluggable compression support

2013-06-25 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 However, can you tell me what exactly you are concerned about? lz4 is
 under the BSD license, and released by Google.  Why are we worried, exactly?

Patents.  The license on the code doesn't matter --- worst case, if
someone objected, we could rewrite the algorithm ourselves to get out
of an alleged copyright violation.  But if someone comes after us for
a patent violation we're screwed; or at least, our users who have
terabytes of data stored with an infringing algorithm are screwed.

Andres is right that we're paying closer attention to patent risks here
than we do most places.  That's because we know that the whole arena
of data compression is a minefield of patents.  It would be
irresponsible not to try to check.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread Dean Rasheed
On 25 June 2013 18:17, Josh Berkus j...@agliodbs.com wrote:
 Hackers,

 I'd like to take a straw poll here on how we should acknowledge
 reviewers.  Please answer the below with your thoughts, either on-list
 or via private email.

 How should reviewers get credited in the release notes?

 a) not at all
 b) in a single block titled "Reviewers for this version" at the bottom.
 c) on the patch they reviewed, for each patch


b) Unless they contribute enough to the patch to be considered a co-author.


 Should there be a criteria for a creditable review?

 a) no, all reviews are worthwhile
 b) yes, they have to do more than "it compiles"
 c) yes, only code reviews should count


a) Sometimes even "it compiles" can be worthwhile, if there is doubt
over platform compatibility. All contributions should be acknowledged
and encouraged.


 Should reviewers for 9.4 get a prize, such as a t-shirt, as a
 promotion to increase the number of non-submitter reviewers?

 a) yes
 b) no
 c) yes, but submitters and committers should get it too


b) Getting your name in the fine manual is reward enough ;-)

Regards,
Dean


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pluggable compression support

2013-06-25 Thread Josh Berkus
On 06/25/2013 12:23 PM, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
 However, can you tell me what exactly you are concerned about? lz4 is
 under the BSD license, and released by Google.  Why are we worried, exactly?
 
 Patents.  The license on the code doesn't matter --- worst case, if
 someone objected, we could rewrite the algorithm ourselves to get out
 of an alleged copyright violation.  But if someone comes after us for
 a patent violation we're screwed; or at least, our users who have
 terabytes of data stored with an infringing algorithm are screwed.

Taking this off-list, because it is a legal matter.  Particularly, it's
legally problematic to discuss patents on a public mailing list, as we
found out with the ARC patent.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Kudos for Reviewers -- straw poll

2013-06-25 Thread David Fetter
On Tue, Jun 25, 2013 at 10:17:07AM -0700, Josh Berkus wrote:
 Hackers,
 
 I'd like to take a straw poll here on how we should acknowledge
 reviewers.  Please answer the below with your thoughts, either on-list
 or via private email.
 
 How should reviewers get credited in the release notes?
 
 a) not at all
 b) in a single block titled "Reviewers for this version" at the bottom.
 c) on the patch they reviewed, for each patch

c)  This keeps history better.

 Should there be a criteria for a creditable review?
 
 a) no, all reviews are worthwhile
 b) yes, they have to do more than "it compiles"
 c) yes, only code reviews should count

a).  If it turns out that people are gaming this, or appear to be
gaming this, we can revisit the policy.

 Should reviewers for 9.4 get a prize, such as a t-shirt, as a
 promotion to increase the number of non-submitter reviewers?
 
 a) yes
 b) no
 c) yes, but submitters and committers should get it too

b).  You want to be *extremely* careful when paying volunteers.  The
chances of damaging their intrinsic motivations are high, especially
when it's not stuff like covering their expenses.

http://www.iew.uzh.ch/wp/iewwp007.pdf

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Hash partitioning.

2013-06-25 Thread Tom Lane
Christopher Browne cbbro...@gmail.com writes:
 There would indeed be merit in improving the partitioning apparatus,
 and actually, I think it's been a couple of years since there has been
 serious discussion of this.

We could certainly use a partitioning mechanism that's easier to use
than what we have now, which is basically "build it yourself, here's
the parts bin".  There would also be some performance benefits from
moving the partitioning logic into hard-wired code.

However, I find it hard to think that hash partitioning as such is very
high on the to-do list.  As was pointed out upthread, the main practical
advantage of partitioning is *not* performance of routine queries, but
improved bulk-data management such as the ability to do periodic
housecleaning by dropping a partition.  If your partitioning is on a
hash, you've thrown away any such advantage, because there's no
real-world meaning to the way the data's been split up.  So I find range
and list partitioning way more plausible.
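
To make that concrete, here is a minimal sketch of the current
parts-bin style of range partitioning (hypothetical table and partition
names), where the periodic housecleaning is a cheap catalog operation
rather than a bulk DELETE:

  create table events (created date, payload text);
  create table events_2013_05
    (check (created >= '2013-05-01' and created < '2013-06-01'))
    inherits (events);
  create table events_2013_06
    (check (created >= '2013-06-01' and created < '2013-07-01'))
    inherits (events);

  -- monthly housecleaning: detach and drop the oldest partition
  alter table events_2013_05 no inherit events;
  drop table events_2013_05;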

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] LATERAL quals revisited

2013-06-25 Thread Tom Lane
I've been studying the bug reported at
http://www.postgresql.org/message-id/20130617235236.GA1636@jeremyevans.local
that the planner can do the wrong thing with queries like

SELECT * FROM
  i LEFT JOIN LATERAL (SELECT * FROM j WHERE i.n = j.n) j ON true;

I think the fundamental problem is that, because the i.n = j.n clause
appears syntactically in WHERE, the planner is treating it as if it were
an inner-join clause; but really it ought to be considered a clause of
the upper LEFT JOIN.  That is, semantically this query ought to be
equivalent to

SELECT * FROM
  i LEFT JOIN LATERAL (SELECT * FROM j) j ON i.n = j.n;

However, because distribute_qual_to_rels doesn't see the clause as being
attached to the outer join, it's not marked with the correct properties
and ends up getting evaluated in the wrong place (as a filter clause
not a join filter clause).  The bug is masked in the test cases we've
used so far because those cases are designed to let the clause get
pushed down into the scan of the inner relation --- but if it doesn't
get pushed down, it's evaluated the wrong way.
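
A minimal setup to compare the two forms (hypothetical single-column
tables; the point is that both queries ought to return every row of i,
null-extended where j has no match):

  create table i (n int);
  create table j (n int);
  insert into i values (1), (2);
  insert into j values (1);

  select * from i left join lateral
    (select * from j where i.n = j.n) j on true;

  select * from i left join lateral
    (select * from j) j on i.n = j.n;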

After some contemplation, I think that the most practical way to fix
this is for deconstruct_recurse and distribute_qual_to_rels to
effectively move such a qual to the place where it logically belongs;
that is, rather than processing it when we look at the lower WHERE
clause, set it aside for a moment and then add it back when looking at
the ON clause of the appropriate outer join.  This should be reasonably
easy to do by keeping a list of postponed lateral clauses while we're
scanning the join tree.

For there to *be* a unique appropriate outer join, we need to require
that a LATERAL-using qual clause that's under an outer join contain
lateral references only to the outer side of the nearest enclosing outer
join.  There's no such restriction in the spec of course, but we can
make it so by refusing to flatten a sub-select if pulling it up would
result in having a clause in the outer query that violates this rule.
There's already some code in prepjointree.c (around line 1300) that
attempts to enforce this, though now that I look at it again I'm not
sure it's covering all the bases.  We may need to extend that check.

I'm inclined to process all LATERAL-using qual clauses this way, ie
postpone them till we recurse back up to a place where they can
logically be evaluated.  That won't make any real difference when no
outer joins are present, but it will eliminate the ugliness that right
now distribute_qual_to_rels is prevented from sanity-checking the scope
of the references in a qual when LATERAL is present.  If we do it like
this, we can resurrect full enforcement of that sanity check, and then
throw an error if any postponed quals are left over when we're done
recursing.

Thoughts, better ideas?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-06-25 Thread Robert Haas
On Tue, Jun 25, 2013 at 1:15 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 I'm not sure it's a good idea to sleep proportionally to the time it took to
 complete the previous fsync. If you have a 1GB cache in the RAID controller,
 fsyncing a 1GB segment will fill it up. But since it fits in cache, it
 will return immediately. So we proceed fsyncing other files, until the cache
 is full and the fsync blocks. But once we fill up the cache, it's likely
 that we're hurting concurrent queries. ISTM it would be better to stay under
 that threshold, keeping the I/O system busy, but never fill up the cache
 completely.

Isn't the behavior implemented by the patch a reasonable approximation
of just that?  When the fsyncs start to get slow, that's when we start
to sleep.   I'll grant that it would be better to sleep when the
fsyncs are *about* to get slow, rather than when they actually have
become slow, but we have no way to know that.  The only feedback we
have on how bad things are is how long it took the last fsync to
complete, so I actually think that's a much better way to go than any
fixed sleep - which will often be unnecessarily long on a well-behaved
system, and which will often be far too short on one that's having
trouble.  I'm inclined to think Kondo-san has got it right.

I like your idea of putting a stake in the ground and assuming that
the fsync phase will turn out to be X% of the checkpoint, but I wonder
if we can be a bit more sophisticated, especially for cases where
checkpoint_segments is small.  When checkpoint_segments is large, then
we know that some of the data will get written back to disk during the
write phase, because the OS cache is only so big.  But when it's
small, the OS will essentially do nothing during the write phase, and
then it's got to write all the data out during the fsync phase.  I'm
not sure we can really model that effect thoroughly, but even
something dumb would be smarter than what we have now - e.g. use 10%,
but when checkpoint_segments < 10, use 1/checkpoint_segments.  Or just
assume the fsync phase will take 30 seconds.  Or ... something.  I'm
not really sure what the right model is here.
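
Expressed as a formula (one possible reading of the above, not
something settled), with f the fraction of the checkpoint interval
reserved for the fsync phase:

    \( f = \max\left(0.1,\; \frac{1}{\text{checkpoint\_segments}}\right) \)

so the write phase would be scheduled to finish by a fraction 1 - f of
the checkpoint's time budget.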

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Improvement of checkpoint IO scheduler for stable transaction responses

2013-06-25 Thread Heikki Linnakangas

On 25.06.2013 23:03, Robert Haas wrote:

On Tue, Jun 25, 2013 at 1:15 PM, Heikki Linnakangas
hlinnakan...@vmware.com  wrote:

I'm not sure it's a good idea to sleep proportionally to the time it took to
complete the previous fsync. If you have a 1GB cache in the RAID controller,
fsyncing a 1GB segment will fill it up. But since it fits in cache, it
will return immediately. So we proceed fsyncing other files, until the cache
is full and the fsync blocks. But once we fill up the cache, it's likely
that we're hurting concurrent queries. ISTM it would be better to stay under
that threshold, keeping the I/O system busy, but never fill up the cache
completely.


Isn't the behavior implemented by the patch a reasonable approximation
of just that?  When the fsyncs start to get slow, that's when we start
to sleep.   I'll grant that it would be better to sleep when the
fsyncs are *about* to get slow, rather than when they actually have
become slow, but we have no way to know that.


Well, that's the point I was trying to make: you should sleep *before* 
the fsyncs get slow.



The only feedback we have on how bad things are is how long it took
the last fsync to complete, so I actually think that's a much better
way to go than any fixed sleep - which will often be unnecessarily
long on a well-behaved system, and which will often be far too short
on one that's having trouble. I'm inclined to think Kondo-san
has got it right.


Quite possible, I really don't know. I'm inclined to first try the 
simplest thing possible, and only make it more complicated if that's not 
good enough. Kondo-san's patch wasn't very complicated, but nevertheless 
a fixed sleep between every fsync, unless you're behind the schedule, is 
even simpler. In particular, it's easier to tie that into the checkpoint 
scheduler - I'm not sure how you'd measure progress or determine how 
long to sleep unless you assume that every fsync is the same.



I like your idea of putting a stake in the ground and assuming that
the fsync phase will turn out to be X% of the checkpoint, but I wonder
if we can be a bit more sophisticated, especially for cases where
checkpoint_segments is small.  When checkpoint_segments is large, then
we know that some of the data will get written back to disk during the
write phase, because the OS cache is only so big.  But when it's
small, the OS will essentially do nothing during the write phase, and
then it's got to write all the data out during the fsync phase.  I'm
not sure we can really model that effect thoroughly, but even
something dumb would be smarter than what we have now - e.g. use 10%,
but when checkpoint_segments < 10, use 1/checkpoint_segments.  Or just
assume the fsync phase will take 30 seconds.


If checkpoint_segments < 10, there isn't very much dirty data to flush 
out. It isn't really a problem in that case - no matter how stupidly we 
do the writing and fsyncing, the I/O cache can absorb it. It doesn't 
really matter what we do in that case.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

