Re: [HACKERS] Patch for fail-back without fresh backup

2013-07-07 Thread Sawada Masahiko
On Mon, Jun 17, 2013 at 8:48 PM, Simon Riggs si...@2ndquadrant.com wrote:
 On 17 June 2013 09:03, Pavan Deolasee pavan.deola...@gmail.com wrote:

 I agree. We should probably find a better name for this. Any suggestions?

 err, I already made one...

 But that's not the whole story. I can see some utility in a patch that
 makes all WAL transfer synchronous, rather than just commits. Some
 name like synchronous_transfer might be appropriate. e.g.
 synchronous_transfer = all | commit (default).

 Since commits are more foreground in nature, and this feature
 does not require us to wait during common foreground activities, we want a
 configuration where the master can wait for synchronous transfers at points
 other than commits. Maybe we can solve that by having more granular control
 over the said parameter?


 The idea of another slew of parameters that are very similar to
 synchronous replication but yet somehow different seems weird. I can't
 see a reason why we'd want a second lot of parameters. Why not just
 use the existing ones for sync rep? (I'm surprised the Parameter
 Police haven't visited you in the night...) Sure, we might want to
 expand the design for how we specify multi-node sync rep, but that is
 a different patch.


 How would we then distinguish between synchronous and the new kind of
 standby ?

 That's not the point. The point is "Why would we have a new kind of
 standby?" and therefore "Why do we need new parameters?"

 I am told, one of the very popular setups for DR is to have one
 local sync standby and one async (may be cascaded by the local sync). Since
 this new feature is more useful for DR because taking a fresh backup on a
 slower link is even more challenging, IMHO we should support such setups.

 ...which still doesn't make sense to me. Let's look at that in detail.

 Take 3 servers, A, B, C with A and B being linked by sync rep, and C
 being a safety standby at a distance.

 Either A or B is master, except in disaster. So if A is master, then B
 would be the failover target. If A fails, then you want to failover to
 B. Once B is the target, you want to failback to A as the master. C
 needs to follow the new master, whichever it is.

 Suppose you set up sync rep between A and B and this new mode between A and
 C. When B becomes the master, you need to fail back from B to A, but
 you can't because the new mode applied between A and C only, so you
 have to failback from C to A. So having the new mode not match with
 sync rep means you are forcing people to failback using the slow link
 in the common case.

 You might observe that having the two modes match causes problems if A
 and B fail, so you are forced to go to C as master and then eventually
 failback to A or B across a slow link. That case is less common and
 could be solved by extending sync transfer to more/multi nodes.

 It definitely doesn't make sense to have sync rep on anything other
 than a subset of sync transfer. So while it may be sensible in the
 future to make sync transfer a superset of sync rep nodes, it makes
 sense to make them the same config for now.
I have updated the patch.

We support the following two cases:
1. a SYNC standby that is also a failback-safe standby
2. an ASYNC standby that is also a failback-safe standby

1. Renamed the parameter
  Dropped the 'failback_safe_standby_names' parameter from the first patch,
and renamed 'failback_safe_mode' to 'synchronous_transfer'.
  This parameter accepts 'all', 'data_flush' and 'commit'.

  - 'commit'
    The master waits, at commit, for the corresponding WAL to be flushed
to disk on the standby server, but does not wait for replicated data pages.

  - 'data_flush'
    The master waits for data pages (e.g., CLOG, pg_control) to be
replicated before flushing them to disk on the master server.
With this setting, the 'synchronous_commit' value is ignored even if
the user sets 'synchronous_commit'.

  - 'all'
    The master waits for both replicated WAL and data pages.
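To make the three modes concrete, here is a small decision table modelled in Python (the mode and event names follow the description above; the must_wait helper is purely illustrative and not part of the patch):

```python
# Illustrative model of the proposed synchronous_transfer modes.
# Events: "commit"     = flushing WAL at transaction commit
#         "data_flush" = flushing a replicated data page (e.g. CLOG, pg_control)

def must_wait(mode, event):
    """Does the master wait for the standby before this event completes?"""
    waits = {
        "commit":     {"commit"},               # wait only at commits
        "data_flush": {"data_flush"},           # wait only for data pages;
                                                # synchronous_commit is ignored
        "all":        {"commit", "data_flush"}, # wait for both
    }
    return event in waits[mode]

for mode in ("commit", "data_flush", "all"):
    print(mode, must_wait(mode, "commit"), must_wait(mode, "data_flush"))
```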

2. Moved SyncRepWaitForLSN() into XLogFlush()
  We have moved the SyncRepWaitForLSN() call into the XLogFlush() function,
and changed the arguments of XLogFlush().

These are the setup cases and the parameters that need to be set:

- a SYNC standby that is also a failback-safe standby (case 1)
  synchronous_transfer = all
  synchronous_commit = remote_write/on
  synchronous_standby_names = ServerName

- an ASYNC standby that is also a failback-safe standby (case 2)
  synchronous_transfer = data_flush
  (the synchronous_commit value is ignored)

- default SYNC replication
  synchronous_transfer = commit
  synchronous_commit = on
  synchronous_standby_names = ServerName

- default ASYNC replication
  synchronous_transfer = commit

ToDo
1. Currently this patch supports a single synchronous_transfer setting, so
we can't set a different synchronous transfer mode for each server.
We need to improve the patch to support the following cases:
   - SYNC standby 


Re: [HACKERS] Add regression tests for ROLE (USER)

2013-07-07 Thread Fabien COELHO


Generally, I think that the tests which return a syntax error are of 
limited value and should probably be dropped.


I think that it is not that simple: there is good value in checking that the
syntax error message conveys useful information to the user, and that
changes to the parser rules do not degrade good-quality error messages.


Moreover, the cost of such tests in time must be quite minimal.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] How to implement Gin method?

2013-07-07 Thread Martijn van Oosterhout
On Sun, Jul 07, 2013 at 10:00:16AM +0900, Kenji uno wrote:
 Hi.
 
 I want to try GIN and know programming information of GIN technology.
 
 Please teach me about these functions extractValue, extractQuery and 
 consistent.
 
 I have posted question at stack overflow.
 
 http://stackoverflow.com/questions/17489703/postgresql-how-to-implement-gin

The documentation refers to the author's pages:
http://www.sai.msu.su/~megera/wiki/Gin

Did they help at all?

Also, GIN cannot be applied to just anything. It makes it possible to
index certain types that are difficult to index any other way, like
full-text search.  If you give some idea of what you'd like to index
then we can give an idea of what the functions should do.
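For illustration, the division of labour between the three support functions can be modelled with a toy inverted index in Python (the function names mirror the GIN support functions extractValue, extractQuery and consistent; the implementation is only a sketch, not how GIN actually stores anything):

```python
# Toy model of a GIN index over text documents (full-text-search style).

def extract_value(doc):
    """extractValue: the keys to index for one stored item."""
    return set(doc.lower().split())

def extract_query(query):
    """extractQuery: the keys to look up for a query (here: AND of all words)."""
    return query.lower().split()

def consistent(check):
    """consistent: does an item match, given which query keys it contains?"""
    return all(check)  # AND semantics; an OR query would use any(check)

# Build the inverted index: key -> set of matching item ids.
docs = {1: "the quick brown fox", 2: "lazy brown dog"}
index = {}
for item_id, doc in docs.items():
    for key in extract_value(doc):
        index.setdefault(key, set()).add(item_id)

def search(query):
    keys = extract_query(query)
    return {item_id for item_id in docs
            if consistent([item_id in index.get(k, set()) for k in keys])}

print(search("brown fox"))   # only document 1 contains both words
```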

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




[HACKERS] postgresMain() function

2013-07-07 Thread mohsen soodkhah mohammadi
Hello.
In postgres.c we have a switch on firstchar. One of its cases is
'Q', which is for simple queries.
I want to know what operations the other cases are for.
And: is the 'Q' case used for DDL and DML commands?
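(For reference, the firstchar values are frontend/backend protocol message types. A rough map of the common cases handled by that switch, written as an illustrative Python table rather than the actual C source; and yes, 'Q' carries any SQL text sent as a simple query, DDL and DML alike:)

```python
# Frontend/backend protocol (v3) message types dispatched in PostgresMain().
MESSAGE_TYPES = {
    "Q": "simple query (any SQL text: DDL, DML, SELECT, ...)",
    "P": "Parse (extended query protocol: prepare a statement)",
    "B": "Bind (bind parameters to a prepared statement)",
    "E": "Execute (run a bound portal)",
    "D": "Describe (describe a statement or portal)",
    "C": "Close (close a statement or portal)",
    "H": "Flush (flush pending output to the client)",
    "S": "Sync (end of an extended-protocol command)",
    "F": "function call (fastpath interface)",
    "X": "Terminate (client is disconnecting)",
    "d": "CopyData (COPY data row)",
    "c": "CopyDone (end of COPY)",
    "f": "CopyFail (COPY aborted by client)",
}

def dispatch(firstchar):
    return MESSAGE_TYPES.get(firstchar, "unknown message type")

print(dispatch("Q"))
```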


[HACKERS] WAL and XLOG

2013-07-07 Thread mohsen soodkhah mohammadi
hello.
I am reading about WAL and XLOG records.
I am beginner in them.
can you direct me and give me some document?
I want to know what did save in XLOG records exactly and who can create
XLOG records?


Re: [HACKERS] copy pasted include guard in attoptcache.h

2013-07-07 Thread Magnus Hagander
On Sat, Jul 6, 2013 at 12:12 PM, Andres Freund and...@2ndquadrant.com wrote:
 Hi,

 attoptcache.h currently uses #ifndef SPCCACHE_H. The attached patch
 fixes that.
 A quick
 $ grep -r '\#ifndef' src/include/|grep _H|awk '{print $2}'|sort|uniq -cd
 doesn't show any further duplicated ones.

Applied, thanks!


--
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Patch to add regression tests for SCHEMA

2013-07-07 Thread Robins Tharakan
On 3 July 2013 10:19, Robert Haas robertmh...@gmail.com wrote:

 In this particular case, I think that adding a new set of tests isn't
 the right thing anyway.  Schemas are also known as namespaces, and the
 existing namespace test is where related test cases live.  Add these
 tests there instead.

 Rename regression users to regress_rol_nsp1 etc. per convention
 established in the CREATE OPERATOR patch.


Hi Robert,

PFA an updated patch:
- Renamed ROLEs as per new feedback (although the previous naming was also
as per earlier feedback)
- Merged tests into namespace

--
Robins Tharakan


regress_schema_v5.patch
Description: Binary data



Re: [HACKERS] Review: extension template

2013-07-07 Thread Markus Wanner
Salut Dimitri,

On 07/06/2013 10:30 PM, Dimitri Fontaine wrote:
 Yes, I did share this viewpoint over the naming of the feature, but Tom
 insisted that we already have those kind of templates for text search.

I think it's a question of what mental model we want extensions to
follow. See my other mail. IMO "inline" could well serve as a
differentiator against "out-of-line", i.e. file-system based extensions
(or templates).

 Could you go into more details into your ideas here? I don't understand
 what you're suggesting.

Sorry for not making that clearer. See my follow-up mail "template-ify
(binary) extensions":
http://archives.postgresql.org/message-id/51d72c1d.7010...@bluegap.ch

 Compiling pg_upgrade_support in contrib fails:

 $SRC/contrib/pg_upgrade_support/pg_upgrade_support.c:193:8:
   error: too few arguments to function ‘InsertExtensionTuple’
 
 I don't have that problem here. Will try to reproduce early next week.

'make' followed by 'make -C contrib' reproduces that for me.

 Will have a look at what it takes to implement support for better error
 messages. May I suggest to implement that later, though?

Great, thanks. I agree this is not a major issue and can be deferred.

 The idea here is to protect against mixing the file based extension
 installation mechanism with the catalogs one. I can see now that the
 already installed extension could have been installed using a template
 in the first place, so that message now seems strange.
 
 I still think that we shouldn't allow creating a template for an
 extension that is already installed, though. Do you have any suggestions
 for a better error message?

If we go for the template model, I beg to differ. In that mind-set, you
should be free to create (or delete) any kind of template without
affecting pre-existing extensions.

However, in case we follow the ancestor-derivate or link model with the
pg_depend connection, the error message seems fine.

 However, it's possible to enable an extension and then rename its
 template. The binding (as in pg_depend) is still there, but the above
 error (in that case extension $OLD_NAME already existing) certainly
 doesn't make sense. One can argue whether or not an extension with a
 different name is still the same extension at all...
 
 When renaming a template, the check against existing extensions of the
 same name is made against the new name of the template, so I don't see
 what you say here in the code. Do you have a test case?

Consider this (not actually tested again, only off the top of my head):

  # CREATE TEMPLATE FOR EXTENSION foo ...
  ERROR: Extension bar already existing

"Uh? ... so what?" is my reaction to such an error message. It fails to
make the distinction between template and extension, or rather parent
and derivate. The link between the two is the actual reason for the failure.

In any case, I'm arguing this template renaming behavior (and the
subsequent error) is the wrong thing to do anyway, because an
extension being linked to a parent of a different name is weird and IMO
not an acceptable state.

In the link model, the name should better be thought of as a property of
the parent. A rename of it should thus rename the derived extensions in
all databases. That would prevent the nasty case of having a parent with
different name than the derivative extension. (I note that existing
file-system extensions do not work that way. They are a lot closer to
the template model. However, that's just an argument for following that
as well for inline extensions and dropping the pg_depend link between
extension and template.)

In the template model, renaming the template should not have an effect
on any extension. Neither should creation or deletion of any template.
Thus, creating a template with the same name as a pre-existing extension
(in any database) is a completely fine and valid operation. No error
needs to be thrown. This nicely shows why I currently favor the template
approach: it seems easier to understand *and* easier to implement.

 Trying to alter an inexistent or file-system stored extension doesn't
 throw an error, but silently succeeds. Especially in the latter case,
 that's utterly misleading. Please fix.
 
 Fixed in my github branch

Nice.

 That started to get me worried about the effects of a mixed
 installation, but I quickly figured it's not possible to have a full
 version on disk and then add an incremental upgrade via the system
 catalogs. I think that's a fair limitation, as mixed installations would
 pose their own set of issues. On the other hand, isn't ease of upgrades
 a selling point for this feature?
 
 The main issue to fix when you want to have that feature, which I want
 to have, is how to define a sane pg_dump policy around the thing. As we
 couldn't define that in the last rounds of reviews, we decided not to
 allow the case yet.

I bet that's because people have different mental models in mind. But I
probably stressed that point enough by now...

 I think that's a 

Re: [HACKERS] Review: extension template

2013-07-07 Thread Markus Wanner
On 07/06/2013 10:30 PM, Dimitri Fontaine wrote:
 I still think that we shouldn't allow creating a template for an
 extension that is already installed, though. Do you have any suggestions
 for a better error message?

Oh, I just realized that pg_extension_{template,control,uptmpl} are not
SHARED catalogs, so you need to install the template per-database and
then enable it - per-database *again*. Why is that?

If you want to just upload extensions to a database via libpq, that
should be a single step (maybe rather just CREATE EXTENSION ... AS ...).
If you want templates, those should be applicable to all databases, no?

Regards

Markus Wanner




Re: [HACKERS] Add some regression tests for SEQUENCE

2013-07-07 Thread Robins Tharakan
On 3 July 2013 10:13, Robert Haas robertmh...@gmail.com wrote:

 I think you should rename the roles used here to regress_rol_seq1 etc.
 to match the CREATE OPERATOR patch.


Please find updated patch:
- 'make check' successful with recent changes
- Renamed ROLEs as per feedback

--
Robins Tharakan


regress_sequence_v5.patch
Description: Binary data



Re: [HACKERS] WAL and XLOG

2013-07-07 Thread Amit Langote
On Sun, Jul 7, 2013 at 3:48 PM, mohsen soodkhah mohammadi
mohsensoodk...@gmail.com wrote:
 Hello.
 I am reading about WAL and XLOG records.
 I am a beginner with them.
 Can you point me to some documentation?
 I want to know exactly what is saved in XLOG records and what can create
 XLOG records?


You could start here:
http://www.postgresql.org/docs/9.2/static/wal-intro.html


--
Amit Langote




Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2013-07-07 Thread Simon Riggs
On 3 January 2012 18:42, Tom Lane t...@sss.pgh.pa.us wrote:
 I wrote:
 Another point that requires some thought is that switching SnapshotNow
 to be MVCC-based will presumably result in a noticeable increase in each
 backend's rate of wanting to acquire snapshots.

 BTW, I wonder if this couldn't be ameliorated by establishing some
 ground rules about how up-to-date a snapshot really needs to be.
 Arguably, it should be okay for successive SnapshotNow scans to use the
 same snapshot as long as we have not acquired a new lock in between.
 If not, reusing an old snap doesn't introduce any race condition that
 wasn't there already.

Now that has been implemented using the above design, we can resubmit
the lock level reduction patch, with thanks to Robert.

Submitted patch passes original complaint/benchmark.

Changes
* various forms of ALTER TABLE, notably ADD constraint and VALIDATE
* CREATE TRIGGER

One minor correction to earlier thinking with respect to toast
tables; that might be relaxed later.

Full tests including proof of lock level reductions, plus docs.
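The snapshot-reuse ground rule quoted above (keep using the same snapshot until a new lock is acquired) can be sketched as a small model; all names here are illustrative, not the backend's:

```python
# Model: successive SnapshotNow-style catalog scans reuse one snapshot,
# and only acquiring a new lock forces a fresh one.
import itertools

class SnapshotManager:
    def __init__(self):
        self._counter = itertools.count(1)
        self._cached = None

    def get_catalog_snapshot(self):
        if self._cached is None:
            self._cached = next(self._counter)  # take a fresh snapshot
        return self._cached

    def acquire_lock(self, relation):
        # A newly acquired lock may expose concurrent DDL, so the old
        # snapshot can no longer be assumed up to date.
        self._cached = None

mgr = SnapshotManager()
s1 = mgr.get_catalog_snapshot()
s2 = mgr.get_catalog_snapshot()   # reused: no lock taken in between
mgr.acquire_lock("pg_class")
s3 = mgr.get_catalog_snapshot()   # fresh snapshot after the lock
print(s1, s2, s3)
```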

--
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


reduce_lock_levels.v13.patch
Description: Binary data



Re: [HACKERS] vacuumlo - use a cursor

2013-07-07 Thread Josh Kupershmidt
On Mon, Nov 12, 2012 at 5:14 PM, Andrew Dunstan and...@dunslane.net wrote:
 vacuumlo is rather simpleminded about dealing with the list of LOs to be
 removed - it just fetches them as a straight resultset. For one of our
 clients this resulted in an out of memory condition.

Wow, they must have had a ton of LOs. With about 2M entries to pull
from vacuum_l, I observed unpatched vacuumlo using only about 45MB
RES. Still, the idea of using a cursor for the main loop seems like a
reasonable idea.

 The attached patch tries to
 remedy that by using a cursor instead. If this is wanted I will add it to
 the next commitfest. The actual changes are very small - most of the patch
 is indentation changes due to the introduction of an extra loop.

I had some time to review this, some comments about the patch:

1. I see this new compiler warning:

vacuumlo.c: In function ‘vacuumlo’:
vacuumlo.c:306:5: warning: format ‘%lld’ expects argument of type
‘long long int’, but argument 4 has type ‘long int’ [-Wformat]

2. It looks like the patch causes all the intermediate result-sets
fetched from the cursor to be leaked, rather negating its stated
purpose ;-) The PQclear() call should be moved up into the main loop.
With this fixed, I confirmed that vacuumlo now consumes a negligible
amount of memory when chewing through millions of entries.

3. A few extra trailing whitespaces were added.

4. The FETCH FORWARD count comes from transaction_limit. That seems
like a good-enough guesstimate, but maybe a comment could be added to
rationalize?
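The memory behaviour at stake is easy to demonstrate in miniature. Here is a Python sketch of the batched-cursor pattern, where the batch size stands in for the FETCH FORWARD count and dropping each batch stands in for the moved-up PQclear() call (all names illustrative):

```python
# Fetch a large result set in batches, discarding each batch after use,
# so peak memory is bounded by the batch size instead of the total rows.

def fetch_in_batches(total_rows, batch_size):
    """Stand-in for DECLARE CURSOR ... / FETCH FORWARD batch_size."""
    for start in range(0, total_rows, batch_size):
        yield list(range(start, min(start + batch_size, total_rows)))

peak = 0
processed = 0
for batch in fetch_in_batches(total_rows=2_000_000, batch_size=1000):
    peak = max(peak, len(batch))   # rows held in memory at once
    processed += len(batch)
    # the batch is dropped here -- the analogue of PQclear() inside the loop

print(processed, peak)
```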

Some suggested changes attached with v2 patch (all except #4).

Josh


vacuumlo-cursor.v2.patch
Description: Binary data



Re: [HACKERS] Add more regression tests for dbcommands

2013-07-07 Thread Robins Tharakan
On 26 June 2013 01:55, Robins Tharakan thara...@gmail.com wrote:


 Code coverage improved from 36% to 68%.


Updated patch:
- Renamed ROLEs as per Robert's feedback (prepend regress_xxx)
- Added test to serial_schedule (missed out earlier).

--
Robins Tharakan


regress_db_v5.patch
Description: Binary data



Re: [HACKERS] Add regression tests for ROLE (USER)

2013-07-07 Thread Tom Lane
Fabien COELHO coe...@cri.ensmp.fr writes:
 Generally, I think that the tests which return a syntax error are of 
 limited value and should probably be dropped.

 I think that it is not that simple: there is good value in checking that the
 syntax error message conveys useful information to the user, and that
 changes to the parser rules do not degrade good-quality error messages.

It's good to check those things when a feature is implemented.  However,
once it's done, the odds of the bison parser breaking are very low.
Thus, the benefit of testing that over again thousands of times a day
is pretty tiny.

 Moreover, the cost of such tests in time must be quite minimal.

I'm not convinced (see above) and in any case the benefit is even more
minimal.

(Note that semantic errors, as opposed to syntax errors, are a different
question.)

regards, tom lane




Re: [HACKERS] Add regression tests for ROLE (USER)

2013-07-07 Thread Andres Freund
On 2013-07-07 11:11:49 -0400, Tom Lane wrote:
 Fabien COELHO coe...@cri.ensmp.fr writes:
  Generally, I think that the tests which return a syntax error are of 
  limited value and should probably be dropped.
 
  I think that it is not that simple: there is good value in checking that the
  syntax error message conveys useful information to the user, and that
  changes to the parser rules do not degrade good-quality error messages.
 
 It's good to check those things when a feature is implemented.  However,
 once it's done, the odds of the bison parser breaking are very low.
 Thus, the benefit of testing that over again thousands of times a day
 is pretty tiny.

There has been quite some talk about simplifying the grammar/scanner
though; if somebody starts to work on that, *good* tests on syntax errors
might actually be rather worthwhile. IMO there's a danger of reducing
the specificity of error messages when doing so.
Granted, that probably mostly holds true for things actually dealing
with expressions...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] planner not choosing fastest estimated plan

2013-07-07 Thread Tom Lane
Jeff Janes jeff.ja...@gmail.com writes:
 I have a weird case where the planner doesn't choose the plan that it
 itself believes to be the fastest plan.

I poked into this a bit and found where things are going wrong.  The
ideal plan for this case involves a nestloop with a parameterized inner
scan over an inheritance tree (an append relation).  The Path for that
scan has to be built by set_append_rel_pathlist(), around lines 810-860
of allpaths.c in HEAD.  And what that code is doing is taking the
cheapest path for each appendrel member, then reparameterizing it
if necessary, which means modifying it to include enforcement of any
available parameterized join quals that it wasn't using already.
In this case what we've got for each child is a simple seqscan path
(enforcing no quals) and a bitmap indexscan path that uses the
parameterized joinqual perlupper(a.val1)=perlupper(b.val1).  Ordinarily
the bitmap path would be the cheapest and everything would be fine.
However, Jeff's example assigns a very high cost to the UDF, and the
bitmap scan's cost estimate includes (correctly) one evaluation of the
UDF.  With a high enough UDF cost, the bare seqscan is cheaper, and so
it gets picked.  Now, once we reparameterize the seqscan, which in this
case amounts to having it evaluate the parameterized joinqual at every
row, it's hugely expensive; so we end up with a very expensive
parameterized append path that isn't going to look attractive when join
paths are made, and the planner picks a hash or merge join instead.

So in short, the error lies in assuming that the cheapest plain path
will still be the cheapest one after reparameterization.  I think that
I did recognize that as a risk when I wrote this code, but supposed that
the possible addition of extra quals to evaluate wouldn't change the
outcome.  Jeff's example shows that actually the added quals can make a
huge difference.

The simplest correct logic would involve running through all the
available paths for the child relation, reparameterizing each one, and
only then comparing their costs.  That sounds a bit expensive though.
What we could do to ameliorate the planning cost is to still search for
the overall-cheapest child path, and if it is already parameterized the
way we want (which will frequently be the case, I think), then we're
done.  Only if it requires reparameterization do we have to go through
the more complicated process.
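The trap can be shown with a toy cost model (numbers invented for illustration; the point is only that the cheapest path before reparameterization need not be cheapest after the qual-evaluation cost is added):

```python
# Toy model: the cheapest plain child path need not stay cheapest
# once the parameterized join qual is added to it.

rows = 10_000
udf_cost = 100.0   # expensive function in the join qual, as in Jeff's example

paths = {
    # name: (base_cost, qual evaluations needed after reparameterization)
    "seqscan":     (100.0, rows),  # must evaluate the qual at every row
    "bitmap_scan": (500.0, 1),     # already uses the qual via the index
}

def reparam_cost(base, qual_evals):
    return base + udf_cost * qual_evals

cheapest_before = min(paths, key=lambda p: paths[p][0])
cheapest_after  = min(paths, key=lambda p: reparam_cost(*paths[p]))
print(cheapest_before, cheapest_after)
```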

Thoughts, better ideas?

regards, tom lane




Re: [HACKERS] Add tests for LOCK TABLE

2013-07-07 Thread Robins Tharakan
On 23 May 2013 18:19, Robins Tharakan thara...@gmail.com wrote:

 Please find attached a patch to take code-coverage of LOCK TABLE (
 src/backend/commands/lockcmds.c) from 57% to 84%.


Updated the patch:
- Updated ROLEs as per Robert's feedback
- Added test to serial_schedule.

--
Robins Tharakan


regress_lock_v2.patch
Description: Binary data



Re: [HACKERS] strange IS NULL behaviour

2013-07-07 Thread Bruce Momjian
On Fri, Jul  5, 2013 at 11:03:56AM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Thu, Jul  4, 2013 at 04:29:20PM -0400, Tom Lane wrote:
  No, it isn't, or at least it's far from the only place.  If we're going
  to change this, we would also want to change the behavior of tests on
  RECORD values, which is something that would have to happen at runtime.
 
  I checked RECORD and that behaves with recursion:
 
 Apparently you don't even understand the problem.  All of these examples
 you're showing are constants.  Try something like
 
   declare r record;
   ...
   select ... into r ...
   if (r is null) ...

OK, I created the following test on git head (without my patch), and the
results look correct:

DO LANGUAGE plpgsql $$
DECLARE
r RECORD;
BEGIN

DROP TABLE IF EXISTS test;
CREATE TABLE test (x INT, y INT);

INSERT INTO test VALUES (1, NULL), (NULL, 1), (NULL, NULL);
FOR r IN SELECT * FROM test
LOOP
IF (r IS NULL)
THEN RAISE NOTICE 'true';
ELSE RAISE NOTICE 'false';
END IF;
END LOOP;
END;
$$;

NOTICE:  false
NOTICE:  false
NOTICE:  true

Am I missing something?
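As a side note, the rule these NOTICEs demonstrate -- a row value IS NULL only when every field is NULL -- can be sketched outside the server; this is a hedged illustration of the SQL-standard semantics, not PostgreSQL's implementation:

```python
# Sketch of SQL's row-wise IS NULL / IS NOT NULL semantics; None stands
# in for SQL NULL.  This mirrors the NOTICE output above.

def row_is_null(fields):
    # ROW(...) IS NULL is true iff every field is NULL
    return all(f is None for f in fields)

def row_is_not_null(fields):
    # IS NOT NULL is true iff every field is non-NULL; for rows this is
    # *not* the simple negation of IS NULL
    return all(f is not None for f in fields)

for r in [(1, None), (None, 1), (None, None)]:
    print(row_is_null(r))
# prints: False, False, True
```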

Is this an example of NOT NULL constraints not testing NULLs?

CREATE TABLE test3(x INT, y INT);
CREATE TABLE test5(z test3 NOT NULL);

INSERT INTO test5 VALUES (ROW(NULL, NULL));

SELECT * FROM test5;
  z
-
 (,)

Looks like I have to modify ExecEvalNullTest().  If I fix this, is it
going to cause problems with pg_upgraded databases now having values
that are no longer validated by the NOT NULL constraint?  

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Add regression tests for SET xxx

2013-07-07 Thread Robins Tharakan
On 18 June 2013 05:01, Szymon Guz mabew...@gmail.com wrote:

 I've checked the patch. Applies cleanly. Tests pass this time :)


Updated the patch:
- Updated ROLE names as per Robert's feedback (regress_xxx)
- Added test to serial_schedule

--
Robins Tharakan


regress_variable_v5.patch
Description: Binary data



Re: [HACKERS] strange IS NULL behaviour

2013-07-07 Thread Bruce Momjian
On Sun, Jul  7, 2013 at 01:04:05PM -0400, Bruce Momjian wrote:
 Looks like I have to modify ExecEvalNullTest().

Oops, I mean ExecConstraints().  This could be tricky.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-07 Thread Andres Freund
On 2013-06-28 21:47:47 +0200, Andres Freund wrote:
 So, from what I gather there's a slight leaning towards *not* storing
 the relation's oid in the WAL. Which means the concerns about the
 uniqueness issues with the syscaches need to be addressed. So far I know
 of three solutions:
 1) develop a custom caching/mapping module
 2) Make sure InvalidOid's (the only possible duplicate) can't end up in the
syscache by adding a hook that prevents that on the catcache level
 3) Make sure that there can't be any duplicates by storing the oid of
the relation in a mapped relations relfilenode

So, here's 4 patches:
1) add RelationMapFilenodeToOid()
2) Add pg_class index on (reltablespace, relfilenode)
3a) Add custom cache that maps from filenode to oid
3b) Add catcache 'filter' that ensures the cache stays unique and use
that for the mapping
4) Add pg_relation_by_filenode() and use it in a regression test

3b) adds an optional 'filter' attribute to struct cachedesc in
syscache.c which is then passed to catcache.c. If it's present,
catcache.c uses it - after checking for a match in the cache - to
check whether the queried-for value should end up in the cache at
all. If not, it stores a whiteout entry, as is currently already done
for nonexistent entries.
It also reorders some catcache.h struct attributes to make sure
we're not growing them. Might make sense to apply that
independently, those are rather heavily used.
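The 'filter' mechanism of 3b) can be sketched roughly as follows. This is a hedged illustration only: the names (FilteredCache, filter_fn, MISS) are invented for this example, and the real mechanism lives in catcache.c/syscache.c.

```python
# Illustrative sketch of a lookup cache with a "filter" predicate that
# decides whether a key may be cached; rejected keys get a negative
# ("whiteout") entry, as is already done for nonexistent entries.

MISS = object()  # negative entry: definitely not cacheable

class FilteredCache:
    def __init__(self, lookup_fn, filter_fn):
        self.lookup_fn = lookup_fn  # expensive lookup, e.g. an index scan
        self.filter_fn = filter_fn  # may entries for this key be cached?
        self.entries = {}

    def get(self, key):
        if key in self.entries:
            hit = self.entries[key]
            return None if hit is MISS else hit
        value = self.lookup_fn(key)
        # After checking for a cache match, consult the filter: keys that
        # could yield duplicates are recorded as whiteout entries and
        # reported as not found.
        if value is None or not self.filter_fn(key):
            self.entries[key] = MISS
            return None
        self.entries[key] = value
        return value

# Example: refuse to cache key 0, the analogue of an InvalidOid filenode
# that several mapped relations could share.
cache = FilteredCache(lookup_fn=lambda k: k * 10 if k else None,
                      filter_fn=lambda k: k != 0)
print(cache.get(3))  # 30, served from the underlying lookup and cached
print(cache.get(0))  # None; a whiteout entry keeps 0 out of the cache
```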

I slightly prefer 3b) because it's smaller; what are your opinions?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From cbedebac6a8d449a5127befe1525230c2132e06f Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Tue, 11 Jun 2013 23:25:26 +0200
Subject: [PATCH] wal_decoding: Add RelationMapFilenodeToOid function to
 relmapper.c

This function maps (reltablespace, relfilenode) to the table oid and thus acts
as a reverse of RelationMapOidToFilenode.
---
 src/backend/utils/cache/relmapper.c | 53 +
 src/include/utils/relmapper.h   |  2 ++
 2 files changed, 55 insertions(+)

diff --git a/src/backend/utils/cache/relmapper.c b/src/backend/utils/cache/relmapper.c
index 2c7d9f3..039aa29 100644
--- a/src/backend/utils/cache/relmapper.c
+++ b/src/backend/utils/cache/relmapper.c
@@ -180,6 +180,59 @@ RelationMapOidToFilenode(Oid relationId, bool shared)
 	return InvalidOid;
 }
 
+/* RelationMapFilenodeToOid
+ *
+ * Do the reverse of the normal direction of mapping done in
+ * RelationMapOidToFilenode.
+ *
+ * This is not supposed to be used during normal running but rather for
+ * informational purposes when looking at the filesystem or the xlog.
+ *
+ * Returns InvalidOid if the OID is not known, which can easily happen if the
+ * filenode is not of a relation that is nailed or shared, or if it simply
+ * doesn't exist anywhere.
+ */
+Oid
+RelationMapFilenodeToOid(Oid filenode, bool shared)
+{
+	const RelMapFile *map;
+	int32		i;
+
+	/* If there are active updates, believe those over the main maps */
+	if (shared)
+	{
+		map = active_shared_updates;
+		for (i = 0; i < map->num_mappings; i++)
+		{
+			if (filenode == map->mappings[i].mapfilenode)
+				return map->mappings[i].mapoid;
+		}
+		map = shared_map;
+		for (i = 0; i < map->num_mappings; i++)
+		{
+			if (filenode == map->mappings[i].mapfilenode)
+				return map->mappings[i].mapoid;
+		}
+	}
+	else
+	{
+		map = active_local_updates;
+		for (i = 0; i < map->num_mappings; i++)
+		{
+			if (filenode == map->mappings[i].mapfilenode)
+				return map->mappings[i].mapoid;
+		}
+		map = local_map;
+		for (i = 0; i < map->num_mappings; i++)
+		{
+			if (filenode == map->mappings[i].mapfilenode)
+				return map->mappings[i].mapoid;
+		}
+	}
+
+	return InvalidOid;
+}
+
 /*
  * RelationMapUpdateMap
  *
diff --git a/src/include/utils/relmapper.h b/src/include/utils/relmapper.h
index 8f0b438..071bc98 100644
--- a/src/include/utils/relmapper.h
+++ b/src/include/utils/relmapper.h
@@ -36,6 +36,8 @@ typedef struct xl_relmap_update
 
 extern Oid	RelationMapOidToFilenode(Oid relationId, bool shared);
 
+extern Oid	RelationMapFilenodeToOid(Oid filenode, bool shared);
+
 extern void RelationMapUpdateMap(Oid relationId, Oid fileNode, bool shared,
 	 bool immediate);
 
-- 
1.8.3.251.g1462b67

From fc6022fcc9ba8394069870b0b2b0e32a4a648c70 Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Sun, 7 Jul 2013 18:38:56 +0200
Subject: [PATCH] Add index on pg_class(reltablespace, relfilenode)

Used by RelidByRelfilenode either via relfilenodemap.c or via a special
syscache.

Needs a CATVERSION bump.
---
 src/include/catalog/indexing.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 19268fb..4860e98 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -106,6 +106,8 @@ 

Re: [HACKERS] Review: extension template

2013-07-07 Thread Markus Wanner
On 07/07/2013 02:55 PM, Markus Wanner wrote:
 If you want to just upload extensions to a database via libpq..

Let's rephrase this in a (hopefully) more constructive way: I get the
impression you are trying to satisfy many different needs. Way more than
you need to scratch your own itch. To the point that I had trouble
figuring out what exactly the goal of your patch is.

My advice would be: Be brave! Dare to put down any request for
templates (including mine) and go for the simplest possible
implementation that allows you to create extensions via libpq. (Provided
that really is the itch you want to scratch. I'm still not quite sure I
got that right.)

As it stands, I get the impression the patch is trying to sell me a
feature that it doesn't really provide. If you stripped the template
stuff, including the CREATE TEMPLATE FOR EXTENSION command, and instead
sold me this patch as a simple way to create an extension via libpq,
i.e. something closer to CREATE EXTENSION AS .. , I would immediately
buy that. Currently, while allowing an upload, it seems far from simple,
but adds quite a bit of unwanted complexity. If all I want is to upload
code for an extension via libpq, I don't want to deal with nifty
distinctions between templates and extensions.

Just my opinion, though. Maybe I'm still missing something.

Regards

Markus Wanner




Re: [HACKERS] [PATCH] Add transforms feature

2013-07-07 Thread Peter Eisentraut
On Thu, 2013-07-04 at 02:18 -0700, Hitoshi Harada wrote:
 as someone suggested in the previous thread, it might be a variant of
 CAST.  CREATE CAST (hstore AS plpython2u) ?  Or CREATE LANGUAGE TRANSFORM
 might sound better.  In either case, I think we are missing the discussion
 on the standard overloading.

LANGUAGE isn't a concept limited to the server side in the SQL standard.
I could go with something like CREATE SERVER TRANSFORM.

 - dependency loading issue
 Although most of the use cases are via CREATE EXTENSION, it is not great to
 make users load the dependency manually.  Is it possible to load
 hstore.so and plpython2u.so from _PG_init of hstore_plpython2u?  Because
 the author of transform should certainly know the name of shared library in
 the database installation, writing down the shared library names in the
 init function sounds reasonable.  Or do we still need to consider cases
 where plpython2u.so is renamed to something else?

I don't like my solution very much either, but I think I like this one
even less.  I think the identity of the shared library for the hstore
type is the business of the hstore extension, and other extensions
shouldn't mess with it.  The interfaces exposed by the hstore extension
are the types and functions, and that's what we are allowed to use.  If
that's not enough, we need to expose more interfaces.

 - function types
 Although I read the suggestion to use internal type as the argument of
 from_sql function, I guess it exposes another security risk.  Since we
 don't know what SQL type can safely be passed to the from_sql function, we
 cannot check if the function is the right one at the time of CREATE
 TRANSFORM.  Non-super user can add his user defined type and own it, and
 create a transform with a from_sql function that takes internal and
 crashes with this user-defined type.  A possible compromise is to let only
 superusers create transforms, or come up with a nice way to allow
 func(sql_type) - internal signature.

Good point.  My original patch allowed func(sql_type) - internal, but I
took that out because people had security concerns.

I'd be OK with restricting transform creation to superusers in the first
cut.

 - create or replace causes inconsistency
 I tried:
   * create transform python to hstore (one way transform)
   * create function f(h hstore) language python
   * create or replace transform hstore to python and python to hstore (both
 ways)
   * call f() causes error, since it misses hstore to python transform. It
 is probably looking at the old definition

What error exactly?  Can you show the full test case?

There might be some caching going on.

 - create func - create transform is not prohibited
 I saw your post in the previous discussion:
 
   * I don't think recording dependencies on transforms used when creating
   functions is a good idea as the transform might get created after the
   functions already exists. That seems to be a pretty confusing behaviour.
 
  We need the dependencies, because otherwise dropping a transform would
  break or silently alter the behavior of functions that depend on it.
  That sounds like my worst nightmare, thinking of some applications that
  would be affected by that.  But your point is a good one.  I think this
  could be addressed by prohibiting the creation of a transform that
  affects functions that already exist.
 
 However, I don't see this prohibition of CREATE TRANSFORM when such a
 function already exists.  Are you not planning to address this issue?

I had planned to implement that, but talking to some people most didn't
think it was useful or desirable.  It's still open for debate.






Re: [HACKERS] Add regression tests for DISCARD

2013-07-07 Thread Jeff Janes
On Sat, Jul 6, 2013 at 8:49 PM, Robins Tharakan thara...@gmail.com wrote:
 Thanks Fabrizio.

 Although parallel_schedule was a miss for this specific patch, I seem to
 have missed out serial_schedule completely (in all patches), so thanks
 for pointing this out. Subsequently Robert too noticed the miss at the
 serial_schedule end.

Why does serial_schedule even exist?  Couldn't we just run the
parallel schedule serially, like what happens when MAX_CONNECTIONS=1?


Cheers,

Jeff




Re: [HACKERS] [PATCH] Add transforms feature

2013-07-07 Thread Peter Eisentraut
On Fri, 2013-07-05 at 12:04 -0700, Josh Berkus wrote:
 (a) transforms aren't like other contribs, in that they are dependent on
 other contribs before you install them.

That doesn't appear to be a reason for creating subdirectories.

 (b) we can expect maybe a dozen to 18 of them in core based on the data
 types there, and I hate to clutter up /contrib, and

Well, that's a matter of opinion.  I'd be more happy with 250 contribs
all on the same level versus a bunch of subdirectories structured based
on personal preferences.

But hey, we disagreed on config.sgml for similar reasons, IIRC. ;-)

 (c) I'd like to do a future feature which supports install all
 transforms functionality, which would be helped by having them in their
 own directory.

Installing all transforms by itself is not a sensible operation, because
you only want the transforms for the types and languages that you
actually use or have previously selected for installation.

I understand that this situation is unsatisfactory.  But I don't think
any dependency or package management system has solved it.  For example,
can any package management system make the following decision: user
installs PHP, user installs PostgreSQL client => install php-pgsql
automatically.  I don't think so.  The only solutions are making PHP
dependent on PostgreSQL (or vice versa), or having the user install
php-pgsql explicitly.






Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-07 Thread Peter Eisentraut
On Sun, 2013-07-07 at 02:01 -0300, Claudio Freire wrote:
 You really want to test more than just the str. The patch contains
 casts to int and float, which is something existing PLPython code will
 be doing, so it's good to test it still can be done with decimal.

Let's remember that we are regression testing PL/Python here, not
Python.  The functionality of PL/Python is to convert a numeric to
Decimal, and that's what we are testing.  What Python can or cannot do
with that is not our job to test.

 Existing python code will also expect the number to be a float, and
 will try to operate against other floats. That's not going to work
 anymore, that has to go into release notes.

Right, this will be listed in the release notes under upgrade caveats.







Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-07 Thread Peter Eisentraut
On Sun, 2013-07-07 at 17:21 +0200, Szymon Guz wrote:
 I think that these tests are much better, so they should go into
 trunk.
 As for Python 2.5 I think we could modify the code and makefile (with
 additional documentation info) so the decimal code wouldn't be
 compiled
 with python 2.5. 

I'd welcome updated tests, if you want to work on that.  But they would
need to work uniformly for Python 2.4 through 3.3.






Re: [HACKERS] [PATCH] Add session_preload_libraries configuration parameter

2013-07-07 Thread Peter Eisentraut
On Fri, 2013-06-21 at 11:51 +0200, Dimitri Fontaine wrote:
 I found it strange that those two paras read differently for saying
 the
 same thing?
 
  +preloaded at connection start.  This parameter cannot be
 changed after
  +the start of a particular session.  If a specified library
 is not
 
  +The parameter value only takes effect at the start of the
 connection.
  +Subsequent changes have no effect.  If a specified library
 is not

The first says that changing the parameter after connection start will
result in an error.  The second says that you can change the parameter
after connection start, but it won't have any effect.

This is just a result of the various GUC context mechanisms.






Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-07 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 3b) Add catcache 'filter' that ensures the cache stays unique and use
 that for the mapping

 I slightly prefer 3b) because it's smaller, what's your opinions?

This is just another variation on the theme of kluging the catcache to
do something it shouldn't.  You're still building a catcache on a
non-unique index, and that is going to lead to trouble.

(I'm a bit surprised that there is no Assert in catcache.c checking
that the index nominated to support a catcache is unique ...)

regards, tom lane




Re: [HACKERS] Add regression tests for DISCARD

2013-07-07 Thread Robins Tharakan
On 7 July 2013 14:08, Jeff Janes jeff.ja...@gmail.com wrote:

 Why does serial_schedule even exist?  Couldn't we just run the
 parallel schedule serially, like what happens when MAX_CONNECTIONS=1?


Well, I am sure it works that way, without errors.
The 'why' still eludes me though; all I know is that it's required for
this submission.

--
Robins Tharakan


Re: [HACKERS] Review: extension template

2013-07-07 Thread Dimitri Fontaine
Markus Wanner mar...@bluegap.ch writes:
 Oh, I just realize that pg_extension_{template,control,uptmpl} are not
 SHARED catalogs, but you need to install the template per-database and
 then need to enable it - per-database *again*. Why is that?

Because the current model is not serving us well enough, with a single
module version per major version of PostgreSQL. Meaning for all the
clusters on the server, and all the databases in them.

We want to be able to have postgis 1.5 and 2.x installed in two
different databases in the same cluster, don't we?

Well, with the current patch we still can't, because of the dynamically
shared object side of things, but that's not a good reason to impose the
same limitation on the template idea.

   Currently, while allowing an upload, it seems far from simple,
 but adds quite a bit of unwanted complexity. If all I want is to upload
 code for an extension via libpq, I don't want to deal with nifty
 distinctions between templates and extensions.

 Just my opinion, though. Maybe I'm still missing something.

Yes: dump & restore.

After playing around with several ideas around that in the past two
development cycles, the community consensus clearly is that extensions
are *NEVER* going to be part of your dump scripts.

Now when using templates you have no other source to install the
extensions from at pg_restore time, given what I just said.

The whole goal of the template idea is to offer a way to dump and
restore the data you need for CREATE EXTENSION to just work at restore
time, even when you sent the data over the wire.

Current extension are managed on the file system, the contract is that
it is the user's job to maintain and ship that, externally to PostgreSQL
responsibilities. All that PostgreSQL knows about is to issue the CREATE
EXTENSION command at pg_restore time.

With templates or in-line extensions, the contract is that the user asks
PostgreSQL to manage its extensions in the same way it does for the
other objects on the system. The design we found to address that is
called Extension Templates and is implemented in the current patch.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-07 Thread Szymon Guz
On 7 July 2013 21:35, Peter Eisentraut pete...@gmx.net wrote:

 On Sun, 2013-07-07 at 17:21 +0200, Szymon Guz wrote:
  I think that these tests are much better, so they should go into
  trunk.
  As for Python 2.5 I think we could modify the code and makefile (with
  additional documentation info) so the decimal code wouldn't be
  compiled
  with python 2.5.

 I'd welcome updated tests, if you want to work on that.  But they would
 need to work uniformly for Python 2.4 through 3.3.



Well... I don't know what to do and which solution is better.

This patch works, but the tests are not working on some old machines.

This patch works, but changes the plpython functions, so I assume that it
will break some existing functions. I noticed yesterday that
you cannot run code like `Decimal(10) - float(10)`. So if a function
accepts a numeric parameter 'x', which currently is converted to float,
then code like `x - float(10)` works now but will not work after
this change.
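The incompatibility described above is easy to reproduce in plain Python (outside PL/Python); Decimal refuses implicit arithmetic with float:

```python
from decimal import Decimal

# Code that used to receive a numeric argument as float breaks once the
# argument arrives as Decimal:
try:
    Decimal(10) - 10.0
except TypeError as exc:
    print("TypeError:", exc)

# Workarounds require an explicit conversion in the function body:
print(Decimal(10) - Decimal("10.0"))  # Decimal arithmetic, prints 0.0
print(float(Decimal(10)) - 10.0)      # float arithmetic, prints 0.0
```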

Introducing decimal.Decimal also breaks Python versions earlier than 2.4, as
the decimal module was introduced in 2.4. We could use the old conversion
for versions before 2.4, and the new one for 2.4 and newer. Do we want it to
work like this? Do we want different behaviour for different Python
versions? I'm not sure if anyone still uses Python 2.3, but I've already
realised that the patch breaks all functions for 2.3 which use a numeric
argument.

I assume that the patch will be rolled back if the tests don't work on
some machines, right?

szymon


Re: [HACKERS] changeset generation v5-01 - Patches git tree

2013-07-07 Thread Andres Freund
On 2013-07-07 15:43:17 -0400, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  3b) Add catcache 'filter' that ensures the cache stays unique and use
  that for the mapping
 
  I slightly prefer 3b) because it's smaller, what's your opinions?
 
 This is just another variation on the theme of kluging the catcache to
 do something it shouldn't.  You're still building a catcache on a
 non-unique index, and that is going to lead to trouble.

I don't think the lurking dangers really are present. The index
essentially *is* unique since we filter away anything non-unique. The
catcache code hardly can be confused by tuples it never sees. That would
even work if we started preloading catcaches by doing scans of the
entire underlying relation or by caching all of a page when reading one
of its tuples.

I can definitely see that there are aesthetic reasons against doing
3b); that's why I've also done 3a). So I'll chalk you up as voting for
that...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PATCH] Add session_preload_libraries configuration parameter

2013-07-07 Thread Dimitri Fontaine
Peter Eisentraut pete...@gmx.net writes:
 This is like shared_preload_libraries except that it takes effect at
 backend start and can be changed without a full postmaster restart.  It
 is like local_preload_libraries except that it is still only settable by
 a superuser.  This can be a better way to load modules such as
 auto_explain.

I finally took enough time to actually understand (I think) your
proposal. The key point seems to be around ALTER ROLE SET and similar
dynamic settings of local_preload_libraries.

Would it make sense to review the local_preload_libraries semantics
instead? It looks like session_preload_libraries only adds flexibility,
and that current usages of local_preload_libraries would be served as
well by the new setting.

Did you figure out a case where a local_preload_libraries setting would
stop working if made into a session_preload_libraries setting?

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Review: extension template

2013-07-07 Thread Markus Wanner
Hello Dimitri,

On 07/07/2013 09:51 PM, Dimitri Fontaine wrote:
 Markus Wanner mar...@bluegap.ch writes:
 Oh, I just realize that pg_extension_{template,control,uptmpl} are not
 SHARED catalogs, but you need to install the template per-database and
 then need to enable it - per-database *again*. Why is that?
 
 Because the current model is not serving us well enough, with a single
 module version per major version of PostgreSQL. Meaning for all the
 clusters on the server, and all the databases in them.

That's not an excuse for letting the user duplicate the effort of
installing templates for every version of his extension in every database.

 We want to be able to have postgis 1.5 and 2.x installed in two
 different databases in the same cluster, don't we?

The extension, yes. The template versions, no. There's utterly no point
in having different 2.0 versions of the same extension in different
databases.

 Well the current patch we still can't because of the dynamically shared
 object side of things, but that's not a good reason to impose the same
 limitation on to the template idea.

Without a dynamically shared object, you can well have different
versions of an extension at work in different databases already.

 After playing around with several ideas around that in the past two
 development cycles, the community consensus clearly is that extensions
 are *NEVER* going to be part of your dump scripts.

Sounds strange to me. If you want Postgres to manage extensions, it
needs the ability to back up and restore them. Otherwise, it doesn't
really ... well ... manage them.

 Now when using templates you have no other source to install the
 extensions from at pg_restore time, given what I just said.
 
 The whole goal of the template idea is to offer a way to dump and
 restore the data you need for CREATE EXTENSION to just work at restore
 time, even when you sent the data over the wire.

Which in turn violates the above cited community consensus, then. You
lost me here. What's your point? I thought the goal of your patch was
the ability to upload an extension via libpq.

How does that address my concerns about usability and understandability
of how these things work? Why the strange dependencies between templates
and extensions? Or the ability to rename the template, but not the
extension - while still having the latter depend on the former?

These things are what I'm objecting to. And I don't believe it
necessarily needs to be exactly that way for dump and restore to work.
Quite the opposite, in fact. Simpler design usually means simpler backup
and restore procedures.

 Current extension are managed on the file system, the contract is that
 it is the user's job to maintain and ship that, externally to PostgreSQL
 responsibilities. All that PostgreSQL knows about is to issue the CREATE
 EXTENSION command at pg_restore time.
 
 With templates or in-line extensions, the contract is that the user asks
 PostgreSQL to manage its extensions in the same way it does for the
 other objects on the system.

Understood.

 The design we found to address that is
 called Extension Templates and is implemented in the current patch.

I placed my concerns with the proposed implementation. It's certainly
not the only way Postgres could manage its extensions. And I still
hope we can come up with something that's simpler to use and easier to
understand.

However, I'm not a committer nor have I written code for this. I did my
review and proposed two possible (opposite) directions for clean up and
simplification of the design. I would now like others to chime in.

Regards

Markus Wanner




Re: [HACKERS] [COMMITTERS] pgsql: PL/Python: Convert numeric to Decimal

2013-07-07 Thread Claudio Freire
On Sun, Jul 7, 2013 at 4:28 PM, Peter Eisentraut pete...@gmx.net wrote:
 On Sun, 2013-07-07 at 02:01 -0300, Claudio Freire wrote:
 You really want to test more than just the str. The patch contains
 casts to int and float, which is something existing PLPython code will
 be doing, so it's good to test it still can be done with decimal.

 Let's remember that we are regression testing PL/Python here, not
 Python.  The functionality of PL/Python is to convert a numeric to
 Decimal, and that's what we are testing.  What Python can or cannot do
 with that is not our job to test.

I was just proposing to test behavior likely to be expected by
PL/Python functions, but you probably know better than I.




[HACKERS] Re: [Review] Add SPI_gettypmod() to return a field's typemod from a TupleDesc

2013-07-07 Thread Noah Misch
I find the SPI interface support functions quaint.  They're thin wrappers,
of ancient origin, around standard backend coding patterns.  They have the
anti-feature of communicating certain programming errors through return
value/SPI_result rather than elog()/Assert().  The chance that we could
substantially refactor the underlying primary backend APIs and data structures
while keeping these SPI wrappers unchanged seems slight.

On Tue, Jun 25, 2013 at 07:19:58PM -0700, Mark Wong wrote:
 +functionSPI_gettypmod/function returns the type-specific data 
 supplied
 +at table creation time.  For example: the max length of a varchar field.

SPI callers typically have no business interpreting the value, that being the
distinct purview of each type implementation.  The varchar type does set its
typmod to 4 + max length, but other code should not know that.  SPI callers
can use this to convey a typmod for later use, though.
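As a hedged illustration of that convention (declared length plus 4 bytes of varlena header; the VARHDRSZ constant and helper name here are for this example only, and, as noted above, other code should not rely on this encoding):

```python
VARHDRSZ = 4  # varlena header size assumed by the varchar typmod convention

def varchar_declared_length(atttypmod):
    # For varchar(n), atttypmod is stored as n + VARHDRSZ; -1 means no
    # length limit was declared.  Only the type's own code should
    # interpret this.
    return atttypmod - VARHDRSZ if atttypmod >= 0 else None

print(varchar_declared_length(36))  # 32, i.e. varchar(32)
print(varchar_declared_length(-1))  # None: unconstrained
```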

 +   para
 +The type-specific data supplied at table creation time of the specified
 +column or symbolInvalidOid/symbol on error.  On error,
 +varnameSPI_result/varname is set to
 +symbolSPI_ERROR_NOATTRIBUTE/symbol.
 +   /para

You have it returning -1, not InvalidOid.  Per Amit's review last year, I'm
wary of returning -1 in the error case.  But I suspect most callers will, like
the two callers you add, make a point of never passing an invalid argument and
then not bother checking for error.  So, no big deal.


I mildly recommend we reject this patch as such, remove the TODO item, remove
the XXX comments this patch removes, and plan not to add more trivial SPI
wrappers.  If consensus goes otherwise, I think we should at least introduce
SPI_getcollation() at the same time.  Code that needs to transfer one of them
very often needs to transfer the other.  Having API coverage for just one
makes it easier for hackers to miss that.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




[HACKERS] WAL

2013-07-07 Thread mohsen soodkhah mohammadi
Hello.
I am reading about WAL and XLOG records.
I am a beginner with them.
Can you point me to some documentation?


Re: [HACKERS] Re: [Review] Add SPI_gettypmod() to return a field's typemod from a TupleDesc

2013-07-07 Thread Peter Eisentraut
On Sun, 2013-07-07 at 20:15 -0400, Noah Misch wrote:
 I mildly recommend we reject this patch as such, remove the TODO item,
 remove the XXX comments this patch removes, and plan not to add more
 trivial SPI wrappers.  If consensus goes otherwise, I think we should at
 least introduce SPI_getcollation() at the same time.  Code that needs to
 transfer one of them very often needs to transfer the other.  Having API
 coverage for just one makes it easier for hackers to miss that.

The question is, what would one do with those values?  It's hard to see
when you would need the typmod and the collation of a result set.  There
might be cases, but are there enough to justify a special API?






Re: [HACKERS] [PATCH] Add session_preload_libraries configuration parameter

2013-07-07 Thread Peter Eisentraut
On Sun, 2013-07-07 at 22:50 +0200, Dimitri Fontaine wrote:
 Would it make sense to review the local_preload_libraries semantics
 instead? It looks like session_preload_libraries only adds flexibility,
 and that current usages of local_preload_libraries would be served as
 well by the new setting.

 Did you figure out a case where a local_preload_libraries setting would
 stop working if made into a session_preload_libraries setting?

local_preload_libraries allows unprivileged users to preload whitelisted
libraries.  session_preload_libraries allows superusers to preload any
library.  It's hard to see how to consolidate those, at least without
adding another setting that whitelists the libraries somehow, at which
point you haven't really gained anything in terms of complexity.

I don't know of any actual legitimate uses of local_preload_libraries.
I recall that the plpgsql debugger was meant to use it, but doesn't
anymore.  So it's hard to judge what to do about this, without any
actual use cases.






Re: [HACKERS] Re: [Review] Add SPI_gettypmod() to return a field's typemod from a TupleDesc

2013-07-07 Thread Noah Misch
On Sun, Jul 07, 2013 at 08:55:01PM -0400, Peter Eisentraut wrote:
 On Sun, 2013-07-07 at 20:15 -0400, Noah Misch wrote:
  I mildly recommend we reject this patch as such, remove the TODO item,
  remove the XXX comments this patch removes, and plan not to add more
  trivial SPI wrappers.  If consensus goes otherwise, I think we should
  at least introduce SPI_getcollation() at the same time.  Code that
  needs to transfer one of them very often needs to transfer the other.
  Having API coverage for just one makes it easier for hackers to miss
  that.
 
 The question is, what would one do with those values?  It's hard to see
 when you would need the typmod and the collation of a result set.  There
 might be cases, but enough to provide a special API for it?

Good point.  One of the ways PL/pgSQL uses it is to feed a result datum back
into a future query as a Param node.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: FILTER for aggregates [was Re: [HACKERS] Department of Redundancy Department: makeNode(FuncCall) division]

2013-07-07 Thread David Fetter
On Sat, Jul 06, 2013 at 11:49:21AM +0100, Dean Rasheed wrote:
 On 5 July 2013 18:23, David Fetter da...@fetter.org wrote:
  Please find attached changes based on the above.
 
 
 This looks good. The grammar changes are smaller and neater now on top
 of the makeFuncCall() patch.
 
 Overall I think this patch offers useful additional functionality, in
 compliance with the SQL spec, which should be handy to simplify
 complex grouping queries.
 
 There's a minor typo in syntax.sgml: "for each input row, each row
 matching same". --- fix attached.

That is actually correct grammar, but may not be the easiest to
translate.

 I think this is ready for committer.

Thanks :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] postgresMain() function

2013-07-07 Thread Amit Kapila
On Sunday, July 07, 2013 3:46 PM mohsen soodkhah mohammadi wrote:

 Hello.
 In postgres.c we have a switch on firstchar.  One of its cases is
 case 'Q', which is for simple queries.
 I want to know what operations the other cases are for.
 And: is the 'Q' case used for DDL and DML commands?

SQL commands can be executed using two sub-protocols: 'simple query' and
'extended query'.

In the 'simple query' protocol, the frontend just sends a textual query
string, which is parsed and immediately executed by the backend.
In the 'extended query' protocol, processing of queries is separated into
multiple steps: parsing, binding of parameter values, and execution.

The 'Q' message is used for 'simple query'; it can carry any SQL command.
'P', 'B', and 'E' are the most important messages for 'extended query';
these are used for prepared statements.

You can read the below link for detailed message flow description:
http://www.postgresql.org/docs/devel/static/protocol.html

With Regards,
Amit Kapila.





Re: [HACKERS] WAL

2013-07-07 Thread Hari Babu
 

On Saturday, July 06, 2013 10:00 AM mohsen soodkhah mohammadi wrote:
 Hello.
 I am reading about WAL and XLOG records.
 I am a beginner with them.
 Can you point me to some documentation?

Please go through the README file present in the following folder for more
details:

src/backend/access/transam/

Regards,
Hari Babu.



Re: [HACKERS] in-catalog Extension Scripts and Control parameters (templates?)

2013-07-07 Thread Jaime Casanova
On Sat, Jul 6, 2013 at 2:25 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Jaime Casanova ja...@2ndquadrant.com writes:
 not sure if you're wrong. but at the very least, you miss a
 heap_freetuple(oldtup) there, because get_catalog_object_by_oid()

 Well, oldtup can be either a syscache copy or a heap tuple. I've been
 looking at other call sites and they don't free their tuple either. So
 I'm leaving it at that for now.

 no, that code is not unchanged because function
 get_ext_ver_list_from_catalog() comes from your patch.

 Yes. Here's the relevant hunk:


No. That's get_ext_ver_list_from_files(), and that function is unchanged
(except for the name). I'm talking about get_ext_ver_list_from_catalog(),
which is a different function.

--
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157

