[HACKERS] Equivalence Rules

2014-03-02 Thread Ali Piroozi
Hi

My question is:
Does PostgreSQL implement the equivalence rules (those listed in the
email's attachment)?
Which function or part of the source code (in PostgreSQL) implements
the equivalence rules?
I think this should be implemented in the query optimization part of
PostgreSQL, but I don't know which rules, or where.
I want to use that (function or part of the source code) to produce
equivalent relational algebra expressions (based on the equivalence
rules in the attachment) for a given SQL query (relational algebra
expression).

Thanks


EquivalenceRules.pdf
Description: Adobe PDF document



Re: [HACKERS] proposal: new long psql parameter --on-error-stop

2014-03-02 Thread Pavel Stehule
2014-03-01 23:53 GMT+01:00 Fabrízio de Royes Mello fabriziome...@gmail.com:


 On Sat, Mar 1, 2014 at 5:37 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:
 
  Hello
 
  here is a prototype:
 
  bash-4.1$ /usr/local/pgsql/bin/psql --help-variables
  List of some variables (options) for use from the command line.
  The complete list can be found in the psql section of the PostgreSQL documentation.
 
  psql variables:
  Usage:
psql --set=NAME=VALUE
or \set NAME VALUE in interactive mode
 
    AUTOCOMMIT when on, a successful SQL command is automatically
 committed
    COMP_KEYWORD_CASE  determines which letter case to use when completing
 an SQL key word
    ECHO   all input lines can be written to standard
 output
    ECHO_HIDDEN  display queries for internal commands (same as -E
 option)
    FETCH_COUNT  how many rows to fetch per page (default 0 =
 unlimited)
    HISTFILE   file name used to store the history list
    HISTSIZE   the number of commands to store in the command
 history
    ON_ERROR_ROLLBACK  when on, a ROLLBACK is issued automatically on error
    ON_ERROR_STOP  when set, batch execution stops immediately
 after an error
    VERBOSITY  controls verbosity of error reports [default,
 verbose, terse]
 
  Printing options:
  Usage:
psql --pset=NAME[=VALUE]
or \pset NAME [VALUE] in interactive mode
 
    border border style (number)
fieldsep   specify field separator for unaligned output
fieldsep_zero  field separator in unaligned mode will be zero
format set output format [unaligned, aligned, wrapped,
 html, latex, ..]
linestyle  sets the border line drawing style [ascii,
 old-ascii, unicode]
null   sets the string to be printed in place of a null
 value
    pager  controls whether the external pager program is
 used
recordsep  specifies the record (line) separator to use in
 unaligned output format
    recordsep_zero record separator in unaligned output format is a
 zero byte
title  sets the table title for any subsequently printed
 tables
    tuples_only  in tuples-only mode, only actual table data is shown
 
  Environment options:
  Usage:
NAME=VALUE, [NAME=VALUE] psql ...
or \setenv NAME [VALUE] in interactive mode
 
COLUMNSnumber of columns for wrapped format
PAGER  used pager
PGHOST same as the host connection parameter
PGDATABASE same as the dbname connection parameter
PGUSER same as the user connection parameter
    PGPASSWORD sets the connection password
PSQL_EDITOR, EDITOR, VISUAL  editor used by \e \ef commands
    PSQL_EDITOR_LINE_NUMBER_ARG  how to specify a line number when invoking the editor
PSQL_HISTORY   alternative location for the command history file
PSQL_RCalternative location of the user's .psqlrc file
SHELL  command executed by the \! command
TMPDIR directory for storing temporary files
 
  For more information consult the psql section in the PostgreSQL
  documentation.
 

 The patch is ok (it applies to master without errors).

 Maybe we should also show the possible values for each variable/option.


Not all options are writeable, and a too-long option list would be less
readable. This help is not meant to replace the documentation.



 Thinking more about it, it would be nice if we had the possibility to show
 help for commands too. Something like this:


This could probably be implemented as an alias, so it is not strictly
necessary - but I agree that it is an interesting and valid idea.

Regards

Pavel



 $ psql -H vacuum
 Command: VACUUM
 Description: garbage-collect and optionally analyze a database
 Syntax:
 VACUUM [ ( { FULL | FREEZE | VERBOSE | ANALYZE } [, ...] ) ] [ table_name
 [ (column_name [, ...] ) ] ]
 VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ table_name ]
 VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ table_name [ (column_name
 [, ...] ) ] ]

 $ psql --help-command=vacuum
 Command: VACUUM
 Description: garbage-collect and optionally analyze a database
 Syntax:
 VACUUM [ ( { FULL | FREEZE | VERBOSE | ANALYZE } [, ...] ) ] [ table_name
 [ (column_name [, ...] ) ] ]
 VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ table_name ]
 VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ table_name [ (column_name
 [, ...] ) ] ]

 It's only an idea that occurred to me reading this thread!

 Greetings,

 --
 Fabrízio de Royes Mello
 Consultoria/Coaching PostgreSQL
  Timbira: http://www.timbira.com.br
  Blog sobre TI: http://fabriziomello.blogspot.com
  Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
  Twitter: http://twitter.com/fabriziomello



Re: Fwd: [HACKERS] patch: make_timestamp function

2014-03-02 Thread Pavel Stehule
Hello

updated version - the precheck is very simple, and from what I tested it is
enough

Regards

Pavel




2014-02-28 15:11 GMT+01:00 Alvaro Herrera alvhe...@2ndquadrant.com:

 Pavel Stehule escribió:

  so still I prefer to allow numeric time zones.
 
  What I can:
 
  a) disallow a numeric-only timezone without a + or - prefix
 
  or
 
  b) add a + prefix to the time zone when the number is positive.
 
  I prefer @a.

 I can live with (a) too.  But I wonder if we should restrict the allowed
 tz even further, for example to say that there must always be either 2
 digits (no colon) or 4 digits, with or without a colon.

 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

commit a9acac784c64fdba144ea0ae3a817fbc8cf4fa45
Author: Pavel Stehule pavel.steh...@gooddata.com
Date:   Sun Mar 2 10:55:37 2014 +0100

fix make_timestamptz

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index ff50328..ce6d00e 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -6723,6 +6723,32 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})');
      <row>
       <entry>
        <indexterm>
+         <primary>make_interval</primary>
+        </indexterm>
+        <literal>
+         <function>
+          make_interval(<parameter>years</parameter> <type>int</type> DEFAULT 0,
+          <parameter>months</parameter> <type>int</type> DEFAULT 0,
+          <parameter>weeks</parameter> <type>int</type> DEFAULT 0,
+          <parameter>days</parameter> <type>int</type> DEFAULT 0,
+          <parameter>hours</parameter> <type>int</type> DEFAULT 0,
+          <parameter>mins</parameter> <type>int</type> DEFAULT 0,
+          <parameter>secs</parameter> <type>double precision</type> DEFAULT 0.0)
+         </function>
+        </literal>
+       </entry>
+       <entry><type>interval</type></entry>
+       <entry>
+        Create interval from years, months, weeks, days, hours, minutes and
+        seconds fields
+       </entry>
+       <entry><literal>make_interval(days := 10)</literal></entry>
+       <entry><literal>10 days</literal></entry>
+      </row>
+
+      <row>
+       <entry>
+        <indexterm>
          <primary>make_time</primary>
         </indexterm>
         <literal>
@@ -6744,6 +6770,57 @@ SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})');
      <row>
       <entry>
        <indexterm>
+         <primary>make_timestamp</primary>
+        </indexterm>
+        <literal>
+         <function>
+          make_timestamp(<parameter>year</parameter> <type>int</type>,
+          <parameter>month</parameter> <type>int</type>,
+          <parameter>day</parameter> <type>int</type>,
+          <parameter>hour</parameter> <type>int</type>,
+          <parameter>min</parameter> <type>int</type>,
+          <parameter>sec</parameter> <type>double precision</type>)
+         </function>
+        </literal>
+       </entry>
+       <entry><type>timestamp</type></entry>
+       <entry>
+        Create timestamp from year, month, day, hour, minute and seconds fields
+       </entry>
+       <entry><literal>make_timestamp(2013, 7, 15, 8, 15, 23.5)</literal></entry>
+       <entry><literal>2013-07-15 08:15:23.5</literal></entry>
+      </row>
+
+      <row>
+       <entry>
+        <indexterm>
+         <primary>make_timestamptz</primary>
+        </indexterm>
+        <literal>
+         <function>
+          make_timestamptz(<parameter>year</parameter> <type>int</type>,
+          <parameter>month</parameter> <type>int</type>,
+          <parameter>day</parameter> <type>int</type>,
+          <parameter>hour</parameter> <type>int</type>,
+          <parameter>min</parameter> <type>int</type>,
+          <parameter>sec</parameter> <type>double precision</type>,
+          <optional> <parameter>timezone</parameter> <type>text</type> </optional>)
+         </function>
+        </literal>
+       </entry>
+       <entry><type>timestamp with time zone</type></entry>
+       <entry>
+        Create timestamp with time zone from year, month, day, hour, minute
+        and seconds fields. When <parameter>timezone</parameter> is not specified,
+        then the current time zone is used.
+       </entry>
+       <entry><literal>make_timestamptz(2013, 7, 15, 8, 15, 23.5)</literal></entry>
+       <entry><literal>2013-07-15 08:15:23.5+01</literal></entry>
+      </row>
+
+      <row>
+       <entry>
+        <indexterm>
          <primary>now</primary>
         </indexterm>
         <literal><function>now()</function></literal>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 04dfbb0..59a6f85 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -822,3 +822,9 @@ CREATE OR REPLACE FUNCTION
 CREATE OR REPLACE FUNCTION
   json_populate_recordset(base anyelement, from_json json, use_json_as_text boolean DEFAULT false)
   RETURNS SETOF anyelement LANGUAGE internal STABLE ROWS 100  AS 'json_populate_recordset';
+
+CREATE OR REPLACE FUNCTION
+  make_interval(years int4 DEFAULT 0, months int4 DEFAULT 0, weeks int4 DEFAULT 0,
+days int4 DEFAULT 0, hours int4 DEFAULT 0, mins int4 DEFAULT 0,
+secs double precision DEFAULT 0.0)
+ 
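
For orientation, a hedged sketch of how the functions documented above are
intended to be called (argument values follow the documentation entries in
the patch; exact output formatting may differ):

  SELECT make_interval(days := 10);                  -- 10 days
  SELECT make_timestamp(2013, 7, 15, 8, 15, 23.5);   -- 2013-07-15 08:15:23.5
  SELECT make_timestamptz(2013, 7, 15, 8, 15, 23.5,
                          'Europe/Prague');          -- timestamp with time zone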

Re: [HACKERS] Equivalence Rules

2014-03-02 Thread Antonin Houska
There are 2 kinds of rules in this document: for joins and for set
operations.

As for joins, I think they are all about *inner* joins. Postgres (IMO)
implements them by not doing anything special if the query contains only
inner joins.

On the other hand, attention has to be paid if there's at least one
*outer* join in the query. The identities summarized in the 'Valid OUTER
JOIN Optimizations' section of optimizer/README come into play then. I
think make_outerjoininfo() is the code that recognizes these relationships
in the original query, and join_is_legal() then checks that new joins
(those not present in the original query) do not change the semantics.
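
To make the distinction concrete, a minimal sketch (the tables a and b here
are hypothetical):

  -- Inner join: commutative, so the planner may swap the inputs freely.
  SELECT * FROM a JOIN b ON a.id = b.a_id;
  SELECT * FROM b JOIN a ON a.id = b.a_id;      -- equivalent

  -- Outer join: not commutative, so any reordering has to pass the
  -- checks in make_outerjoininfo() / join_is_legal().
  SELECT * FROM a LEFT JOIN b ON a.id = b.a_id;
  SELECT * FROM b LEFT JOIN a ON a.id = b.a_id; -- NOT equivalent in general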

(As for set operations, someone else needs to explain.)

// Antonin Houska (Tony)


On 03/02/2014 09:02 AM, Ali Piroozi wrote:
 Hi
 
 My question is:
 Does PostgreSQL implement the equivalence rules (those listed in the
 email's attachment)?
 Which function or part of the source code (in PostgreSQL) implements
 the equivalence rules?
 I think this should be implemented in the query optimization part of
 PostgreSQL, but I don't know which rules, or where.
 I want to use that (function or part of the source code) to produce
 equivalent relational algebra expressions (based on the equivalence
 rules in the attachment) for a given SQL query (relational algebra
 expression).
 
 Thanks
 
 
 
 





Re: [HACKERS] GSoC 2014 - mentors, students and admins

2014-03-02 Thread Tan Tran
Hi Greg, pgsql-advocacy, and pgsql-hackers,

I'm interested in doing my GSoC project on this idea. I'm new to indexing and 
WAL, which I haven't encountered in my classes, but it sounds interesting and 
valuable to Postgresql. So here's my draft proposal. Do you mind giving your 
opinion and corrections? With your help I'll add some technical detail to my 
plans.

Thanks,
Tan Tran

Introduction
In write-ahead logging (WAL), all modifications to a database are 
written to a write-ahead log before being flushed to disk at periodic 
checkpoints. This method saves I/O operations, enables a continuous backup, 
and, in the case of database failure, guarantees data integrity up until the 
last saved checkpoint. In Postgresql’s implementation, transactions are written 
to XLog, which is divided into 16MB files (“segments”) that together comprise a 
complete history of transactions. Transactions are continually appended to the 
latest segment, while checkpointing continually archives segments up until the 
last checkpoint. Internally, a suite of XLog structures and functions 
interfaces with the various resource managers so they can log a sufficient 
amount of data to restore data (“redo”) in case of failure.
Another PostgreSQL feature is the creation of indexes on an invariant 
custom field; for example, on the LastName of a Person even though the primary 
key is ID. These custom indexes speed up row lookup. Postgres currently 
supports four index types: B-tree, GiST, GIN, and hash. Indexes of the 
first three types are WAL-recoverable, but hash indexes are not.
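
For illustration, the kind of index this targets (a hedged sketch; table and
column names here are made up):

  CREATE TABLE person (id serial PRIMARY KEY, lastname text);
  CREATE INDEX person_lastname_hash ON person USING hash (lastname);
  -- Today, after a crash with unwritten changes, such an index may need:
  -- REINDEX INDEX person_lastname_hash;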

2. Proposal
As a GSoC student, I will implement WAL recovery of hash indexes using 
the other index types’ WAL code as a guide. Roughly, I will:
- Devise a way to store and retrieve hashing data within the XLog data 
structures. 
- In the existing skeleton for hash_redo(XLogRecPtr lsn, XLogRecord *record) in 
hash.c, branch to code for the various redo operations: creating an index, 
inserting into an index, deleting an index, and page operations (split, delete, 
update?).
- Code each branch by drawing on examples from btree_redo, gin_redo, and 
gist_redo, the existing XLog code of the other index types.

Benefits
Hash index searching is O(1), which is asymptotically faster than the O(log n) 
searching of a B-tree, and does not require custom indexing functions as GIN 
and GiST inherently do. Therefore it is desirable for rows that will only be 
retrieved by an equality comparison. However, two things currently 
stand in the way of its popular use. From the PostgreSQL documentation,
“Hash index operations are not presently WAL-logged, so hash indexes 
might need to be rebuilt with REINDEX after a database crash if there were 
unwritten changes. Also, changes to hash indexes are not replicated over 
streaming or file-based replication after the initial base backup, so they give 
wrong answers to queries that subsequently use them. For these reasons, hash 
index use is presently discouraged.”
My project would solve the first problem, after which I would like to stay on 
and fix the second.

To be written: Quantifiable Results, Schedule, Completeness Criteria, Bio


On Feb 28, 2014, at 6:21 AM, Greg Stark st...@mit.edu wrote:

 On Tue, Jan 28, 2014 at 5:34 PM, Thom Brown t...@linux.com wrote:
 Who would be up for mentoring this year?  And are there any project
 ideas folk would like to suggest?
 
 I mentored in the past and felt I didn't do a very good job because I
 didn't really understand the project the student was working on.
 
 There's precisely one project that I feel I would be competent to
 mentor at this point. Making hash indexes WAL recoverable. This is
 something that's easy to define the scope of and easy to determine if
 the student is on track and easy to measure when finished. It's
 something where as far as I can tell all the mentor work will be
 purely technical advice.
 
 Also it's something the project really really needs and is perfectly
 sized for a GSOC project IMHO. Also it's a great project for a student
 who might be interested in working on Postgres in the future since it
 requires learning all our idiosyncratic build and source conventions
 but doesn't require huge or controversial architectural changes.
 
 I fear a number of items in the Wiki seem unrealistically large
 projects for GSOC IMNSHO.
 
 -- 
 greg
 
 



Re: [HACKERS] gaussian distribution pgbench

2014-03-02 Thread Fabien COELHO


Hello Alvaro & Tom,


Alvaro Herrera alvhe...@2ndquadrant.com writes:

Seems that in the review so far, Fabien has focused mainly in the
mathematical properties of the new random number generation.  That seems
perfectly fine, but no comment has been made about the chosen UI for the
feature.
 Per the few initial messages in the thread, in the patch as submitted 
you ask for a gaussian random number by using \setgaussian, and 
exponential via \setexp.  Is this the right UI?


I thought it would be both concise & clear to have that as another form of 
\set*.


If I had designed it from the start, I think I might have put only \set 
with some functions such as uniform, gaussian and so on. But once 
there is a set and a setrandom for uniform, this suggested that other settings 
would have their own set commands as well. Also, the number of expected 
arguments is not the same, so it may make the parsing code less obvious.
Finally, this is not a heavily used language, so I would emphasize 
simpler code over more elegant features, for once.


Currently you get an evenly distributed number with \setrandom.  There 
is nothing that makes it obvious on \setgaussian by itself that it 
produces random numbers.


Well, gaussian or exp are kind of a clue, at least to my 
mathematically-oriented mind.


Perhaps we should simply add a new argument to \setrandom, instead of 
creating new commands for each distribution?  I would guess that, in 
the future, we're going to want other distributions as well.


+1 for an argument to \setrandom instead of separate commands.



Not sure what it would look like; perhaps
\setrandom foo 1 10 gaussian


There is an additional argument expected. That would make:

  \setrandom foo 1 10 [uniform]
  \setrandom foo 1 :size gaussian 3.6
  \setrandom foo 1 100 exponential 7.2


FWIW, I think this style is sufficient; the others seem overcomplicated
for not much gain.  I'm not strongly attached to that position though.


If there is a change, I agree that one simple style is enough, especially 
as the parsing code is rather low-level already.


So I'm basically fine with the current status of the patch, but I would
be okay with a \setrandom as well.

--
Fabien.




[HACKERS] heapgetpage() and -takenDuringRecovery

2014-03-02 Thread Andres Freund
Hi,

I am currently playing around with Robert's suggestion to get rid of
changeset extraction's reuse of SnapshotData fields (basically that
xip contains committed, not uncommitted transactions) by using a NodeTag,
similar to many other (families of) structs.

While checking which references to SnapshotData's members exist, I
once more came across the following tidbit in heapgetpage():
/*
 * If the all-visible flag indicates that all tuples on the page are
 * visible to everyone, we can skip the per-tuple visibility tests.
 *
 * Note: In hot standby, a tuple that's already visible to all
 * transactions in the master might still be invisible to a read-only
 * transaction in the standby. We partly handle this problem by tracking
 * the minimum xmin of visible tuples as the cut-off XID while marking a
 * page all-visible on master and WAL log that along with the visibility
 * map SET operation. In hot standby, we wait for (or abort) all
 * transactions that can potentially may not see one or more tuples on the
 * page. That's how index-only scans work fine in hot standby. A crucial
 * difference between index-only scans and heap scans is that the
 * index-only scan completely relies on the visibility map where as heap
 * scan looks at the page-level PD_ALL_VISIBLE flag. We are not sure if
 * the page-level flag can be trusted in the same way, because it might
 * get propagated somehow without being explicitly WAL-logged, e.g. via a
 * full page write. Until we can prove that beyond doubt, let's check each
 * tuple for visibility the hard way.
 */
all_visible = PageIsAllVisible(dp) && !snapshot->takenDuringRecovery;

I don't think this is necessary >= 9.2. There are only two interesting places
where PD_ALL_VISIBLE is set:
a) lazy_vacuum_page() where a xl_heap_clean is logged *before*
   PD_ALL_VISIBLE/the vm is touched and that causes recovery
   conflicts. The heap page is locked for cleanup at that point. As the
   logging of xl_heap_clean sets the page's LSN there's no way the page
   can appear on the standby too early.
b) empty pages in lazy_scan_heap(). If they always were empty, there's
   no need for conflicts. The only other way I can see to end up there
   is a previous heap_page_prune() that repaired fragmentation. But that
   logs a WAL record with conflict information.

So, we could just remove this?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Magnus Hagander
On Sun, Mar 2, 2014 at 6:20 AM, Noah Misch n...@leadboat.com wrote:

 On Sat, Mar 01, 2014 at 05:51:46PM -0500, Andrew Dunstan wrote:
  On 03/01/2014 05:10 PM, Tom Lane wrote:
  One other thought here: is it actually reasonable to expend a lot of
 effort
  on the Windows case?  I'm not aware that people normally expect a
 Windows
  box to have multiple users at all, let alone non-mutually-trusting
 users.
 
  As Stephen said, it's fairly unusual. There are usually quite a few
  roles, but it's rare to have more than one human type role
  connected to the machine at a given time.

 I, too, agree it's rare.  Rare enough to justify leaving the vulnerability
 open on Windows, indefinitely?  I'd say not.  Windows itself has been
 pushing
 steadily toward better multi-user support over the past 15 years or so.
 Releasing software for Windows as though it were a single-user platform is
 backwards-looking.  We should be a model in this area, not a straggler.


Terminal Services have definitely become more common over time, but with
faster and cheaper virtualization, a lot of people have switched to that
instead, which would remove the problem of course.

I wonder how common it actually is, though, to *build postgres* on a
terminal services machine with other users on it...

Not saying we can't ignore it, and I agree that we should not be a straggler
on this, so doing a proper fix would definitely be better.


 I'd be happy doing nothing in this case, or not very much. e.g.
  provide a password but not with great cryptographic strength.

 One option that would simplify things is to fix only non-Windows in the
 back
 branches, via socket protection, and fix Windows in HEAD only.  We could
 even
 do so by extending HAVE_UNIX_SOCKETS support to Windows through named
 pipes.


That could certainly be a useful feature of its own. But as you say,
non-backpatchable.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Stephen Frost
* Dave Page (dp...@pgadmin.org) wrote:
 It's not that rare in my experience - certainly there are far more single 
 user installations, but Terminal Server configurations are common for 
 deploying apps Citrix-style or VDI. The one and only Windows server 
 maintained by the EDB infrastructure team is a terminal server for example.

Sure- but do you have a full build environment there for building PG?
That's really what I'm referring to as being relatively rare.  I'm very
familiar with terminal servers, but those are almost always used for
getting access to IE or other corporate dependencies, or for coming in
from remote, or running Windows-only applications.  We've got a terminal
server at my current job, and I ran a whole slew of them at my last job
and in neither case did we have development tools installed.

Thanks,

Stephen




Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 One option that would simplify things is to fix only non-Windows in the back
 branches, via socket protection, and fix Windows in HEAD only.  We could even
 do so by extending HAVE_UNIX_SOCKETS support to Windows through named pipes.

+1 for that solution, if it's not an unreasonable amount of work to add
named-pipe sockets in Windows.  That would offer a feature to Windows
users that they didn't have before, ie the ability to restrict connections
based on filesystem permissions; so it seems useful quite aside from any
make check considerations.

There's an independent question of whether the regression tests will work
for make installcheck against a server that's not set up for trust auth.
I'm inclined to think that we can leave it to the user to generate
appropriate passwords if he's using password auth, but don't we still
need some test procedure adjustments?

Also, to what extent does any of this affect buildfarm animals?  Whatever
we do for make check will presumably make those tests safe for them,
but how are the postmasters they test under make installcheck set up?

regards, tom lane




Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread james

On 02/03/2014 15:30, Magnus Hagander wrote:
Terminal Services have definitely become more common over time, but 
with faster and cheaper virtualization, a lot of people have switched 
to that instead, which would remove the problem of course.


I wonder how common it actually is, though, to *build postgres* on a 
terminal services machine with other users on it...


Well, the banks I've contracted at recently are all rather keen on 
virtual desktops for developers, and some of those are terminal 
services.  We're a headache, and packaging up all the things we need is 
a pain, so there is some mileage in buying grunty servers and doing 
specific installs that are then shared, rather than making an MSI 
generally available.


Also I have experience of being given accounts for jenkins etc that are 
essentially terminal services logins, and having these things unable to 
maintain a software stack can effectively disqualify tech we would 
otherwise use.






Re: [HACKERS] proposal, patch: allow multiple plpgsql plugins

2014-03-02 Thread Marko Tiikkaja

Hi Pavel,

The extra semicolons are still in there; around line 525 in this patch. 
 However, I removed them to compile the patch, but I can't compile my 
plugin on OS X.  The plugin is simple, it just does:


void
_PG_init(void)
{
DirectFunctionCall1(plpgsql_register_plugin, 
pgt_plpgsql_plugin_struct);

}

I get:

Undefined symbols for architecture x86_64:
  _plpgsql_register_plugin, referenced from:
  __PG_init in plpgtest.o

I'm guessing this is because PL/PgSQL is a shared library and not in 
core?  Is there a way around this?



Regards,
Marko Tiikkaja




Re: [HACKERS] [pgsql-advocacy] GSoC 2014 - mentors, students and admins

2014-03-02 Thread Tan Tran
Earlier I posted an email to this thread that I realize hijacked the 
discussion. Please continue replying to here instead.
 
On Feb 28, 2014, at 6:59 AM, Karol Trzcionka karl...@gmail.com wrote:

 W dniu 27.02.2014 22:25, Thom Brown pisze:
 On 27 February 2014 21:08, David Fetter da...@fetter.org wrote:
 For MADlib, no.  Are you asking for mentors in general?
 
 Ah yes, I should clarify.  Yes, mentors in general.
 In general I can help but I'm not sure if I'm not too fresh in pgsql ;) 
 However after GSOC as student I can try the another side.
 Regards,
 Karol



[HACKERS] GSoC on WAL-logging hash indexes

2014-03-02 Thread Tan Tran
Hi all,

Earlier I posted this in the wrong thread. Please excuse the double posting.

Tan Tran

Begin forwarded message:

 From: Tan Tran tankimt...@gmail.com
 Subject: Re: [HACKERS] GSoC 2014 - mentors, students and admins
 Date: March 2, 2014 at 5:03:14 AM PST
 To: Greg Stark st...@mit.edu
 Cc: pgsql-advocacy pgsql-advoc...@postgresql.org, PostgreSQL-development 
 pgsql-hackers@postgresql.org
 
 Hi Greg, pgsql-advocacy, and pgsql-hackers,
 
 I'm interested in doing my GSoC project on this idea. I'm new to indexing and 
 WAL, which I haven't encountered in my classes, but it sounds interesting and 
 valuable to Postgresql. So here's my draft proposal. Do you mind giving your 
 opinion and corrections? With your help I'll add some technical detail to my 
 plans.
 
 Thanks,
 Tan Tran
 
 Introduction
   In write-ahead logging (WAL), all modifications to a database are 
 written to a write-ahead log before being flushed to disk at periodic 
 checkpoints. This method saves I/O operations, enables a continuous backup, 
 and, in the case of database failure, guarantees data integrity up until the 
 last saved checkpoint. In Postgresql’s implementation, transactions are 
 written to XLog, which is divided into 16MB files (“segments”) that together 
 comprise a complete history of transactions. Transactions are continually 
 appended to the latest segment, while checkpointing continually archives 
 segments up until the last checkpoint. Internally, a suite of XLog structures 
 and functions interfaces with the various resource managers so they can log a 
 sufficient amount of data to restore data (“redo”) in case of failure.
   Another PostgreSQL feature is the creation of indexes on an invariant 
 custom field; for example, on the LastName of a Person even though the 
 primary key is ID. These custom indexes speed up row lookup. Postgres 
 currently supports four index types: B-tree, GiST, GIN, and hash. Indexes 
 of the first three types are WAL-recoverable, but hash indexes are not.
 
 2. Proposal
   As a GSoC student, I will implement WAL recovery of hash indexes using 
 the other index types’ WAL code as a guide. Roughly, I will:
 - Devise a way to store and retrieve hashing data within the XLog data 
 structures. 
 - In the existing skeleton for hash_redo(XLogRecPtr lsn, XLogRecord *record) 
 in hash.c, branch to code for the various redo operations: creating an index, 
 inserting into an index, deleting an index, and page operations (split, 
 delete, update?).
 - Code each branch by drawing on examples from btree_redo, gin_redo, and 
 gist_redo, the existing XLog code of the other index types.
 
 Benefits
 Hash index searching is O(1), which is asymptotically faster than the 
 O(log n) searching of a B-tree, and does not require custom indexing functions 
 as GIN and GiST inherently do. Therefore it is desirable for rows that will only 
 be retrieved by an equality comparison. However, two things 
 currently stand in the way of its popular use. From the PostgreSQL 
 documentation,
   “Hash index operations are not presently WAL-logged, so hash indexes 
 might need to be rebuilt with REINDEX after a database crash if there were 
 unwritten changes. Also, changes to hash indexes are not replicated over 
 streaming or file-based replication after the initial base backup, so they 
 give wrong answers to queries that subsequently use them. For these reasons, 
 hash index use is presently discouraged.”
 My project would solve the first problem, after which I would like to stay on 
 and fix the second.
 
 To be written: Quantifiable Results, Schedule, Completeness Criteria, Bio
 
 
 On Feb 28, 2014, at 6:21 AM, Greg Stark st...@mit.edu wrote:
 
 On Tue, Jan 28, 2014 at 5:34 PM, Thom Brown t...@linux.com wrote:
 Who would be up for mentoring this year?  And are there any project
 ideas folk would like to suggest?
 
 I mentored in the past and felt I didn't do a very good job because I
 didn't really understand the project the student was working on.
 
 There's precisely one project that I feel I would be competent to
 mentor at this point. Making hash indexes WAL recoverable. This is
 something that's easy to define the scope of and easy to determine if
 the student is on track and easy to measure when finished. It's
 something where as far as I can tell all the mentor work will be
 purely technical advice.
 
 Also it's something the project really really needs and is perfectly
 sized for a GSOC project IMHO. Also it's a great project for a student
 who might be interested in working on Postgres in the future since it
 requires learning all our idiosyncratic build and source conventions
 but doesn't require huge or controversial architectural changes.
 
 I fear a number of items in the Wiki seem unrealistically large
 projects for GSOC IMNSHO.
 
 -- 
 greg
 
 

Re: [HACKERS] [PATCH] Negative Transition Aggregate Functions (WIP)

2014-03-02 Thread Dean Rasheed
On 25 February 2014 12:33, Florian Pflug f...@phlo.org wrote:
 On Feb24, 2014, at 17:50 , Dean Rasheed dean.a.rash...@gmail.com wrote:
 On 20 February 2014 01:48, Florian Pflug f...@phlo.org wrote:
 On Jan29, 2014, at 13:45 , Florian Pflug f...@phlo.org wrote:
 In fact, I'm
 currently leaning towards just forbidding non-strict forward transition
 function with strict inverses, and adding non-NULL counters to the
 aggregates that then require them. It's really only the SUM() aggregates
 that are affected by this, I think.

 I finally got around to doing that, and the results aren't too bad. The
 attached patches required that the strictness settings of the forward and
 reverse transition functions agree, and employ exactly the same 
 NULL-skipping
 logic we always had.

 The only aggregates seriously affected by that change were SUM(int2) and
 SUM(int4).

 I haven't looked at this in any detail yet, but that seems much neater
 to me. It seems perfectly sensible that the forward and inverse
 transition functions should have the same strictness settings, and
 enforcing that keeps the logic simple, as well as hopefully making it
 easier to document.

 Good to hear that you agree! I'll try to find some time to update the docs.


I finally got round to looking at this in more detail. Sorry for the
delay. Here is my more detailed review of the base patch.

Overall, I think that it is in reasonable shape, and as I said I think
the approach of enforcing matching strictness settings on the forward
and inverse transition functions is much simpler and neater. I have a
few comments, some cosmetic, and a couple more substantive:


* In a couple of places:

errmsg("stricness of forward and reverse transition functions must match")

- misspelling: stricness.
- reverse should be inverse to match the terminology used elsewhere.


* Grammatical error in the comment for lookup_agg_function() - you
should drop the word both.


* In show_windowagg_info(), this calculation looks suspicious to me:

double tperrow = winaggstate->aggfwdtrans /
(inst->nloops * inst->ntuples);

If the node is executed multiple times, aggfwdtrans will be reset in
each loop, so the transitions per row figure will be under-estimated.
ISTM that if you want to report on this, you'd need aggfwdtrans to be
reset once per query, but I'm not sure exactly how to do that.

Here's a test case:

explain (verbose, analyse)
  select sum(i) over (rows between 4 preceding and current row)
from generate_series(1, 10) i;

which outputs 10 rows with an average of 1 transition per row, but
doing the same window aggregate twice in a nested loop:

explain (verbose, analyse)
  select * from (values (10), (10)) v(x),
  lateral
  (select sum(i) over (rows between 4 preceding and current row)
 from generate_series(1, x) i) t;

outputs 20 rows, but only reports 0.5 transitions per row.

Actually, I think it's misleading to only count forward transition
function calls, because a call to the inverse transition function
still represents a state transition, and is likely to be around the
same cost. For a window of size 2, there would not be much advantage
to using inverse transition functions, because it would be around 2
transitions per row either way.
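
For contrast, a hedged example where inverse transitions should pay off:
with a wide frame, each row needs roughly one forward and one inverse
transition instead of re-aggregating the whole frame.

explain (verbose, analyse)
  select sum(i) over (rows between 999 preceding and current row)
from generate_series(1, 100000) i;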


* The function comment for build_aggregate_fnexprs() needs to be
updated to reference the inverse transition function. I'd also be
tempted to have it allow invtransfnexpr be a NULL pointer, if the
inverse transition function expression tree is not required. Then
ExecInitAgg() could simply pass NULL, instead of having the local
variable invtransfnexpr with the slightly cryptic comment needed but
never used.


* In struct WindowStatePerAggData, I think you should change the field
order to transfn_oid, invtransfn_oid and then finalfn_oid. It's only a
small thing, but that's the order those 3 functions are referred to
everywhere else.


* In struct WindowStatePerAggData, the comment for transValueCount
should read number of aggregated values.


* If AggCheckCallContext() is called from a window function, and it
asks for an aggcontext, it will fail because calledaggno will be -1.
That can't currently happen for any of our built-in window functions,
and I'm not sure if it's likely to happen in the future, but I think
it would be better to defend against that possibility just in case. So
I think it ought to return the shared context in that case, as the
original code would have done.


* In advance_windowaggregate(), this code

if (peraggstate->transfn.fn_strict) {

is against the project style, which is to have curly braces on new
lines. But also, that test condition is the same as the preceding
block, so the 2 blocks could just be merged.


* I was wondering about the case of a forward transition function
returning NULL in the presence of an inverse transition function. In
this patch there are 3 pieces of code that test for that:

1). advance_windowaggregate() errors out if the forward transition
function 

Re: [HACKERS] proposal, patch: allow multiple plpgsql plugins

2014-03-02 Thread Pavel Stehule
Hi


2014-03-02 19:59 GMT+01:00 Marko Tiikkaja ma...@joh.to:

 Hi Pavel,

 The extra semicolons are still in there; around line 525 in this patch.
  However, I removed them to compile the patch, but I can't compile my
 plugin on OS X.  The plugin is simple, it just does:

 void
 _PG_init(void)
 {
 DirectFunctionCall1(plpgsql_register_plugin,
 pgt_plpgsql_plugin_struct);
 }

 I get:

 Undefined symbols for architecture x86_64:
   _plpgsql_register_plugin, referenced from:
   __PG_init in plpgtest.o

 I'm guessing this is because PL/PgSQL is a shared library and not in core?
  Is there a way around this?


yes, PLpgSQL is not referenced and, if I remember well, clang is too
restrictive.

probably
http://stackoverflow.com/questions/17281901/ignoring-an-undefined-symbol-in-a-dynamic-library-from-xcode

or you can add a reference on plpgsql to your Makefile

Regards

Pavel



 Regards,
 Marko Tiikkaja



Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Andrew Dunstan


On 03/02/2014 01:27 PM, Tom Lane wrote:


Also, to what extent does any of this affect buildfarm animals?  Whatever
we do for make check will presumably make those tests safe for them,
but how are the postmasters they test under make installcheck set up?



Nothing special.

   bin/initdb -U buildfarm --locale=$locale data-$locale
   ...
   bin/pg_ctl -D data-$locale -l logfile -w start


We have wide control over what's done, just let me know what's wanted. 
For example, it would be pretty simple to make it use a non-standard 
socket directory and turn tcp connections off on Unix, or to set up 
password auth for that matter, assuming we already have a strong password.


I generally assume that people aren't running buildfarm animals on 
general purpose multi-user machines, but it might be as well to take 
precautions.


cheers

andrew





Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Stephen Frost
* james (ja...@mansionfamily.plus.com) wrote:
 Well, the banks I've contracted at recently are all rather keen on
 virtual desktops for developers, and some of those are terminal
 services.  We're a headache, and packaging up all the things we need
 is a pain, so there is some mileage in buying grunty servers and
 doing specific installs that are then shared, rather than making an
 MSI generally available.
 
 Also I have experience of being given accounts for jenkins etc that
 are essentially terminal services logins, and having these things
 unable to maintain a software stack can effectively disqualify tech
 we would otherwise use.

And what are the feelings about security on these multi-user development
environments?  Is everyone on them trusted users, or are there
untrusted / general accounts?

The issue here is about how much effort to go to in order to secure the
PostgreSQL system that is started up to do the regression tests.  It's
already set up to only listen on localhost and will run with only the
privileges of the user running the tests.  The concern is that another
user on the same system could gain access to the account which is
running the 'make check' by connecting over localhost to the PostgreSQL
instance and being superuser there, which would allow executing
commands, etc, as that other user (eg: with COPY PIPE).
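
For concreteness, the kind of thing a superuser connection permits (a hedged
illustration; "COPY PIPE" presumably refers to COPY ... TO/FROM PROGRAM,
available since 9.3):

  -- Runs a shell command as the OS user that owns the server process,
  -- i.e. the account running 'make check'.
  COPY (SELECT 1) TO PROGRAM 'touch /tmp/file-owned-by-build-user';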

Thanks,

Stephen




Re: [HACKERS] proposal, patch: allow multiple plpgsql plugins

2014-03-02 Thread Marko Tiikkaja

On 3/2/14, 8:47 PM, Pavel Stehule wrote:

2014-03-02 19:59 GMT+01:00 Marko Tiikkaja ma...@joh.to:

Undefined symbols for architecture x86_64:
   _plpgsql_register_plugin, referenced from:
   __PG_init in plpgtest.o

I'm guessing this is because PL/PgSQL is a shared library and not in core?
  Is there a way around this?



yes, PLpgSQL is not referenced and, if I remember well, clang is too
restrictive.

probably
http://stackoverflow.com/questions/17281901/ignoring-an-undefined-symbol-in-a-dynamic-library-from-xcode

or you can add a reference on plpgsql to your Makefile


That seems unbelievably ugly, but worse, loading the library in 
shared_preload_libraries doesn't work:


14782  FATAL:  could not load library 
/usr/local/pgsql/lib/plpgtest.so: 
dlopen(/usr/local/pgsql/lib/plpgtest.so, 10): Symbol not found: 
_plpgsql_register_plugin

  Referenced from: /usr/local/pgsql/lib/plpgtest.so
  Expected in: flat namespace
 in /usr/local/pgsql/lib/plpgtest.so

I even tried putting plpgsql.so before it in the list, but no go.


Regards,
Marko Tiikkaja




Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Josh Berkus
On 03/02/2014 12:17 PM, Stephen Frost wrote:
 The issue here is about how much effort to go to in order to secure the
 PostgreSQL system that is started up to do the regression tests.  It's
 already set up to only listen on localhost and will run with only the
 privileges of the user running the tests.  The concern is that another
 user on the same system could gain access to the account which is
 running the 'make check' by connecting over localhost to the PostgreSQL
 instance and being superuser there, which would allow executing
 commands, etc, as that other user (eg: with COPY PIPE).

My $0.02:  Not a lot of effort.

A) Few users run the regression tests at all, because they use packages.

B) Of the users who do self-builds, most do so on secure systems deep
inside the corporate firewall.

C) A related attack requires not only access to the host but good timing
as well, or the ability to leave a booby-trap program on the system.

D) If the host is compromised, the user gains access to the build user
... which should be a regular, unprivileged, shell user.

The only way I can see this being of real use to an attacker is if they
could use this exploit to create a wormed version of PostgreSQL on the
target build system.  Is that possible?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Stephen Frost
* Josh Berkus (j...@agliodbs.com) wrote:
 The only way I can see this being of real use to an attacker is if they
 could use this exploit to create a wormed version of PostgresQL on the
 target build system.  Is that possible?

I don't see why it wouldn't be- once the attacker is on the box as any
user, they could gain access to the account doing the builds and then
build whatever they want.  Of course, if they've been able to compromise
an account on the host it's entirely likely they've already been able to
gain admin access (probably more easily than going through PG to get at
the build user) and then it's a moot point.

All that said- if we can use named pipes on Windows, ala what we do on
Unix, I'm all for it..

Thanks,

Stephen




Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Magnus Hagander
On Sun, Mar 2, 2014 at 7:27 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Noah Misch n...@leadboat.com writes:
  One option that would simplify things is to fix only non-Windows in the
 back
  branches, via socket protection, and fix Windows in HEAD only.  We could
 even
  do so by extending HAVE_UNIX_SOCKETS support to Windows through named
 pipes.

 +1 for that solution, if it's not an unreasonable amount of work to add
 named-pipe sockets in Windows.  That would offer a feature to Windows
 users that they didn't have before, ie the ability to restrict connections
 based on filesystem permissions; so it seems useful quite aside from any
 make check considerations.


I think it might be a bigger piece of work than we'd like - and IIRC that's
one of the reasons we didn't do it from the start. Named pipes do act as
files on Windows, but they do *not* act as sockets. As in, they
return HANDLEs, not SOCKETs, and you can't recv() and send() on them.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Noah Misch
On Sun, Mar 02, 2014 at 01:27:18PM -0500, Tom Lane wrote:
 Noah Misch n...@leadboat.com writes:
  One option that would simplify things is to fix only non-Windows in the back
  branches, via socket protection, and fix Windows in HEAD only.  We could 
  even
  do so by extending HAVE_UNIX_SOCKETS support to Windows through named pipes.
 
 +1 for that solution, if it's not an unreasonable amount of work to add
 named-pipe sockets in Windows.  That would offer a feature to Windows
 users that they didn't have before, ie the ability to restrict connections
 based on filesystem permissions; so it seems useful quite aside from any
 make check considerations.

Agreed.  Windows named pipes do not go through the winsock API, so it might
take a good amount of muddle to achieve this.  If it doesn't work out, we'll
revisit use of MD5 authentication for regression tests.  Also, I'd be just as
happy for someone else to do the primary development on such a project.


Concerning the immediate fix for non-Windows systems, does any modern system
ignore modes of Unix domain sockets?  It appears to be a long-fixed problem:

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-1999-1402
http://unix.stackexchange.com/questions/83032/which-systems-do-not-honor-socket-read-write-permissions

Nonetheless, it would be helpful for folks to test any rare platforms they
have at hand.  Start a postmaster with --unix-socket-permissions= and
attempt to connect via local socket.  If psql gives something other than
psql: could not connect to server: Permission denied, please report it.

 There's an independent question of whether the regression tests will work
 for make installcheck against a server that's not set up for trust auth.
 I'm inclined to think that we can leave it to the user to generate
 appropriate passwords if he's using password auth, but don't we still
 need some test procedure adjustments?

Right.  To have make installcheck-world work against a cluster requiring md5
authentication, I would use the makecheck-secure-v3.patch test suite changes.
I suppose that's a good thing to nail down, even if testing against md5 does
not become the norm.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Hot standby doesn't come up on some situation.

2014-03-02 Thread Kyotaro HORIGUCHI
Hello,

At Fri, 28 Feb 2014 14:45:58 +0200, Heikki Linnakangas 
hlinnakan...@vmware.com wrote in 53108506.2010...@vmware.com
  Yes, but the same situation could be produced by restarting a crashed
  secondary.
 
 Yeah.
 
  I have no idea about the scenario in which this behavior was regarded
  as undesirable, but anyway I think that the secondary should start
  accepting clients just after crash recovery is completed.
 
 Agreed, this is a bug.
 
 I don't think your patch is the right fix for this though. Setting
 minRecoveryPoint to EndRecPtr is the right thing to do; EndRecPtr
 points to the end of the last read and replayed record. What's wrong
 in this case is lastReplayedEndRecptr. At the beginning of recovery,
 it's initialized to the REDO point, but with a shutdown checkpoint,
 that's not quite right. When starting from a shutdown checkpoint, REDO
 points to the beginning of the shutdown record, but we've already
 effectively replayed it. The next record we replay is the one after
 the checkpoint.

That's more reasonable. I felt unclear about that, but I didn't think to
doubt the correctness of lastReplayedEndRecptr at the time; surely
the shutdown record itself was effectively already replayed when
the record was inserted.

 To see that, I added some elog(LOG) calls:
 
 ~/pgsql.93stable$ bin/postmaster -D data
 LOG:  database system was shut down at 2014-02-28 14:06:18 EET
 LOG:  ReadCheckpointRecord: 0/16479C98
 LOG:  database system is ready to accept connections
 LOG:  autovacuum launcher started
 ^CLOG:  received fast shutdown request
 LOG:  aborting any active transactions
 LOG:  autovacuum launcher shutting down
 LOG:  shutting down
 LOG: INSERT @ 0/16479D00: prev 0/16479C98; xid 0; len 72: XLOG -
 checkpoint: redo 0/16479D00; tli 1; prev tli 1; fpw true; xid
 0/793393; oid 24988; multi 655288; offset 1356722; oldest xid 687 in
 DB 1; oldest multi 1 in DB 1; oldest running xid 0; shutdown
 LOG:  xlog flush request 0/16479D68; write 0/0; flush 0/0
 LOG:  database system is shut down
 ~/pgsql.93stable$ bin/postmaster -D data
 LOG:  database system was shut down at 2014-02-28 14:06:23 EET
 LOG:  ReadCheckpointRecord: 0/16479D00
 LOG:  database system is ready to accept connections
 LOG:  autovacuum launcher started
 Killed
 
 At this point, the last record is the shutdown checkpoint, beginning
 at 16479D00, and the server has been killed (immediate shutdown).
 
 ~/pgsql.93stable$ cp recovery.conf data/recovery.conf
 ~/pgsql.93stable$ bin/postmaster -D data
 LOG: database system was interrupted; last known up at 2014-02-28
 14:06:29 EET
 LOG:  entering standby mode
 LOG:  ReadCheckpointRecord: 0/16479D00
 LOG: database system was not properly shut down; automatic recovery in
 progress
 LOG:  record with zero length at 0/16479D68
 LOG:  reached end of WAL in pg_xlog, entering archive recovery
 LOG:  EndRecPtr: 0/16479D68 lastReplayedEndRecPtr: 0/16479D00
 FATAL: could not connect to the primary server: could not connect to
 server: Connection refused
 ...
 
 Recovery starts from the checkpoint record, but lastReplayedEndRecPtr
 is set to the *beginning* of the checkpoint record, even though the
 checkpoint record has already been effectively replayed, by the feat
 of starting recovery from it. EndRecPtr correctly points to the end of
 the checkpoint record. Because of the incorrect lastReplayedEndRecPtr
 value, the CheckRecoveryConsistency() call concludes that it's not
 consistent.

I completely understood the behavior thanks to your detailed
explanation. (And how to use log messages effectively :-)

I agree that the fix is appropriate.

 I believe the attached fix is the right way to fix this.

It also worked for me. Thank you.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] Hot standby doesn't come up on some situation.

2014-03-02 Thread Kyotaro HORIGUCHI
Ouch! It brought another bug.

 I completely understood the behavior thanks to your detailed
 explanation. (And how to use log messages effectively :-)

Sorry, I just found that it's wrong, and found another problem
brought by your patch.

 I agree that the fix is appropriate.
 
  I believe the attached fix is the right way to fix this.
 
 It also worked for me. Thank you.

| * as if we had just replayed the record before the REDO location
| * (or the checkpoint record itself, if it's a shutdown checkpoint).

The following test script raises an assertion failure. It adds a
'non-shutdown' checkpoint just before shutting down immediately.
Starting the server aborts with the following message.

| LOG:  database system was not properly shut down; automatic recovery in 
progress
| TRAP: FailedAssertion(!(((oldestActiveXID) != ((TransactionId) 0))), File: 
xlog.c, Line: 6771)
| LOG:  startup process (PID 28561) was terminated by signal 6: Aborted

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

===
#! /bin/sh

killall postgres
rm -rf $PGDATA/*
initdb
pg_ctl start -w
sleep 1
psql postgres -c 'checkpoint'
pg_ctl stop -m i
cat > $PGDATA/recovery.conf << EOF
standby_mode = 'on'
primary_conninfo = 'host=localhost port= user=repuser application_name=pm01 
keepalives_idle=60 keepalives_interval=5 keepalives_count=5'
#restore_command = '/bin/true'
recovery_target_timeline = 'latest'
EOF
cat >> $PGDATA/postgresql.conf << EOF
#log_min_messages = debug5
hot_standby = on
EOF
pg_ctl start





Re: [HACKERS] Hot standby doesn't come up on some situation.

2014-03-02 Thread Kyotaro HORIGUCHI
Correcting one point of my last mail.

 Ouch! It brought another bug.

My patch also did.

regards,


  I completely understood the behavior thanks to your detailed
  explanation. (And how to use log messages effectively :-)
 
 Sorry, I just found that it's wrong, and found another problem
 brought by your patch.
 
  I agree that the fix is appropriate.
  
   I believe the attached fix is the right way to fix this.
  
  It also worked for me. Thank you.
 
 | * as if we had just replayed the record before the REDO location
 | * (or the checkpoint record itself, if it's a shutdown checkpoint).
 
 The following test script raises an assertion failure. It adds a
 'non-shutdown' checkpoint just before shutting down immediately.
 Starting the server aborts with the following message.
 
 | LOG:  database system was not properly shut down; automatic recovery in 
 progress
 | TRAP: FailedAssertion(!(((oldestActiveXID) != ((TransactionId) 0))), 
 File: xlog.c, Line: 6771)
 | LOG:  startup process (PID 28561) was terminated by signal 6: Aborted
 
 regards,
 
 -- 
 Kyotaro Horiguchi
 NTT Open Source Software Center
 
 ===
 #! /bin/sh
 
 killall postgres
 rm -rf $PGDATA/*
 initdb
 pg_ctl start -w
 sleep 1
 psql postgres -c 'checkpoint'
 pg_ctl stop -m i
 cat > $PGDATA/recovery.conf << EOF
 standby_mode = 'on'
 primary_conninfo = 'host=localhost port= user=repuser 
 application_name=pm01 keepalives_idle=60 keepalives_interval=5 
 keepalives_count=5'
 #restore_command = '/bin/true'
 recovery_target_timeline = 'latest'
 EOF
 cat >> $PGDATA/postgresql.conf << EOF
 #log_min_messages = debug5
 hot_standby = on
 EOF
 pg_ctl start
 

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] Review: Patch FORCE_NULL option for copy COPY in CSV mode

2014-03-02 Thread Ian Lawrence Barwick
2014-03-02 8:26 GMT+09:00 Andrew Dunstan and...@dunslane.net:

 On 01/29/2014 10:59 AM, Ian Lawrence Barwick wrote:

 2014/1/29 Ian Lawrence Barwick barw...@gmail.com:

 2014-01-29 Andrew Dunstan and...@dunslane.net:

 On 01/28/2014 05:55 AM, Ian Lawrence Barwick wrote:


 Hi Payal

 Many thanks for the review, and my apologies for not getting back to
 you earlier.

 Updated version of the patch attached with suggested corrections.

 On a very quick glance, I see that you have still not made adjustments
 to
 contrib/file_fdw to accommodate this new option. I don't see why this
 COPY
 option should be different in that respect.

 Hmm, that idea seems to have escaped me completely. I'll get onto it
 forthwith.

 Striking while the keyboard is hot... version with contrib/file_fdw
 modifications
 attached.



 I have reviewed this. Generally it's good, but the author has made a
 significant error - the idea is not to force a quoted empty string to null,
 but to force a quoted null string to null, whatever the null string might
 be. The default case has these the same, but if you specify a non-empty null
 string they aren't.
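
To illustrate the distinction with a small, hypothetical example (assuming
the FORCE_NULL syntax from the patch and a non-empty null string; the table
name is made up):

CREATE TABLE forcetest (a text, b text);

COPY forcetest FROM STDIN WITH (FORMAT csv, NULL 'NULL', FORCE_NULL (b));
"","NULL"
\.

-- a stays an empty string (a quoted empty string is not the null string
-- here), while b is forced to null because the quoted value matches the
-- null string 'NULL'
SELECT a IS NULL AS a_is_null, b IS NULL AS b_is_null FROM forcetest;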

The author slaps himself on the forehead while regretting he was temporally
constricted when dealing with the patch and never thought to look beyond
the immediate use case.

Thanks for the update, much appreciated.

 That difference actually made the file_fdw regression results plain wrong,
 in my view, in that they expected a quoted empty string to be turned to null
 even when the null string was something else.

 I've adjusted this and the docs and propose to apply the attached patch in
 the next day or two unless there are any objections.

Unless I'm overlooking something, output from SELECT * FROM text_csv;
in 'output/file_fdw.source' still needs updating?


Regards

Ian Barwick
diff --git a/contrib/file_fdw/data/text.csv b/contrib/file_fdw/data/text.csv
new file mode 100644
index ed348a9..f55d9cf
*** a/contrib/file_fdw/data/text.csv
--- b/contrib/file_fdw/data/text.csv
***
*** 1,4 
! AAA,aaa
! XYZ,xyz
! NULL,NULL
! ABC,abc
--- 1,5 
! AAA,aaa,123,
! XYZ,xyz,,321
! NULL,NULL,NULL,NULL
! NULL,NULL,NULL,NULL
! ABC,abc,,
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
new file mode 100644
index 5639f4d..7fb1dbc
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
*** struct FileFdwOption
*** 48,56 
  
  /*
   * Valid options for file_fdw.
!  * These options are based on the options for COPY FROM command.
!  * But note that force_not_null is handled as a boolean option attached to
!  * each column, not as a table option.
   *
   * Note: If you are adding new option for user mapping, you need to modify
   * fileGetOptions(), which currently doesn't bother to look at user mappings.
--- 48,56 
  
  /*
   * Valid options for file_fdw.
!  * These options are based on the options for the COPY FROM command.
!  * But note that force_not_null and force_null are handled as boolean options
!  * attached to a column, not as table options.
   *
   * Note: If you are adding new option for user mapping, you need to modify
   * fileGetOptions(), which currently doesn't bother to look at user mappings.
*** static const struct FileFdwOption valid_
*** 69,75 
  	{"null", ForeignTableRelationId},
  	{"encoding", ForeignTableRelationId},
  	{"force_not_null", AttributeRelationId},
! 
  	/*
  	 * force_quote is not supported by file_fdw because it's for COPY TO.
  	 */
--- 69,75 
  	{"null", ForeignTableRelationId},
  	{"encoding", ForeignTableRelationId},
  	{"force_not_null", AttributeRelationId},
! 	{"force_null", AttributeRelationId},
  	/*
  	 * force_quote is not supported by file_fdw because it's for COPY TO.
  	 */
*** file_fdw_validator(PG_FUNCTION_ARGS)
*** 187,192 
--- 187,193 
  	Oid			catalog = PG_GETARG_OID(1);
  	char	   *filename = NULL;
  	DefElem    *force_not_null = NULL;
+ 	DefElem    *force_null = NULL;
  	List	   *other_options = NIL;
  	ListCell   *cell;
  
*** file_fdw_validator(PG_FUNCTION_ARGS)
*** 243,252 
  		}
  
  		/*
! 		 * Separate out filename and force_not_null, since ProcessCopyOptions
! 		 * won't accept them.  (force_not_null only comes in a boolean
! 		 * per-column flavor here.)
  		 */
  		if (strcmp(def->defname, "filename") == 0)
  		{
  			if (filename)
--- 244,253 
  		}
  
  		/*
! 		 * Separate out filename and column-specific options, since
! 		 * ProcessCopyOptions won't accept them.
  		 */
+ 
  		if (strcmp(def->defname, "filename") == 0)
  		{
  			if (filename)
*** file_fdw_validator(PG_FUNCTION_ARGS)
*** 255,270 
  						 errmsg("conflicting or redundant options")));
  			filename = defGetString(def);
  		}
  		else if (strcmp(def->defname, "force_not_null") == 0)
  		{
  			if (force_not_null)
  				ereport(ERROR,
  						(errcode(ERRCODE_SYNTAX_ERROR),
! 						 errmsg("conflicting or redundant options")));
  			force_not_null = 

Re: [HACKERS] Hot standby doesn't come up on some situation.

2014-03-02 Thread Kyotaro HORIGUCHI
Hello,

 | * as if we had just replayed the record before the REDO location
 | * (or the checkpoint record itself, if it's a shutdown checkpoint).
 
 The test script following raises assertion failure. It's added
 with 'non-shutdown' checkpoint' just before shutting down
 immediately. Starting server aborts with the following message.
 
 | LOG:  database system was not properly shut down; automatic recovery in progress
 | TRAP: FailedAssertion("!(((oldestActiveXID) != ((TransactionId) 0)))", File: "xlog.c", Line: 6771)
 | LOG:  startup process (PID 28561) was terminated by signal 6: Aborted

This is because the checkpoint was taken with 'wal_level = minimal'.
The server comes up correctly if it is started with 'wal_level =
hot_standby' from the beginning.
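
For what it's worth, a minimal sketch of the corrected setup (same paths as in
the script above; treat the exact settings as my assumption rather than part
of the original test):

# set wal_level before the very first start, so the pre-crash checkpoint
# is already taken with wal_level = hot_standby
initdb
cat >> $PGDATA/postgresql.conf <<EOF
wal_level = hot_standby
hot_standby = on
EOF
pg_ctl start -w
psql postgres -c 'checkpoint'
pg_ctl stop -m immediate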

It looks like a mere mis-operation. The log message looks unfriendly,
but I'm not sure it needs changing.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] gaussian distribution pgbench

2014-03-02 Thread KONDO Mitsumasa

(2014/03/02 22:32), Fabien COELHO wrote:

Alvaro Herrera alvhe...@2ndquadrant.com writes:

Seems that in the review so far, Fabien has focused mainly on the
mathematical properties of the new random number generation.  That seems
perfectly fine, but no comment has been made about the chosen UI for the
feature.
 Per the few initial messages in the thread, in the patch as submitted you ask
for a gaussian random number by using \setgaussian, and exponential via
\setexp.  Is this the right UI?

I thought it would be both concise & clear to have that as another form of
\set*.

Yeah, but we only get two or three of them, concise or not. So I agree with
discussing the UI.


There is an additional argument expected. That would make:

   \setrandom foo 1 10 [uniform]
   \setrandom foo 1 :size gaussian 3.6
   \setrandom foo 1 100 exponential 7.2
It's a good design. I think it will also lower the overhead in pgbench's parsing,
because fewer string comparisons will be needed (maybe). And I'd like to remove
[uniform], because we have to keep compatibility with old scripts anyway, and a
random function gives a uniform distribution by default in the common sense of
programming.


However, the new grammar is a little longer in user scripts. It seems to be a
trade-off between script readability and the cost of writing it.
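
To make the trade-off concrete, a custom script using the proposed grammar
might look like the following sketch (hypothetical, since the syntax is still
under discussion; the tables are the standard pgbench ones and the 3.6
threshold is taken from the examples above):

\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts gaussian 3.6
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;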


Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Patch FORCE_NULL option for copy COPY in CSV mode

2014-03-02 Thread Andrew Dunstan


On 03/02/2014 10:06 PM, Ian Lawrence Barwick wrote:

2014-03-02 8:26 GMT+09:00 Andrew Dunstan and...@dunslane.net:

On 01/29/2014 10:59 AM, Ian Lawrence Barwick wrote:

2014/1/29 Ian Lawrence Barwick barw...@gmail.com:

2014-01-29 Andrew Dunstan and...@dunslane.net:

On 01/28/2014 05:55 AM, Ian Lawrence Barwick wrote:


Hi Payal

Many thanks for the review, and my apologies for not getting back to
you earlier.

Updated version of the patch attached with suggested corrections.

On a very quick glance, I see that you have still not made adjustments
to
contrib/file_fdw to accommodate this new option. I don't see why this
COPY
option should be different in that respect.

Hmm, that idea seems to have escaped me completely. I'll get onto it
forthwith.

Striking while the keyboard is hot... version with contrib/file_fdw
modifications
attached.



I have reviewed this. Generally it's good, but the author has made a
significant error - the idea is not to force a quoted empty string to null,
but to force a quoted null string to null, whatever the null string might
be. The default case has these the same, but if you specify a non-empty null
string they aren't.

The author slaps himself on the forehead while regretting he was temporally
constricted when dealing with the patch and never thought to look beyond
the immediate use case.

Thanks for the update, much appreciated.


That difference actually made the file_fdw regression results plain wrong,
in my view, in that they expected a quoted empty string to be turned to null
even when the null string was something else.

I've adjusted this and the docs and propose to apply the attached patch in
the next day or two unless there are any objections.

Unless I'm overlooking something, output from SELECT * FROM text_csv;
in 'output/file_fdw.source' still needs updating?





Yes, you're right. Will fix.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal, patch: allow multiple plpgsql plugins

2014-03-02 Thread Pavel Stehule
On 2. 3. 2014 at 21:55, Marko Tiikkaja ma...@joh.to wrote:

 On 3/2/14, 8:47 PM, Pavel Stehule wrote:

 2014-03-02 19:59 GMT+01:00 Marko Tiikkaja ma...@joh.to:

 Undefined symbols for architecture x86_64:

_plpgsql_register_plugin, referenced from:
__PG_init in plpgtest.o

 I'm guessing this is because PL/PgSQL is a shared library and not in
core?
   Is there a way around this?


 yes, PLpgSQL is not referenced and, if I remember well, clang is too
 restrictive.

 probably

http://stackoverflow.com/questions/17281901/ignoring-an-undefined-symbol-in-a-dynamic-library-from-xcode

 or you can add a reference on plpgsql to your Makefile


 That seems unbelievably ugly, but worse, loading the library in
shared_preload_libraries doesn't work:

 14782  FATAL:  could not load library /usr/local/pgsql/lib/plpgtest.so:
dlopen(/usr/local/pgsql/lib/plpgtest.so, 10): Symbol not found:
_plpgsql_register_plugin
   Referenced from: /usr/local/pgsql/lib/plpgtest.so
   Expected in: flat namespace
  in /usr/local/pgsql/lib/plpgtest.so

 I even tried putting plpgsql.so before it in the list, but no go.



 Regards,
 Marko Tiikkaja
For the moment, please try using LOAD plpgsql.

Regards

pavel


Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Tom Lane
Noah Misch n...@leadboat.com writes:
 Concerning the immediate fix for non-Windows systems, does any modern system
 ignore modes of Unix domain sockets?  It appears to be a long-fixed problem:

What I was envisioning was that we'd be relying on the permissions of the
containing directory to keep out bad guys.  Permissions on the socket
itself might be sufficient, but what does it save us to assume that?
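
For illustration, a minimal sketch of the containing-directory approach for a
throwaway test instance (paths and options here are illustrative, not what
pg_regress actually does):

# assumes PGDATA already points at a freshly initdb'd cluster
SOCKDIR=$(mktemp -d /tmp/pg_check_sock.XXXXXX)   # mktemp -d creates it mode 0700
pg_ctl start -w -o "-k $SOCKDIR -c listen_addresses=''"
psql -h "$SOCKDIR" -d postgres -c 'SELECT 1'
pg_ctl stop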

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Triggers on foreign tables

2014-03-02 Thread Ronan Dunklau
Hello.

Did you have time to review the latest version of this patch? Is there
anything I can do to get it ready for a committer?

Thank you for all the work performed so far.




On Tuesday 4 February 2014 13:16:22, Ronan Dunklau wrote:
 On Monday 3 February 2014 23:28:45, Noah Misch wrote:
  On Sun, Feb 02, 2014 at 11:53:51AM +0100, Ronan Dunklau wrote:
   On Thursday 30 January 2014 14:05:08, Noah Misch wrote:
On Thu, Jan 23, 2014 at 03:17:35PM +0100, Ronan Dunklau wrote:
 What do you think about this approach ? Is there something I missed
 which
 would make it not sustainable ?

Seems basically reasonable.  I foresee multiple advantages from having
one
tuplestore per query level as opposed to one for the entire
transaction.
You would remove the performance trap of backing up the tuplestore by
rescanning. It permits reclaiming memory and disk space in
AfterTriggerEndQuery() rather than at end of transaction.  You could
remove
ate_ptr1 and ate_ptr2 from AfterTriggerEventDataFDW and just store the
flags word: depending on AFTER_TRIGGER_2CTIDS, grab either the next
one
or
the next two tuples from the tuplestore.  Using work_mem per
AfterTriggerBeginQuery() instead of per transaction is no problem. 
What
do
you think of that design change?
   
   I agree that this design is better, but I have some objections.
   
   We can remove ate_ptr2 and rely on the AFTER_TRIGGER_2CTIDS flag, but
   the
   rescanning and ate_ptr1 (renamed ate_tupleindex in the attached patch)
   can't go away.
   
   Consider for example the case of a foreign table with more than one
   AFTER
   UPDATE triggers. Unless we store the tuples once for each trigger, we
   will
   have to rescan the tuplestore.
  
  Will we?  Within a given query level, when do (non-deferred) triggers
  execute in an order other than the enqueue order?
 
 Let me explain what I had in mind.
 
 Looking at the code in AfterTriggerSaveEvent:
 
 - we build a template AfterTriggerEvent, and store the tuple(s)
 - for each suitable after trigger that matches the trigger type, as well as
 the WHEN condition if any, a copy of the previously built AfterTriggerEvent
 is queued
 
 Later, those events are fired in order.
 
 This means that more than one event can be fired for one tuple.
 
 Take this example:
 
 CREATE TRIGGER trig_row_after1
 AFTER UPDATE ON rem2
 FOR EACH ROW
 WHEN (NEW.f1 % 5 < 3)
 EXECUTE PROCEDURE trigger_func('TRIG1');
 
 CREATE TRIGGER trig_row_after2
 AFTER UPDATE ON rem2
 FOR EACH ROW
 WHEN (NEW.f1 % 5 < 4)
 EXECUTE PROCEDURE trigger_func('TRIG2');
 
 UPDATE rem2 set f2 = 'something';
 
 Assuming 5 rows with f1 as a serial, the fired AfterTriggerEvent's
 ate_tupleindex will be, in that order:
 
 0-0-2-2-4-8-8
 
 So, at least a backward seek is required for trig_row_after2 to be able to
 retrieve a tuple that was already consumed when firing trig_row_after1.
 
 On a side note, this made me realize that it is better to avoid storing a
 tuple entirely if there is no enabled trigger (the f1 = 4 case above). The
 attached patch does that, so the previous sequence becomes:
 
 0-0-2-2-4-6-6
 
 It also avoids initializing a tuplestore at all if it's not needed.
 
   To mitigate the effects of this behaviour, I added the option to perform
   a reverse_seek when the looked-up tuple is nearer to the current index
   than to the start.
  
  If there's still a need to seek within the tuplestore, that should get rid
  of the O(n^2) effect.  I'm hoping that per-query-level tuplestores will
  eliminate the need to seek entirely.
 
 I think the only case where seeking is still needed is when more than one
 after trigger has to be fired, since the abovementioned change removes the
 need to seek just to skip tuples.
 
If you do pursue that change, make sure the code still does the right
thing
when it drops queued entries during subxact abort.
   
   I don't really understand what should be done at that stage. Since
   triggers on foreign tables are not allowed to be deferred, everything
   should be cleaned up at the end of each query, right ? So, there
   shouldn't be any queued entries.
  
  I suspect that's right.  If you haven't looked over
  AfterTriggerEndSubXact(), please do so and ensure all its actions still
  make sense in the context of this new kind of trigger storage.
 
 You're right, I missed something here. When aborting a subxact, the
 tuplestores for queries below the subxact query depth should be cleaned, if
 any, because AfterTriggerEndQuery has not been called for the failing query.
 
 The attached patch fixes that.
 
 The attached patch checks this, and adds documentation for this
 limitation. I'm not really sure how to phrase that correctly in the error
 message and the documentation. One can store at most INT_MAX foreign
 tuples, which means that at most INT_MAX insert or delete or 

Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 The only way I can see this being of real use to an attacker is if they
 could use this exploit to create a wormed version of PostgreSQL on the
 target build system.  Is that possible?

It's theoretically possible, since having broken into the build user's
account they could modify the already-built-but-not-yet-packaged PG
executables.

Having said that, though, I concur with the feeling that this probably
isn't a useful exploit in practice.  On Red Hat's build systems, for
example, different packages are built in different chroots.  So even if
a malicious package is being built concurrently, it could not reach the
postmaster's socket.  A breakin would only be possible for somebody who
had outside-the-chroots control of the build machine ... in which case
they can hack pretty much any built package pretty much any way they
want, without need for anything as fiddly as this.

Other vendors might do things differently, but it still seems likely
that there would be easier exploits available to anyone who's managed
to get control on a machine used for package building.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-02 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 Noah Misch n...@leadboat.com writes:
  Concerning the immediate fix for non-Windows systems, does any modern system
  ignore modes of Unix domain sockets?  It appears to be a long-fixed problem:
 
 What I was envisioning was that we'd be relying on the permissions of the
 containing directory to keep out bad guys.  Permissions on the socket
 itself might be sufficient, but what does it save us to assume that?

Agreed- the general approach to this, from what I've seen, is to handle
it with the directory.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] gaussian distribution pgbench

2014-03-02 Thread Fabien COELHO



   \setrandom foo 1 10 [uniform]
   \setrandom foo 1 :size gaussian 3.6
   \setrandom foo 1 100 exponential 7.2
It's a good design. I think it will also lower the overhead in pgbench's
parsing, because fewer string comparisons will be needed (maybe). And I'd
like to remove [uniform], because we have to keep compatibility with old
scripts anyway, and a random function gives a uniform distribution by default
in the common sense of programming.


I just put uniform as an optional default, hence the brackets.

Otherwise, what I would have in mind if this were designed from
scratch:


  \set foo 124
  \set foo "string value" (?)
  \set foo :variable
  \set foo 12 + :shift

And then

  \set foo uniform 1 10
  \set foo gaussian 1 10 4.2
  \set foo exponential 1 100 5.2

or maybe functions could be prepended with something like uniform.
But that would be for another life :-)

However, the new grammar is a little longer in user scripts. It seems to be a
trade-off between script readability and the cost of writing it.


Yep.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal, patch: allow multiple plpgsql plugins

2014-03-02 Thread Pavel Stehule
2014-03-03 6:09 GMT+01:00 Pavel Stehule pavel.steh...@gmail.com:


 On 2. 3. 2014 at 21:55, Marko Tiikkaja ma...@joh.to wrote:

 
  On 3/2/14, 8:47 PM, Pavel Stehule wrote:
 
  2014-03-02 19:59 GMT+01:00 Marko Tiikkaja ma...@joh.to:
 
  Undefined symbols for architecture x86_64:
 
 _plpgsql_register_plugin, referenced from:
 __PG_init in plpgtest.o
 
  I'm guessing this is because PL/PgSQL is a shared library and not in
 core?
Is there a way around this?
 
 
  yes, PLpgSQL is not referenced and, if I remember well, clang is too
  restrictive.
 
  probably
 
 http://stackoverflow.com/questions/17281901/ignoring-an-undefined-symbol-in-a-dynamic-library-from-xcode
 
  or you can add a reference on plpgsql to your Makefile
 
 
  That seems unbelievably ugly, but worse, loading the library in
 shared_preload_libraries doesn't work:
 
  14782  FATAL:  could not load library
 /usr/local/pgsql/lib/plpgtest.so:
 dlopen(/usr/local/pgsql/lib/plpgtest.so, 10): Symbol not found:
 _plpgsql_register_plugin
Referenced from: /usr/local/pgsql/lib/plpgtest.so
Expected in: flat namespace
   in /usr/local/pgsql/lib/plpgtest.so
 
  I even tried putting plpgsql.so before it in the list, but no go.

 
 
  Regards,
  Marko Tiikkaja
 For the moment, please try using LOAD plpgsql.

I thought about it this morning - we should move plugin registration into
core; it should work like the ddl loader does (for contrast, a sketch of the
current rendezvous-variable mechanism is below).

a) it can solve the problems with loading
b) it can be usable for all PL environments.
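
For context, this is roughly how a plugin attaches to PL/pgSQL today via the
rendezvous variable; a sketch only, assuming the module is built with
plpgsql's headers on the include path, and with invented names (plugin_funcs,
my_func_beg):

/*
 * Minimal sketch of the existing rendezvous-variable mechanism from
 * plpgsql.h.  Only one plugin can attach this way, which is the
 * limitation being discussed here.
 */
#include "postgres.h"
#include "fmgr.h"
#include "plpgsql.h"

PG_MODULE_MAGIC;

static void my_func_beg(PLpgSQL_execstate *estate, PLpgSQL_function *func);

static PLpgSQL_plugin plugin_funcs = {
	NULL,			/* func_setup */
	my_func_beg,	/* func_beg */
	NULL,			/* func_end */
	NULL,			/* stmt_beg */
	NULL			/* stmt_end */
};

void
_PG_init(void)
{
	PLpgSQL_plugin **plugin_ptr;

	/* looked up by name at load time, so no link-time symbol is needed */
	plugin_ptr = (PLpgSQL_plugin **) find_rendezvous_variable("PLpgSQL_plugin");
	*plugin_ptr = &plugin_funcs;
}

static void
my_func_beg(PLpgSQL_execstate *estate, PLpgSQL_function *func)
{
	(void) estate;
	(void) func;
	elog(NOTICE, "entering a PL/pgSQL function");
}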

Pavel



 Regards

 pavel