Re: [HACKERS] CREATE TABLE IF NOT EXISTS AS

2013-11-20 Thread Craig Ringer
On 11/20/2013 03:41 PM, Pavel Stehule wrote:
 
 It'd be great if there was a sane way to implement CREATE OR REPLACE
 TABLE - since that's what people really want a lot of the time: "ensure
 that at the end of this command the table looks like this". There's just
 no sane way to do that for a non-empty table.
 
 
 I disagree - CREATE OR REPLACE FUNCTION makes sense, because the new function
 can have the same interface (and will be overwritten) or a different one (and
 there will be two functions). So with CREATE OR REPLACE TABLE there are two
 new questions - does the new table have to respect the original interface, and
 what about the content? I don't think CREATE OR REPLACE TABLE is a good idea.

Er, that's what I was saying. It'd be great if it were possible to do it
sanely, but it isn't.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Craig Ringer
Hi all

I'm spending a lot of time staring at parse and plan trees at the
moment, and I'm finding reading them rather cumbersome.

For those of you who do this a lot, do you use any sort of tooling to
help you out? Just being able to collapse and expand subtrees would be a
lifesaver.

If it's a hassle for others too, how would you feel about using json as
an output format in future releases? It'd be pretty simple to retrofit
by the looks, though updating the regression tests would be a PITA. My
main concern would be effects on back-patching.

If the consensus opinion on that is "hell no", or if it's a much bigger
job than I thought, I'll just write a converter that ingests node trees
and spits out json I can view using existing tools.
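
(A minimal sketch of the post-processing idea -- not the JSON converter
itself -- re-indenting the {...} node tree by brace depth so a folding
editor can collapse subtrees; it ignores quoting and other outfuncs.c
details:)

#include <stdio.h>

/*
 * Toy sketch: re-indent a debug_print_plan node tree by {...} depth.
 * Illustrative only.
 */
int
main(void)
{
    int     c;
    int     depth = 0;

    while ((c = getchar()) != EOF)
    {
        if (c == '{')
            printf("\n%*s{", depth++ * 2, "");
        else if (c == '}')
        {
            depth--;
            printf("}\n%*s", depth * 2, "");
        }
        else
            putchar(c);
    }
    return 0;
}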

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Antonin Houska
On 11/20/2013 09:12 AM, Craig Ringer wrote:
 Hi all
 
 I'm spending a lot of time staring at parse and plan trees at the
 moment, and I'm finding reading them rather cumbersome.
 
 For those of you who do this a lot, do you use any sort of tooling to
 help you out?

vim editor. The '%' shortcut can be used to jump between opening and
closing brackets and thus skip smaller or bigger parts of the output.
IMO, this output is primarily for hackers (as opposed to application
developers or users), and a hacker should know at least a few vim shortcuts
anyway :-)

// Tony





Re: [HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Craig Ringer
On 11/20/2013 04:12 PM, Craig Ringer wrote:
 Hi all
 
 I'm spending a lot of time staring at parse and plan trees at the
 moment, and I'm finding reading them rather cumbersome.
 
 For those of you who do this a lot, do you use any sort of tooling to
 help you out? Just being able to collapse and expand subtrees would be a
 lifesaver.
 
 If it's a hassle for others too, how would you feel about using json as
 an output format in future releases? It'd be pretty simple to retrofit
 by the looks, though updating the regression tests would be a PITA. My
 main concern would be effects on back-patching.

Inevitably, as soon as I post I realise why it's "hell no".

The same representation is used for storing rules. So it can't be
changed for BC reasons and compactness/performance.

I'll just post-process.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Craig Ringer
On 11/20/2013 04:22 PM, Antonin Houska wrote:
 vim editor. The '%' shortcut can be used to jump between opening and
 closing brackets and thus skip smaller or bigger parts of the output.
 IMO, this output is primarily for hackers (as opposed to application
 developers or users) and hacker should know at least a few vim shortcuts
 anyway :-)

That's what I'm currently doing; I just wanted something that makes it
quicker and easier. Jumping around the tree is good, but easy
collapse/expand would be much better.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] CLUSTER FREEZE

2013-11-20 Thread David Rowley
On Tue, Nov 19, 2013 at 11:35 PM, David Rowley dgrowle...@gmail.com wrote:

 I think that the patch should include some sort of note in the documentation
 to say that CLUSTER performs freezing of tuples. I've attached a patch
 which adds something there, but I'm not 100% sure it is the right thing.
 Perhaps it should just read:

 "CLUSTER also performs aggressive freezing of tuples, similar to VACUUM
 FULL FREEZE."

 Although it's not exactly the same, as you can perform a VACUUM FULL FREEZE
 on a relation which does not have the clustered index set.

 I'll delay a bit to see if anyone else has any comments about what the
 docs should read, but I think it is pretty much ready for a committer's eyes.


Hi Thomas,

I'm just going to mark the patch as "waiting for author" for now until you
get the chance to add some changes to the documentation.

Everything else looks ok.

Regards

David Rowley


 Regards

 David Rowley




 Thanks
 Thomas Munro







[HACKERS] VACUUM for TOASTed objects

2013-11-20 Thread Soroosh Sardari
Hi

The vacuum procedure rewrites a table, but what happens if the table
has some TOASTed columns?

Please help me find the module or function in the source code which is
responsible for vacuuming the TOAST relation.

Regards,
Soroosh Sardari


Re: [HACKERS] Autoconf 2.69 update

2013-11-20 Thread Oskari Saarenmaa

On 15.11.2013 05:00, Peter Eisentraut wrote:

I'm proposing that we upgrade our Autoconf to 2.69, which is the latest
right now (release date 2012-04-24).  There are no changes in the source
needed, just tweak the version number in configure.in (see below) and
run autoreconf.  I've compared the configure output before and after on
a few boxes, and there were no significant changes.


+1.  Autoconf 2.63 doesn't seem to be available as a package on recent
Linux distributions, so this would make things easier for me.



-m4_if(m4_defn([m4_PACKAGE_VERSION]), [2.63], [], [m4_fatal([Autoconf version 2.63 is required.
+m4_if(m4_defn([m4_PACKAGE_VERSION]), [2.69], [], [m4_fatal([Autoconf version 2.69 is required.
  Untested combinations of 'autoconf' and PostgreSQL versions are not
  recommended.  You can remove the check from 'configure.in' but it is then
  your responsibility whether the result works or not.])])


ISTM autoconf has been better with backwards compatibility lately.
Maybe the fatal error could be changed to a warning, and/or the check for
version == 2.63 be replaced with a check for version >= 2.63?  Without a
strict requirement for a certain autoconf version it would make sense to
also drop the built autoconf artifacts from the git repository, which
would make diffs shorter and easier to review when touching configure.in.


That said, it looks like autoconf 2.67 (from Debian 6) can't handle the "="
in a cflags test, so maybe not...


/ Oskari

***

# Generated by GNU Autoconf 2.67 for PostgreSQL 9.4devel.

...

checking whether gcc supports -fexcess-precision=standard...
./configure: line 4528: pgac_cv_prog_cc_cflags__fexcess_precision_standard=no: command not found






Re: [HACKERS] -d option for pg_isready is broken

2013-11-20 Thread Fujii Masao
On Wed, Nov 20, 2013 at 3:12 AM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Nov 19, 2013 at 1:10 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Tue, Nov 19, 2013 at 11:51 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Nov 15, 2013 at 9:01 PM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
 On Wed, Nov 13, 2013 at 9:37 PM, Josh Berkus j...@agliodbs.com wrote:
 handyrep@john:~/handyrep$ pg_isready --version
 pg_isready (PostgreSQL) 9.3.1

 handyrep@john:~/handyrep$ pg_isready -h john -p 5432 -U postgres -d
 postgres -q
 pg_isready: missing "=" after "postgres" in connection info string

 handyrep@john:~/handyrep$ pg_isready --host=john --port=5432
 --user=postgres --dbname=postgres
 pg_isready: missing "=" after "postgres" in connection info string

 handyrep@john:~/handyrep$ pg_isready -h john -p 5432 -U postgres
 john:5432 - accepting connections

 so apparently the -d option:

 a) doesn't work, and
 b) doesn't do anything

 I suggest simply removing it from the utility.

 I'll note that the -U option doesn't appear to do anything relevant
 either, but at least it doesn't error unnecessarily:

 handyrep@john:~/handyrep$ pg_isready -h john -p 5432 -U no_such_user
 john:5432 - accepting connections


 The attached patch fix it.

 Well, there still seem to be some problems.

 [rhaas pgsql]$ src/bin/scripts/pg_isready -d host=sgdasasg
 /tmp:5432 - no attempt

 I added some debugging instrumentation and it looks like there's a
 problem with this code:

 if (strcmp(def->keyword, "hostaddr") == 0 ||
     strcmp(def->keyword, "host") == 0)

 The problem is that we end up executing the code contained in that
 block twice, once for "host" and once for "hostaddr".  Thus:

 [rhaas pgsql]$ src/bin/scripts/pg_isready -d host=sgdasasg
 def->keyword=host pghost_str=sgdasasg
 def->keyword=hostaddr pghost_str=/tmp
 def->keyword=port pgport_str=5432
 /tmp:5432 - no attempt

 Oops.

 Nevertheless, that's kind of a separate problem, so we could address it
 in a separate commit.  But even ignoring that, this still isn't right:
 according to the fine documentation, -d will be expanded not only if
 it contains an "=", but also if it starts with a valid URI prefix.  This
 would break that documented behavior.

 http://www.postgresql.org/docs/9.3/static/app-pg-isready.html

 Attached is the updated version of the patch. Is there any other bug?

 Not that I can see, but it's not very future-proof.  If libpq changes
 its idea of what will provoke database-name expansion, this will again
 be broken.
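
(A minimal sketch of the documented expansion rule mentioned above; the
helper name is hypothetical, not part of the patch:)

#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical helper: per the documentation, a -d value is treated as a
 * connection string when it contains "=" or starts with one of the valid
 * URI prefixes, "postgresql://" or "postgres://".
 */
static bool
looks_like_conninfo(const char *dbname)
{
    return strchr(dbname, '=') != NULL ||
        strncmp(dbname, "postgresql://", strlen("postgresql://")) == 0 ||
        strncmp(dbname, "postgres://", strlen("postgres://")) == 0;
}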

Yeah, I agree that we should make the logic of pg_isready more future-proof
in 9.4dev. One idea is to expose internal_ping() as a libpq function. Then,
instead of just calling PQpingParams(), we can do PQconnectStartParams(),
libpq-function-version-of-internal_ping(), PQhost(), PQport(), and PQfinish().
That is, we get to know the host and port information by calling PQhost()
and PQport() after trying to connect to the server.
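
(A rough sketch of that sequence, assuming a hypothetical libpq export of
internal_ping() -- here called PQpingStartedConn(); PQconnectStartParams(),
PQhost(), PQport() and PQfinish() already exist, and keywords/values are
assumed to hold the parsed connection options:)

PGconn     *conn = PQconnectStartParams(keywords, values, 1);
PGPing      status = PQpingStartedConn(conn);   /* hypothetical */

/* after the connection attempt, libpq knows the resolved host and port */
printf("%s:%s - %s\n", PQhost(conn), PQport(conn),
       status == PQPING_OK ? "accepting connections" : "no attempt");
PQfinish(conn);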

But we cannot use this idea in 9.3 because it's too late to add a new
libpq function there. Also, I don't think a minor version release should
change that libpq logic in 9.3. So, barring any objections, I will commit
the latest version of the patch to 9.3.

Regards,

-- 
Fujii Masao




Re: [HACKERS] COPY table FROM STDIN doesn't show count tag

2013-11-20 Thread Amit Khandekar
On 19 November 2013 16:05, Rajeev rastogi rajeev.rast...@huawei.com wrote:

  On 19 November 2013, Amit Khandekar wrote:

 On 18 November 2013 18:00, Rajeev rastogi rajeev.rast...@huawei.com
 wrote:

 On 18 November 2013, Amit Khandekar wrote:

  Please find the patch for the same and let me know your suggestions.

 In this call:

   success = handleCopyIn(pset.db, pset.cur_cmd_source,
                          PQbinaryTuples(*results), &intres) && success;

   if (success && intres)
       success = PrintQueryResults(intres);

 Instead of handling the result status this way, what if we use the
 ProcessResult() argument 'result' to pass back the COPY result status to
 the caller? We already call PrintQueryResults(results) after the
 ProcessResult() call, so we don't have to have a COPY-specific
 PrintQueryResults() call. Also, if there is a subsequent SQL command in the
 same query string, the consequence of the patch is that the client prints
 both the COPY output and the last command's output. So my suggestion would
 also allow us to be consistent with the general behaviour that only the last
 SQL command's output is printed in case of multiple SQL commands. Here is
 how it gets printed with your patch:

 Thank you for the valuable comments. Your suggestion is absolutely correct.

 psql -d postgres -c "\copy tab from '/tmp/st.sql' delimiter ' ';
 insert into tab values ('lll', 3)"

 COPY 1

 INSERT 0 1

 This is not harmful, but just a matter of consistency.

 I hope you meant to write the test case as psql -d postgres -c "\copy tab
 from stdin; insert into tab values ('lll', 3)", because if we are reading
 from a file, the above issue does not come up.

 I meant COPY with a slash. \COPY is equivalent to COPY FROM STDIN. So the
 issue can also be reproduced by :

 \COPY tab from 'client_filename' ...



 I have modified the patch as per your comment, and the same is attached with
 this mail.



 Thanks. The COPY FROM looks good.

 OK..Thanks



 With the patch applied, the \COPY TO 'data_file' command outputs the COPY
 status into the data file, instead of printing it in the psql session.



 postgres=# \copy tab to '/tmp/fout';

 postgres=#



 $ cat /tmp/fout

 ee  909

 COPY 1

 This is probably because client-side COPY overrides the pset.queryFout
 with its own destination file, and while printing the COPY status,
 the overridden file pointer is not yet reverted back.



 This looks to be an issue even without our new patch. For example, I tried
 the following command and the output was as follows:

 rajeev@linux-ltr9:~/9.4gitcode/install/bin> ./psql -d postgres -c "\copy
 tbl to 'new.txt'; insert into tbl values(55);"

 rajeev@linux-ltr9:~/9.4gitcode/install/bin> cat new.txt

 5
 67
 5
 67
 2
 2
 99
 1
 1
 INSERT 0 1


Ok. Yes, it is an existing issue. Because we are now printing the COPY
status even for COPY TO, the existing issue surfaces too easily with the
patch. \COPY TO is a pretty common scenario, and it does not need a
subsequent command to reproduce the issue; just a single \COPY TO
command reproduces it.



 I have fixed the same as per your suggestion by resetting
 pset.queryFout after the function call "handleCopyOut".


! pset.queryFout = stdout;

The original pset.queryFout may not be stdout; the psql -o option can
override the stdout default.

I think solving the \COPY TO part is going to be a different (and more
involved) issue than the COPY FROM one. Even if we manage to revert
back the queryFout, I think ProcessResult() is not the right place to do
it. ProcessResult() should not assume that somebody else has changed
queryFout. Whoever has changed it should revert it. Currently, do_copy() is
indeed doing this correctly:

save_file = *override_file;
*override_file = copystream;
success = SendQuery(query.data);
*override_file = save_file;

But the way SendQuery() itself processes the results and prints them, is
conflicting with the above.

So I think it is best to solve this as a different issue, and we should,
for this commitfest, fix only COPY FROM. Once the existing \COPY issue is
solved, only then can we start printing the \COPY TO status as well.




 Please let me know in-case of any other issues.



 Thanks and Regards,

 Kumar Rajeev Rastogi



Re: [HACKERS] VACUUM for TOASTed objects

2013-11-20 Thread Michael Paquier
On Wed, Nov 20, 2013 at 5:46 PM, Soroosh Sardari
soroosh.sard...@gmail.com wrote:
 Hi

 The vacuum procedure do rewrite for a table but, what happened if the table
 has some TOASTed columns?

 Please, help me to find a module or function in source code which is
 responsible for
 vaccuming the TOAST relation.
A toast table is vacuumed with its master table when vacuum is done on
this master table. Have a look at vacuum.c:1150~:
/*
 * If the relation has a secondary toast rel, vacuum that too while we
 * still hold the session lock on the master table.  Note however that
 * "analyze" will not get done on the toast table.  This is good, because
 * the toaster always uses hardcoded index access and statistics are
 * totally unimportant for toast relations.
 */
if (toast_relid != InvalidOid)
    vacuum_rel(toast_relid, vacstmt, false, for_wraparound);
Regards,
-- 
Michael




Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Heikki Linnakangas

On 19.11.2013 16:22, Andres Freund wrote:

On 2013-11-19 15:20:01 +0100, Andres Freund wrote:

Imo something like the attached patch should be done. The description I came
up with is:

 Fix Hot-Standby initialization of clog and subtrans.


Looks ok for a back-patchable fix.

It's a bit bizarre that the ExtendSUBTRANS loop in 
ProcArrayApplyRecoveryInfo looks different from the one in 
RecordKnownAssignedTransactionIds, but both look correct to me.


In master, it'd be nice to do some further cleanup. Some gripes:

In ProcArrayApplyXidAssignment, I wonder if it would be best to just 
remove the (standbyState == STANDBY_INITIALIZED) check altogether. The 
KnownAssignedXidsRemoveTree() that follows is harmless if there is 
nothing in the known-assigned-xids array (xact_redo_commit does it in 
STANDBY_INITIALIZED state too). The other thing that's done after that 
check is updating lastOverflowedXid, and AFAICS it would be harmless to 
update that, too. It will be overwritten by the 
ProcArrayApplyRecoveryInfo() call that comes later.


Clog, subtrans and multixact are all handled differently. Extensions of 
clog and multixact are WAL-logged, but extensions of subtrans are not. 
They all have a Startup function, but it has a slightly different 
purpose. StartupCLOG only sets latest_page_number, but StartupSUBTRANS 
and StartupMultiXact zero out the current page. For CLOG, the TrimCLOG() 
function does that. Why is clog handled differently from multixact?


StartupCLOG() and StartupMultiXact set latest_page_number, but 
StartupSUBTRANS does not. Is that a problem for subtrans? StartupCLOG() 
and StartupMultiXact() are called at different stages in hot standby - 
StartupCLOG() is called at the beginning of recovery, but 
StartupMultiXact() is only called at end of recovery. Why the 
discrepancy, does latest_page_number need to be set during hot standby 
or not?


I think we should bite the bullet, and WAL-log the extension of 
subtrans, too. Then make the startup and extension procedure for all the 
SLRUs the same.


- Heikki




Re: [HACKERS] New option for pg_basebackup, to specify a different directory for pg_xlog

2013-11-20 Thread Haribabu kommi
On 19 November 2013 19:12 Fujii Masao wrote:
 On Tue, Nov 19, 2013 at 9:14 PM, Haribabu kommi
 haribabu.ko...@huawei.com wrote:
  On 18 November 2013 23:30 Fujii Masao wrote:
  On Tue, Nov 19, 2013 at 12:01 AM, Haribabu kommi
  haribabu.ko...@huawei.com wrote:
 
  Thanks for newer version of the patch!
 
  I found that the empty base directory is created and remains even
  when the same directory is specified in both -D and --xlogdir and
 the
  error occurs.
  I think that it's
  better to throw an error in that case before creating any new
 directory.
  Thought?
 
  Only by creating the base directory can the patch find out whether the
  provided base and xlog directories are the same. To solve this problem
  the following options are possible:
 
  1. No problem, as it is just an empty base directory, so it can be
     reused the next time. Leave it as it is.
  2. Once the error is identified, the base directory can be deleted.
  3. Write a new function to remove the parent references from the
     provided path, to identify whether the absolute paths of the base
     and xlog directories are the same.
 
  Please provide your suggestions.
 
  +xlogdir = get_absolute_path(xlog_dir);
 
  xlog_dir must be an absolute path. ISTM we don't do the above. No?
 
  It is required, as the user can provide a path like
 /home/installation/bin/../bin/data.
  The provided path is considered an absolute path, but while
  comparing it with the data directory path it will not match.
 
 Okay, maybe I understand you. In order to know the real absolute path,
 basically we need to create the directory specified in --xlogdir,
 change the working directory to it and calculate the parent path.
  You're worried about cases like the following, for example.
 Right?
 
 * path containing .. like /aaa/bbb/../ccc is specified in --xlogdir
 * symbolic link is specified in --xlogdir
 
 On the second thought, I'm thinking that it might be overkill to add
 such not simple code for that small benefit.

I tried stat'ing the two directories, but that has a problem on Windows.
So I modified the old approach to detect a limited set of errors. The updated
patch is attached. It will detect and throw an error in the following
scenarios (a sketch of the path comparison idea follows the list):
1. pg_basebackup -D /home/data --xlogdir=/home/data
2. pg_basebackup -D data --xlogdir=/home/data -- home is the CWD
3. pg_basebackup -D ../data --xlogdir=/data -- home is the CWD
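
(A sketch of the path comparison idea only, not the patch itself: assuming
both paths have already been made absolute, canonicalize_path() from
src/port/path.c collapses "." and ".." components, so inputs like
/home/bin/../data and /home/data compare equal. The error wording is
illustrative; basedir and xlog_dir follow pg_basebackup's variable names.)

char        xlog_path[MAXPGPATH];
char        data_path[MAXPGPATH];

strlcpy(xlog_path, xlog_dir, MAXPGPATH);
strlcpy(data_path, basedir, MAXPGPATH);
canonicalize_path(xlog_path);
canonicalize_path(data_path);

if (strcmp(xlog_path, data_path) == 0)
{
    fprintf(stderr, _("%s: transaction log directory \"%s\" must not be equal to the data directory\n"),
            progname, xlog_dir);
    exit(1);
}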

Please let me know your suggestions.

Regards,
Hari babu.


Attachment: UserSpecifiedxlogDir_v6.patch



Re: [HACKERS] Review: pre-commit triggers

2013-11-20 Thread Ian Lawrence Barwick
2013/11/20 Tom Lane t...@sss.pgh.pa.us:
 Robert Haas robertmh...@gmail.com writes:
 On Mon, Nov 18, 2013 at 9:39 AM, Ian Lawrence Barwick barw...@gmail.com 
 wrote:
 I'd expect this to lead to a failed transaction block,
 or at least some sort of notice that the transaction itself
 has been rolled back.

 Ending up in a failed transaction block would be wrong.  If the user
 does a BEGIN, a bunch of stuff, and a COMMIT, they're entitled to
 assume without checking that they are no longer in a transaction
 block.

 Absolutely.  There are plenty of ways to fail at COMMIT already,
 eg deferred foreign key constraints.  This shouldn't act any
 different.

Ah OK, I see how that works. Thanks for the explanation.

Ian Barwick




Re: [HACKERS] Get more from indices.

2013-11-20 Thread Etsuro Fujita
Kyotaro HORIGUCHI wrote:
 Hello, I've totally refactored the series of patches and cut out the
appropriate portion as 'unique (and non-nullable) index stuff'.

 As in the discussion before, it got rid of path distinctness. This patch
works only on index 'full-orderedness', i.e., a unique index on non-nullable
columns.

This is interesting!

I took a quick look at the patch.  Here is my initial comment.  I don't
think it's so good to set the pathkeys of a unique-index path to
query_pathkeys after the scan/join optimization, because then it can't
exploit the full-orderedness property when implementing mergejoins, i.e.,
it can't skip an explicit sort that the property shows to be unnecessary.
So, I think the path's pathkeys should be set to query_pathkeys before the
scan/join optimization, i.e., at index-path creation time.

If you wouldn't mind, I'd like to rework the patch.

Thanks,

Best regards,
Etsuro Fujita





Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-11-20 Thread Bruce Momjian
On Tue, Nov 19, 2013 at 10:21:47PM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Does anyone know if this C comment justifies why ABORT is a NOTICE and
  not WARNING?
 
   /*
    * The user issued ABORT when not inside a transaction. Issue a
    * NOTICE and go to abort state.  The upcoming call to
    * CommitTransactionCommand() will then put us back into the
    * default state.
    */
 
 It's just describing the implementation, not defending the design choice.
 
 My personal standpoint is that I don't care much whether these messages
 are NOTICE or WARNING.  What I'm not happy about is promoting cases that
 have been non-error conditions for years into ERRORs.

I don't remember any cases where that was suggested.

 Now, if we wanted to go the other way (downgrade some ERRORs to WARNINGs),
 that would not create an application compatibility problem in my view.

Yes, that was my initial plan, but many didn't like it.

 And on the third hand, there's Emerson's excellent advice: "A foolish
 consistency is the hobgoblin of little minds."  I'm not convinced that
 trying to make all these cases have the same message level is actually
 a good goal.  It seems entirely plausible that some are more dangerous
 than others.

The attached patch changes ABORT from NOTICE to WARNING, and documents
that all the others are errors.  This top-level logic idea came from Robert
Haas, and it has some level of consistency.

Interesting that ABORT was already documented as returning a warning:

   Issuing <command>ABORT</command> when not inside a transaction does
   no harm, but it will provoke a warning message.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c
new file mode 100644
index 0591f3f..495684e
*** a/src/backend/access/transam/xact.c
--- b/src/backend/access/transam/xact.c
*** PreventTransactionChain(bool isTopLevel,
*** 2961,2966 
--- 2961,2969 
   *	use of the current statement's results.  Likewise subtransactions.
   *	Thus this is an inverse for PreventTransactionChain.
   *
+  *	While top-level transaction control commands (BEGIN/COMMIT/ABORT)
+  *	outside of transactions issue warnings, these generate errors.
+  *
   *	isTopLevel: passed down from ProcessUtility to determine whether we are
   *	inside a function.
   *	stmtType: statement type name, for error messages.
*** UserAbortTransactionBlock(void)
*** 3425,3436 
  
  			/*
  			 * The user issued ABORT when not inside a transaction. Issue a
! 			 * NOTICE and go to abort state.  The upcoming call to
  			 * CommitTransactionCommand() will then put us back into the
  			 * default state.
  			 */
  		case TBLOCK_STARTED:
! 			ereport(NOTICE,
  	(errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION),
  	 errmsg("there is no transaction in progress")));
  			s->blockState = TBLOCK_ABORT_PENDING;
--- 3428,3439 
  
  			/*
  			 * The user issued ABORT when not inside a transaction. Issue a
! 			 * WARNING and go to abort state.  The upcoming call to
  			 * CommitTransactionCommand() will then put us back into the
  			 * default state.
  			 */
  		case TBLOCK_STARTED:
! 			ereport(WARNING,
  	(errcode(ERRCODE_NO_ACTIVE_SQL_TRANSACTION),
  	 errmsg("there is no transaction in progress")));
  			s->blockState = TBLOCK_ABORT_PENDING;
diff --git a/src/test/regress/expected/errors.out b/src/test/regress/expected/errors.out
new file mode 100644
index fa0bd82..4061512
*** a/src/test/regress/expected/errors.out
--- b/src/test/regress/expected/errors.out
*** ERROR:  column name "oid" conflicts with
*** 114,120 
  -- TRANSACTION STUFF
  -- not in a xact
  abort;
! NOTICE:  there is no transaction in progress
  -- not in a xact
  end;
  WARNING:  there is no transaction in progress
--- 114,120 
  -- TRANSACTION STUFF
  -- not in a xact
  abort;
! WARNING:  there is no transaction in progress
  -- not in a xact
  end;
  WARNING:  there is no transaction in progress



Re: [HACKERS] information schema parameter_default implementation

2013-11-20 Thread Rodolfo Campero
Peter,

This patch no longer applies, because CATALOG_VERSION_NO
in src/include/catalog/catversion.h has changed. I touched the patch and
got it to apply without other problems (I haven't built yet).

Regards,


2013/11/14 Peter Eisentraut pete...@gmx.net

 Updated patch attached.

 On Sat, 2013-11-09 at 12:09 +0530, Amit Khandekar wrote:
2) I found the following check a bit confusing, maybe you can make
  it
better
if (!argmodes || argmodes[i] == PROARGMODE_IN || argmodes[i] ==
PROARGMODE_INOUT || argmodes[i] == PROARGMODE_VARIADIC)
  
   Factored that out into a separate helper function.
 
 
   The statement needs to be split into lines of 80-column width.

 done

3) Function level comment for pg_get_function_arg_default() is
missing.
  
   I think the purpose of the function is clear.
 
  Unless a reader goes through the definition, it is not obvious whether
  the second function argument represents the parameter position in
  input parameters or it is the parameter position in *all* the function
  parameters (i.e. proargtypes or proallargtypes). I think at least a
  one-liner description of what this function does should be present.

 done

  
4) You can also add comments inside the function, for example the
comment for the line:
  nth = inputargn - 1 - (proc->pronargs - proc->pronargdefaults);
  
   Suggestion?
 
  Yes, it is difficult to understand the logic behind this
  calculation until we go read the documentation related to
  pg_proc.proargdefaults. I think this should be sufficient:
  /*
   * proargdefaults elements always correspond to the last N input arguments,
   * where N = pronargdefaults. So calculate the nth_default accordingly.
   */

 done
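
(For reference, a worked example of that calculation, with made-up numbers:)

/*
 * Hypothetical function: pronargs = 5 input arguments, of which the last
 * two have defaults, so pronargdefaults = 2.  For the 4th input argument
 * (inputargn = 4, 1-based):
 *
 *     nth = 4 - 1 - (5 - 2) = 0
 *
 * i.e. proargdefaults[0] -- correct, because defaults always attach to
 * the trailing arguments.
 */
nth = inputargn - 1 - (proc->pronargs - proc->pronargdefaults);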

  There should be an initial check to see if nth_arg is passed a
  negative value. It is used as an index into the argmodes array, so a
  negative array index can cause a crash. Because
  pg_get_function_arg_default() can now be called by a user also, we
  need to validate the input values. I am ok with returning with an
  error "Invalid argument".

 done

  ---
  Instead of :
  deparse_expression_pretty(node, NIL, false, false, 0, 0)
  you can use :
  deparse_expression(node, NIL, false, false)
 
 done







-- 
Rodolfo Campero
Anachronics S.R.L.
Tel: (54 11) 4899 2088
rodolfo.camp...@anachronics.com
http://www.anachronics.com


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Andres Freund
On 2013-11-20 12:48:50 +0200, Heikki Linnakangas wrote:
 On 19.11.2013 16:22, Andres Freund wrote:
 On 2013-11-19 15:20:01 +0100, Andres Freund wrote:
 Imo something like the attached patch should be done. The description I came
 up with is:
 
  Fix Hot-Standby initialization of clog and subtrans.
 
 Looks ok for a back-patchable fix.
 
 It's a bit bizarre that the ExtendSUBTRANS loop in
 ProcArrayApplyRecoveryInfo looks different from the one in
 RecordKnownAssignedTransactionIds, but both look correct to me.

That's because of the different upper bounds (nextxid vs. xid), but
yea, I wondered about that as well.

 In master, it'd be nice to do some further cleanup. Some gripes:
 
 In ProcArrayApplyXidAssignment, I wonder if it would be best to just remove
 the (standbyState == STANDBY_INITIALIZED) check altogether. The
 KnownAssignedXidsRemoveTree() that follows is harmless if there is nothing
 in the known-assigned-xids array (xact_redo_commit does it in
 STANDBY_INITIALIZED state too). The other thing that's done after that check
 is updating lastOverflowedXid, and AFAICS it would be harmless to update
 that, too. It will be overwritten by the ProcArrayApplyRecoveryInfo() call
 that comes later.

I was thinking about removing it entirely in the patch, but chose not to
do so. I don't really care which way we go.

 Clog, subtrans and multixact are all handled differently. Extensions of clog
 and multixact are WAL-logged, but extensions of subtrans are not. They all
 have a Startup function, but it has a slightly different purpose.
 StartupCLOG only sets latest_page_number, but StartupSUBTRANS and
 StartupMultiXact zero out the current page. For CLOG, the TrimCLOG()
 function does that. Why is clog handled differently from multixact?

I'd guess clog and multixact are handled differently because multixact
supposedly is never queried during recovery. But I don't think that's
actually still true, thinking of 9.3's changes around fkey locks and
HeapTupleGetUpdateXid().
So it's probably time to split StartupMultiXact similarly to clog's routines.

 StartupCLOG() and StartupMultiXact set latest_page_number, but
 StartupSUBTRANS does not. Is that a problem for subtrans?

I don't think it is; the difference is that StartupSUBTRANS() zeroes out
the current subtrans page, which will set latest_page_number, while the
others access the pages normally, which doesn't set it.

 StartupCLOG() and
 StartupMultiXact() are called at different stages in hot standby -
 StartupCLOG() is called at the beginning of recovery, but StartupMultiXact()
 is only called at end of recovery. Why the discrepancy, does
 latest_page_number need to be set during hot standby or not?

latest_page_number primarily is an optimization, isn't it? Except for a
safeguard check in SimpleLruTruncate() it should only influence victim
buffer initialization. But: slru.c explicitly doesn't initialize
->latest_page_number, which means it's zeroed from a memset slightly
above. Which seems like it might cause problems when performing truncations
during recovery, before the first page is zeroed (which'd set
latest_page_number again).
...
Hm. Do we actually *ever* truncate the multixact slru during recovery?
clog.c's truncations are WAL-logged, TruncateSUBTRANS() is performed by
restartpoints, but there are no callers of TruncateMultiXact except
vac_truncate_clog, and it's not logged? That doesn't seem right.

 I think we should bite the bullet, and WAL-log the extension of subtrans,
 too. Then make the startup and extension procedure for all the SLRUs the
 same.

Hm. I don't really see a big advantage in that? I am all for trying to
bring more symmetry to the startup routines, but I don't think forcing
WAL logging for something scrapped at every restart is necessary for that.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] information schema parameter_default implementation

2013-11-20 Thread Rodolfo Campero
2013/11/20 Rodolfo Campero rodolfo.camp...@anachronics.com

 Peter,

 This patch no longer applies, because CATALOG_VERSION_NO
 in src/include/catalog/catversion.h has changed. I touched the patch and
 got it to apply without other problems (I haven't built yet).


Make fails:
[...]
make -C catalog schemapg.h
make[3]: Entering directory
`/home/rodolfo/trabajo/postgresql/src/backend/catalog'
cd ../../../src/include/catalog && '/usr/bin/perl' ./duplicate_oids
3846
make[3]: *** [postgres.bki] Error 1
[...]

OID = 3846 is duplicated, see:

./src/include/catalog/pg_proc.h:1976:DATA(insert OID = 3846 (
 pg_get_function_arg_default [...]
./src/include/catalog/pg_proc.h:4679:DATA(insert OID = 3846 (
make_date [...]


Re: [HACKERS] COPY table FROM STDIN doesn't show count tag

2013-11-20 Thread Amit Khandekar
On 20 November 2013 17:40, Rajeev rastogi rajeev.rast...@huawei.com wrote:

 You mean to say that I should change the patch to keep only the COPY FROM
 related changes and remove the changes related to COPY TO?

 If yes, then I shall change the patch accordingly and also mention the
 same in the documentation.

 Please let me know about this so that I can share the modified patch.


Right.




 Thanks and Regards,

 Kumar Rajeev Rastogi



Re: [HACKERS] COPY table FROM STDIN doesn't show count tag

2013-11-20 Thread Robert Haas
On Wed, Nov 20, 2013 at 4:56 AM, Amit Khandekar
amit.khande...@enterprisedb.com wrote:
 So I think it is best to solve this as a different issue, and we should ,
 for this commitfest,  fix only COPY FROM. Once the \COPY existing issue is
 solved, only then we can start printing the \COPY TO status as well.

I actually think that we should probably fix the \COPY issue first.
Otherwise, we may end up (for example) changing COPY FROM in one
release and COPY TO in the next release, and that would be annoying.
It does cause application compatibility problems to some degree when
we change things like this, so it's useful to avoid doing it multiple
times.  And I can't really see a principled reason for COPY FROM and
COPY TO to behave differently, either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] additional json functionality

2013-11-20 Thread Robert Haas
On Tue, Nov 19, 2013 at 1:43 PM, David Johnston pol...@yahoo.com wrote:
 IMO A reasonable default cast function should error if the json contents
 require anything more than a straight parse to be stored into jsonb.  If the
 user still needs to make the conversion we should have a standard and
 configurable parser function with json input and jsonb output.  In this case
 the key-keep options would be keep first encountered or keep last
 encountered or fail on duplicate the last of which would be the default.

 I have not really pondered storing scalars into jsonb but before pondering
 usability are there any technical concerns.  If the goal is to share the
 backend with hstore then current hstore does not allow for this and so the
 json aspect would either transfer back over or it would need customized
 code.

I confess to being a bit perplexed by why we want hstore and json to
share a common binary format.  hstore doesn't store hierarchical data;
json does.  If we design a binary format for json, doesn't that just
obsolete hstore?  Why go to a lot of trouble to extend our home-grown
format if we've got a standard format to plug into?

The thing that's really missing in all of these discussions (AFAICS)
is the issue of creating index support for these types.  If using some
variant of the existing hstore format makes that easier, then I guess
I understand the connection - but I'm not sure why or how that would
be the case, and it would be nice to make the connection more
explicit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Dimitri Fontaine
Craig Ringer cr...@2ndquadrant.com writes:
 That's what I'm currently doing, I just wanted something that makes it
 quicker and easier. Jumping around the tree is good, but easy
 collapse/expand would be much better.

As I'm using psql under an Emacs M-x shell buffer, I wanted to
experiment with your problem here, so that fellow PostgreSQL hackers
using Emacs too can benefit already. I'm sure most already have some
setup for that, but maybe not.

Using the external package hide-region as documented at the following
place solved it:

  http://www.emacswiki.org/emacs/HideRegion

Now I can easily collapse and expand regions directly from within the
M-x shell psql buffer, no copy-paste, no effort needed. When the point
is on an opening block "{" it will do the right thing; if I want to hide
something more detailed I can select a region and then hide it.

Of course, using the following built-in command makes things even easier,
as a single keystroke will default to selecting the right region, and
another one will expand the selection to the next logical block.

  C-M-SPC runs the command mark-sexp

Oh, and you can define the region using your mouse too.

As often, when searching for text-based interactive manipulation
tooling, the best I could find is the one I'm already using: Emacs ;-)

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Turning recovery.conf into GUCs

2013-11-20 Thread Jaime Casanova
On Tue, Nov 19, 2013 at 3:32 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-19 22:09:48 +0900, Michael Paquier wrote:
 On Tue, Nov 19, 2013 at 2:27 AM, Andres Freund and...@2ndquadrant.com 
 wrote:

   * I am not sure I like recovery.trigger as a name. It seems too close
     to what I've seen people use to trigger failover and too close to
     trigger_file.

  This name was chosen and kept in accordance with the spec of this
  feature. Looks fine to me...

 I still think start_as_standby.trigger or such would be much clearer
 and far less likely to be confused with the promotion trigger file.


the function of the file is to inform the server it's in recovery and
it needs to consider recovery parameters, not to make the server a
standby. Yes, I admit that is halfway to making the server a
standby. For example, if you are doing PITR and stopping the server
before some specific point (recovery_target_*), then
start_as_standby.trigger will have no meaning and could confuse
people.


  * You made recovery_target_inclusive/pause_at_recovery_target PGC_SIGHUP
- did you review that actually works? Imo that should be changed in a
separate commit.

 Yep, I'd rather see those parameters as PGC_POSTMASTER... As of now,
 once recovery is started those parameter values do not change once
 readRecoveryCommandFile is kicked. Having them SIGHUP would mean that
 you could change them during recovery. Sounds kind of dangerous, no?

 I think it's desirable to make them changeable during recovery, but it's
 a separate patch.


uh! I read the patch and missed that. Will change.


  * Why did you change some of the recovery gucs to lowercase names, but
left out XLogRestoreCommand?

 This was part of the former patch, perhaps you are right and keeping
 the names as close as possible to the old ones would make sense to
 facilitate maintenance across versions.

 I think lowercase is slightly more consistent with the majority of the
 other GUCs, but if you change it you should change all the new GUC variables.


This one was my change; in the original patch it is called
restore_command, and I changed it because archive_command's variable
is called XLogArchiveCommand, so I wanted the name to follow the same
format.

I can return it to the original name if no one likes XLogRestoreCommand.

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] Turning recovery.conf into GUCs

2013-11-20 Thread Andres Freund
On 2013-11-19 22:24:19 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2013-11-19 22:09:48 +0900, Michael Paquier wrote:
  On Tue, Nov 19, 2013 at 2:27 AM, Andres Freund and...@2ndquadrant.com 
  wrote:
  * Why did you change some of the recovery gucs to lowercase names, but
  left out XLogRestoreCommand?
 
  This was part of the former patch, perhaps you are right and keeping
  the names as close as possible to the old ones would make sense to
  facilitate maintenance across versions.
 
  I think lowercase is slightly more consistent with the majority of the
  other GUCs, but if you change it you should change all the new GUC 
  variables.
 
 Please *don't* create any more mixed-case GUC names.  The spelling of
 TimeZone and the one or two other historical exceptions was a very
 unfortunate thing; it's confused more people than it's helped.
 Put in some underscores if you feel a need for word boundaries.

That's a misunderstanding - I was only talking about the variables backing
the GUCs, not the GUCs' names. The patch changed quite some variable
names around, but left others, leaving an inconsistent casing of related
variables...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] additional json functionality

2013-11-20 Thread Andrew Dunstan


On 11/20/2013 07:52 AM, Robert Haas wrote:

On Tue, Nov 19, 2013 at 1:43 PM, David Johnston pol...@yahoo.com wrote:

IMO a reasonable default cast function should error if the json contents
require anything more than a straight parse to be stored into jsonb.  If the
user still needs to make the conversion we should have a standard and
configurable parser function with json input and jsonb output.  In this case
the key-keep options would be "keep first encountered", "keep last
encountered", or "fail on duplicate", the last of which would be the default.

I have not really pondered storing scalars into jsonb but before pondering
usability are there any technical concerns.  If the goal is to share the
backend with hstore then current hstore does not allow for this and so the
json aspect would either transfer back over or it would need customized
code.

I confess to being a bit perplexed by why we want hstore and json to
share a common binary format.  hstore doesn't store hierarchical data;
json does.  If we design a binary format for json, doesn't that just
obsolete hstore?  Why go to a lot of trouble to extend our home-grown
format if we've got a standard format to plug into?

The thing that's really missing in all of these discussions (AFAICS)
is the issue of creating index support for these types.  If using some
variant of the existing hstore format makes that easier, then I guess
I understand the connection - but I'm not sure why or how that would
be the case, and it would be nice to make the connection more
explicit.




Oleg and Teodor have done quite a lot of work on a version of hstore
that supports nested structures. See their PGCon talk. With some
additions it has become in effect a non-standard notation for json.
Rather than repeat that work, my suggestion has been that they abstract
the common parts into a library that can be used by jsonb (or whatever we
end up calling it) as well as nested hstore. I understand Teodor is
working on this.


In general I share your feelings, though.

cheers

andrew




Re: [HACKERS] Turning recovery.conf into GUCs

2013-11-20 Thread Andres Freund
On 2013-11-20 08:10:44 -0500, Jaime Casanova wrote:
 On Tue, Nov 19, 2013 at 3:32 PM, Andres Freund and...@2ndquadrant.com wrote:
  On 2013-11-19 22:09:48 +0900, Michael Paquier wrote:
  On Tue, Nov 19, 2013 at 2:27 AM, Andres Freund and...@2ndquadrant.com 
  wrote:
 
   * I am not sure I like recovery.trigger as a name. It seems to close
 to what I've seen people use to trigger failover and too close to
 trigger_file.
 
  This name was chosen and kept in accordance to the spec of this
  feature. Looks fine for me...
 
  I still think start_as_standby.trigger or such would be much clearer
  and far less likely to be confused with the promotion trigger file.
 
 
 the function of the file is to inform the server it's in recovery and
 it needs to consider recovery parameters, not to make the server a
 standby. yes, i admit that is half the way to make the server a
 standby. for example, if you are doing PITR and stopping the server
 before some specific point (recovery_target_*) then
 start_as_standby.trigger will has no meaning and could confuse
 people

'recovery' includes crash recovery; that's why I quite dislike your
name, since it's not crash recovery you're checking for, and during
crash recovery we certainly do not want to interpret those parameters.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Handling GIN incomplete splits

2013-11-20 Thread Michael Paquier
Here are some comments about the 4th patch.
1) Compiles without warnings, passes make check.
2) s/ginFinshSplit/ginFinishSplit
3) Server crashes when trying to create a GIN index (see the
example in the previous email with pg_trgm). Here is the backtrace of the crash:
* thread #1: tid = 0x15221, 0x7fff8c59f866
libsystem_kernel.dylib`__pthread_kill + 10, queue = 'com.apple.main-thread',
stop reason = signal SIGABRT
frame #0: 0x7fff8c59f866 libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x7fff8e60735c libsystem_pthread.dylib`pthread_kill + 92
frame #2: 0x7fff91586bba libsystem_c.dylib`abort + 125
frame #3: 0x00010e953ed9
postgres`ExceptionalCondition(conditionName=0x00010ea31055,
errorType=0x00010e9b5973, fileName=0x00010ea2fd3d, lineNumber=1175)
+ 137 at assert.c:54
frame #4: 0x00010e79073a
postgres`UnpinBuffer(buf=0x00010f37f9c0, fixOwner='\0') + 282 at
bufmgr.c:1175
frame #5: 0x00010e793465 postgres`ReleaseBuffer(buffer=3169) + 565
at bufmgr.c:2540
frame #6: 0x00010e414e43
postgres`freeGinBtreeStack(stack=0x7fa2138adcf8) + 67 at ginbtree.c:196
frame #7: 0x00010e415330
postgres`ginInsertValue(btree=0x7fff51807e80, stack=0x7fa2138a6dd8,
insertdata=0x7fff51807e70, buildStats=0x7fff51809fa0) + 1216 at
ginbtree.c:728
frame #8: 0x00010e404ebf
postgres`ginEntryInsert(ginstate=0x7fff51807fe0, attnum=1, key=7828073,
category='\0', items=0x000117d0ab28, nitem=76,
buildStats=0x7fff51809fa0) + 1055 at gininsert.c:218
frame #9: 0x00010e405ad6
postgres`ginbuild(fcinfo=0x7fff5180a050) + 1590 at gininsert.c:392
frame #10: 0x00010e9609ba
postgres`OidFunctionCall3Coll(functionId=2738, collation=0,
arg1=4693605424, arg2=4693760992, arg3=140334089145912) + 186 at fmgr.c:1649
frame #11: 0x00010e4f4b30
postgres`index_build(heapRelation=0x000117c2bc30,
indexRelation=0x000117c51be0, indexInfo=0x7fa213888e38,
isprimary='\0', isreindex='\0') + 464 at index.c:1963
frame #12: 0x00010e4f2f07
postgres`index_create(heapRelation=0x000117c2bc30,
indexRelationName=0x7fa213888b30, indexRelationId=16445, relFileNode=0,
indexInfo=0x7fa213888e38, indexColNames=0x7fa2138892d8,
accessMethodObjectId=2742, tableSpaceId=0,
collationObjectId=0x7fa213889330, classObjectId=0x7fa213889350,
coloptions=0x7fa213889370, reloptions=0, isprimary='\0',
isconstraint='\0', deferrable='\0', initdeferred='\0',
allow_system_table_mods='\0', skip_build='\0', concurrent='\0',
is_internal='\0') + 3591 at index.c:1082
frame #13: 0x00010e5da885
postgres`DefineIndex(stmt=0x7fa213888b90, indexRelationId=0,
is_alter_table='\0', check_rights='\x01', skip_build='\0', quiet='\0') +
4181 at indexcmds.c:595
frame #14: 0x00010e7dd4a3
postgres`ProcessUtilitySlow(parsetree=0x7fa21384b530,
queryString=0x7fa21384a838, context=PROCESS_UTILITY_TOPLEVEL,
params=0x, dest=0x7fa21384b918,
completionTag=0x7fff5180b020) + 2931 at utility.c:1163
frame #15: 0x00010e7dc4d7
postgres`standard_ProcessUtility(parsetree=0x7fa21384b530,
queryString=0x7fa21384a838, context=PROCESS_UTILITY_TOPLEVEL,
params=0x, dest=0x7fa21384b918,
completionTag=0x7fff5180b020) + 3511 at utility.c:873
frame #16: 0x00010e7db719
postgres`ProcessUtility(parsetree=0x7fa21384b530,
queryString=0x7fa21384a838, context=PROCESS_UTILITY_TOPLEVEL,
params=0x, dest=0x7fa21384b918,
completionTag=0x7fff5180b020) + 185 at utility.c:352
frame #17: 0x00010e7db0e5
postgres`PortalRunUtility(portal=0x7fa213889638,
utilityStmt=0x7fa21384b530, isTopLevel='\x01', dest=0x7fa21384b918,
completionTag=0x7fff5180b020) + 325 at pquery.c:1187
frame #18: 0x00010e7da002
postgres`PortalRunMulti(portal=0x7fa213889638, isTopLevel='\x01',
dest=0x7fa21384b918, altdest=0x7fa21384b918,
completionTag=0x7fff5180b020) + 514 at pquery.c:1318
frame #19: 0x00010e7d95c4
postgres`PortalRun(portal=0x7fa213889638, count=9223372036854775807,
isTopLevel='\x01', dest=0x7fa21384b918, altdest=0x7fa21384b918,
completionTag=0x7fff5180b020) + 964 at pquery.c:816
frame #20: 0x00010e7d4d77
postgres`exec_simple_query(query_string=0x7fa21384a838) + 1207 at
postgres.c:1048
frame #21: 0x00010e7d3fc1 postgres`PostgresMain(argc=1,
argv=0x7fa21301a3c0, dbname=0x7fa21301a228,
username=0x7fa21301a208) + 2753 at postgres.c:3992
frame #22: 0x00010e75868c
postgres`BackendRun(port=0x7fa212e00240) + 700 at postmaster.c:4085
frame #23: 0x00010e757c81
postgres`BackendStartup(port=0x7fa212e00240) + 433 at postmaster.c:3774
frame #24: 0x00010e7544fe postgres`ServerLoop + 606 at
postmaster.c:1585
frame #25: 0x00010e751d74 postgres`PostmasterMain(argc=3,
argv=0x7fa212c041f0) + 5380 at postmaster.c:1240
frame 

Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-19 09:33:34 -0500, Andrew Dunstan wrote:

 On 11/19/2013 09:20 AM, Andres Freund wrote:
 Imo this warrants an expedited point release :(

+1

 I presume anyone who is vulnerable to it would need to recreate
 their secondary servers to get rid of potential problems?

 Yes. There are less expensive ways to do it, but those seem too
 complicated to suggest.

Wouldn't a database VACUUM FREEZE fix it, with WAL-logged writing
of everything that doesn't yet have hint bits set?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Dynamic Shared Memory stuff

2013-11-20 Thread Heikki Linnakangas
I'm trying to catch up on all of this dynamic shared memory stuff. A
bunch of random questions and complaints:


What kind of usage are we trying to cater to with the dynamic shared
memory? How many allocations? What sizes will they typically have,
minimum and maximum? I looked at the message queue implementation you
posted, but I wonder if that's the use case you're envisioning for this,
or if you have more things in mind.



* dsm_handle is defined in dsm_impl.h, but it's exposed in the function 
signatures in dsm.h. ISTM it should be moved to dsm.h


* The DSM API contains functions for resizing the segment. That's not 
exercised by the MQ or TOC facilities. Is that going to stay dead code, 
or do you envision a user for it?


* dsm_impl_can_resize() incorrectly returns false for DSM_IMPL_MMAP. The 
mmap() implementation can resize.


* This is an issue I've seen for some time with git master, while 
working on various things. Sometimes, when I kill the server with 
CTRL-C, I get this in the log:


^CLOG:  received fast shutdown request
LOG:  aborting any active transactions
FATAL:  terminating connection due to administrator command
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
LOG:  could not remove shared memory segment /PostgreSQL.1804289383: 
Tiedostoa tai hakemistoa ei ole


(that means ENOENT)

And I just figured out why that happens: If you take a base backup of a 
running system, the pg_dynshmem/state file is included in the backup. If 
you now start up a standby from the backup on the same system, it will 
clean up and reuse the dynshmem segment still used by the master 
system. Now, when you shut down the master, you get that message in the 
log. If the segment was actually used for something, the master would 
naturally crash.


* As discussed in the "Something fishy happening on frogmouth" thread, I 
don't like the fact that the dynamic shared memory segments will be 
permanently leaked if you kill -9 postmaster and destroy the data directory.


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] information schema parameter_default implementation

2013-11-20 Thread Peter Eisentraut
Updated patch
From f82bc0c522b7c238b1dd8e5bb3495babd5b6192a Mon Sep 17 00:00:00 2001
From: Peter Eisentraut pete...@gmx.net
Date: Thu, 14 Nov 2013 21:43:15 -0500
Subject: [PATCH v2] Implement information_schema.parameters.parameter_default
 column

Reviewed-by: Ali Dar ali.munir@gmail.com
Reviewed-by: Amit Khandekar amit.khande...@enterprisedb.com
---
 doc/src/sgml/information_schema.sgml|  9 +++
 src/backend/catalog/information_schema.sql  |  9 ++-
 src/backend/utils/adt/ruleutils.c   | 84 +
 src/include/catalog/catversion.h|  2 +-
 src/include/catalog/pg_proc.h   |  2 +
 src/include/utils/builtins.h|  1 +
 src/test/regress/expected/create_function_3.out | 33 +-
 src/test/regress/sql/create_function_3.sql  | 24 +++
 8 files changed, 160 insertions(+), 4 deletions(-)

diff --git a/doc/src/sgml/information_schema.sgml b/doc/src/sgml/information_schema.sgml
index 22e17bb..22f43c8 100644
--- a/doc/src/sgml/information_schema.sgml
+++ b/doc/src/sgml/information_schema.sgml
@@ -3323,6 +3323,15 @@ titleliteralparameters/literal Columns/title
in future versions.)
   /entry
  /row
+
+ row
+  entryliteralparameter_default/literal/entry
+  entrytypecharacter_data/type/entry
+  entry
+   The default expression of the parameter, or null if none or if the
+   function is not owned by a currently enabled role.
+  /entry
+ /row
 /tbody
/tgroup
   /table
diff --git a/src/backend/catalog/information_schema.sql b/src/backend/catalog/information_schema.sql
index c5f7a8b..fd706e3 100644
--- a/src/backend/catalog/information_schema.sql
+++ b/src/backend/catalog/information_schema.sql
@@ -1133,10 +1133,15 @@ CREATE VIEW parameters AS
CAST(null AS sql_identifier) AS scope_schema,
CAST(null AS sql_identifier) AS scope_name,
CAST(null AS cardinal_number) AS maximum_cardinality,
-   CAST((ss.x).n AS sql_identifier) AS dtd_identifier
+   CAST((ss.x).n AS sql_identifier) AS dtd_identifier,
+   CAST(
+ CASE WHEN pg_has_role(proowner, 'USAGE')
+  THEN pg_get_function_arg_default(p_oid, (ss.x).n)
+  ELSE NULL END
+ AS character_data) AS parameter_default
 
 FROM pg_type t, pg_namespace nt,
- (SELECT n.nspname AS n_nspname, p.proname, p.oid AS p_oid,
+ (SELECT n.nspname AS n_nspname, p.proname, p.oid AS p_oid, p.proowner,
  p.proargnames, p.proargmodes,
  _pg_expandarray(coalesce(p.proallargtypes, p.proargtypes::oid[])) AS x
   FROM pg_namespace n, pg_proc p
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 5ffce68..dace05a 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -2266,6 +2266,90 @@ static char *generate_function_name(Oid funcid, int nargs,
 	return argsprinted;
 }
 
+static bool
+is_input_argument(int nth, const char *argmodes)
+{
+	return (!argmodes
+			|| argmodes[nth] == PROARGMODE_IN
+			|| argmodes[nth] == PROARGMODE_INOUT
+			|| argmodes[nth] == PROARGMODE_VARIADIC);
+}
+
+/*
+ * Get textual representation of a function argument's default value.  The
+ * second argument of this function is the argument number among all arguments
+ * (i.e. proallargtypes, *not* proargtypes), starting with 1, because that's
+ * how information_schema.sql uses it.
+ */
+Datum
+pg_get_function_arg_default(PG_FUNCTION_ARGS)
+{
+	Oid			funcid = PG_GETARG_OID(0);
+	int32		nth_arg = PG_GETARG_INT32(1);
+	HeapTuple	proctup;
+	Form_pg_proc proc;
+	int			numargs;
+	Oid		   *argtypes;
+	char	  **argnames;
+	char	   *argmodes;
+	int			i;
+	List	   *argdefaults;
+	Node	   *node;
+	char	   *str;
+	int			nth_inputarg;
+	Datum		proargdefaults;
+	bool		isnull;
+	int			nth_default;
+
+	proctup = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcid));
+	if (!HeapTupleIsValid(proctup))
+		elog(ERROR, "cache lookup failed for function %u", funcid);
+
+	numargs = get_func_arg_info(proctup, &argtypes, &argnames, &argmodes);
+	if (nth_arg < 1 || nth_arg > numargs || !is_input_argument(nth_arg - 1, argmodes))
+	{
+		ReleaseSysCache(proctup);
+		PG_RETURN_NULL();
+	}
+
+	nth_inputarg = 0;
+	for (i = 0; i < nth_arg; i++)
+		if (is_input_argument(i, argmodes))
+			nth_inputarg++;
+
+	proargdefaults = SysCacheGetAttr(PROCOID, proctup,
+	 Anum_pg_proc_proargdefaults,
+	 &isnull);
+	if (isnull)
+	{
+		ReleaseSysCache(proctup);
+		PG_RETURN_NULL();
+	}
+
+	str = TextDatumGetCString(proargdefaults);
+	argdefaults = (List *) stringToNode(str);
+	Assert(IsA(argdefaults, List));
+	pfree(str);
+
+	proc = (Form_pg_proc) GETSTRUCT(proctup);
+
+	/* Calculate index into proargdefaults: proargdefaults corresponds to the
+	 * last N input arguments, where N = pronargdefaults. */
+	nth_default = nth_inputarg - 1 - (proc->pronargs - 
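
A quick way to exercise the new column once the patch is in -- an
illustrative sketch only; the function add_n is invented for the example:

    CREATE FUNCTION add_n(a integer, b integer DEFAULT 42)
        RETURNS integer LANGUAGE sql AS 'SELECT a + b';

    SELECT parameter_name, parameter_default
      FROM information_schema.parameters
     WHERE specific_name LIKE 'add_n%';
    -- expected: parameter a shows NULL, b shows 42, provided the
    -- pg_has_role(proowner, 'USAGE') test in the view passes.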

Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Andres Freund
On 2013-11-20 05:30:39 -0800, Kevin Grittner wrote:
  Yes. There's less expensive ways to do it, but those seem too
  complicated to suggest.
 
 Wouldn't a database VACUUM FREEZE fix it, with WAL-logged writing
 of everything that doesn't yet have hint bits set?

Besides also being pretty expensive it still wouldn't correct the clog -
and we don't always rely on hint bits.

I was thinking about just copying over the clog from the primary, but
it's not trivial if the standby isn't caught up, since the primary's
clog could have been truncated ahead of what the standby needs.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] configure: allow adding a custom string to PG_VERSION

2013-11-20 Thread Michael Paquier
On Wed, Nov 20, 2013 at 8:08 AM, Oskari Saarenmaa o...@ohmu.fi wrote:

 On Mon, Nov 18, 2013 at 08:48:13PM -0500, Peter Eisentraut wrote:
  On Tue, 2013-11-05 at 18:29 +0200, Oskari Saarenmaa wrote:
   This can be used to tag custom built packages with an extra version string
   such as the git describe id or distribution package release version.
 
  I think this is a reasonable feature, and the implementation is OK, but
  I don't see why the format of the extra version information needs to be
  exactly
 
  PG_VERSION="$PACKAGE_VERSION ($withval)"
 
  I'd imagine, for example, that someone will want to do -something or
  +something.  So I'd just make this
 
  PG_VERSION="$PACKAGE_VERSION$withval"
 
  Comments?

 Sounds reasonable.

   +# Allow adding a custom string to PG_VERSION
   +PGAC_ARG_REQ(with, extra-version, [NAME], [extra version information],
   +[PG_VERSION="$PACKAGE_VERSION ($withval)"], 
   [PG_VERSION="$PACKAGE_VERSION"])
 
  This could be indented better.  It was a bit confusing at first.

 Agreed.  Attached an updated patch, or you can grab it from
 https://github.com/saaros/postgres/compare/extra-version

Here are a couple of comments about the patch:
1) I think that you should regenerate ./configure as well with this
patch to include all the changes together (someone correct me if I am
wrong here!)
2) This new option should be added in the section ## Command line
options in configure.in
3) PG_VERSION is not a well-suited variable name IMO, as it might contain
custom information. Something like PG_VERSION_TOTAL perhaps?
regards,
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


ECPG fixes, was: Re: [HACKERS] ECPG FETCH readahead

2013-11-20 Thread Boszormenyi Zoltan

On 2013-11-12 07:15, Boszormenyi Zoltan wrote:

On 2013-11-12 07:01, Noah Misch wrote:

On Mon, Nov 11, 2013 at 10:17:54AM +0100, Boszormenyi Zoltan wrote:

The old contents of my GIT repository was removed so you need to
clone it fresh. https://github.com/zboszor/ecpg-readahead.git
I won't post the humongous patch again, since sending a 90KB
compressed file to everyone on the list is rude.

Patches of that weight show up on a regular basis.  I don't think it's rude.


OK, here it is.


I have rebased the patchset after ecpg: Split off mmfatal() from mmerror()
since it caused merge conflicts.

It's at the usual place again, you need to clone it from scratch if you are
interested in looking at git diff/log

I have removed some previous ecpg_log() debug output and
the total patch size is not so huge any more but I am going to post
the split-up set in parts.

Attached is the first few patches that are strictly generic ECPG fixes.
They can be applied independently and obvious enough.

Subsequent patches will come as reply to this email.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/

commit f167aaa9693305e08cd6b2946af8528dada799b4
Author: Böszörményi Zoltán z...@cybertec.at
Date:   Wed Nov 20 10:31:21 2013 +0100

ECPG: Make the preprocessor emit ';' if the variable type for
a list of variables is varchar. This fixes this test case:

int main(void)
{
exec sql begin declare section;
varchar a[50], b[50];
exec sql end declare section;

return 0;
}

Since varchars are internally turned into custom structs and
the type name is emitted for these variable declarations,
the preprocessed code previously had:

struct varchar_1  { ... }  a , struct varchar_2  { ... }  b ;

The comma in the generated C file was a syntax error.

There are no regression test changes since it's not exercised.

diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer
index 342b7bc..fd35dfc 100644
--- a/src/interfaces/ecpg/preproc/ecpg.trailer
+++ b/src/interfaces/ecpg/preproc/ecpg.trailer
@@ -837,7 +837,12 @@ opt_signed: SQL_SIGNED
 variable_list: variable
 			{ $$ = $1; }
 		| variable_list ',' variable
-			{ $$ = cat_str(3, $1, mm_strdup(","), $3); }
+		{
+			if (actual_type[struct_level].type_enum == ECPGt_varchar)
+				$$ = cat_str(3, $1, mm_strdup(";"), $3);
+			else
+				$$ = cat_str(3, $1, mm_strdup(","), $3);
+		}
 		;
 
 variable: opt_pointer ECPGColLabel opt_array_bounds opt_bit_field opt_initializer
commit b6d57b3fa709757769eb27083d7231602f2d806c
Author: Böszörményi Zoltán z...@cybertec.at
Date:   Wed Nov 20 10:33:40 2013 +0100

ECPG: Free the malloc()'ed variables in the test so it comes out
clean on Valgrind runs.

diff --git a/src/interfaces/ecpg/test/expected/preproc-outofscope.c b/src/interfaces/ecpg/test/expected/preproc-outofscope.c
index 125d7d8..2438911 100644
--- a/src/interfaces/ecpg/test/expected/preproc-outofscope.c
+++ b/src/interfaces/ecpg/test/expected/preproc-outofscope.c
@@ -347,28 +347,31 @@ if (sqlca.sqlcode < 0) exit (1);}
 
 	close_cur1();
 
+	free(myvar);
+	free(mynullvar);
+
 	strcpy(msg, "drop");
 	{ ECPGdo(__LINE__, 0, 1, NULL, 0, ECPGst_normal, "drop table a1 ", ECPGt_EOIT, ECPGt_EORT);
-#line 115 "outofscope.pgc"
+#line 118 "outofscope.pgc"
 
 if (sqlca.sqlcode < 0) exit (1);}
-#line 115 "outofscope.pgc"
+#line 118 "outofscope.pgc"
 
 
 	strcpy(msg, "commit");
 	{ ECPGtrans(__LINE__, NULL, "commit");
-#line 118 "outofscope.pgc"
+#line 121 "outofscope.pgc"
 
 if (sqlca.sqlcode < 0) exit (1);}
-#line 118 "outofscope.pgc"
+#line 121 "outofscope.pgc"
 
 
 	strcpy(msg, "disconnect");
 	{ ECPGdisconnect(__LINE__, "CURRENT");
-#line 121 "outofscope.pgc"
+#line 124 "outofscope.pgc"
 
 if (sqlca.sqlcode < 0) exit (1);}
-#line 121 "outofscope.pgc"
+#line 124 "outofscope.pgc"
 
 
 	return (0);
diff --git a/src/interfaces/ecpg/test/expected/preproc-outofscope.stderr b/src/interfaces/ecpg/test/expected/preproc-outofscope.stderr
index 91d3505..c7f8771 100644
--- a/src/interfaces/ecpg/test/expected/preproc-outofscope.stderr
+++ b/src/interfaces/ecpg/test/expected/preproc-outofscope.stderr
@@ -102,13 +102,13 @@
 [NO_PID]: sqlca: code: 0, state: 0
 [NO_PID]: ecpg_execute on line 58: OK: CLOSE CURSOR
 [NO_PID]: sqlca: code: 0, state: 0
-[NO_PID]: ecpg_execute on line 115: query: drop table a1; with 0 parameter(s) on connection regress1
+[NO_PID]: ecpg_execute on line 118: query: drop table a1; with 0 parameter(s) on connection regress1
 [NO_PID]: sqlca: code: 0, state: 0
-[NO_PID]: ecpg_execute on line 115: using PQexec
+[NO_PID]: ecpg_execute on line 118: using PQexec
 [NO_PID]: sqlca: code: 0, state: 0
-[NO_PID]: ecpg_execute on line 115: OK: DROP TABLE
+[NO_PID]: ecpg_execute on line 

[HACKERS] Modify the DECLARE CURSOR command tag depending on the scrollable flag

2013-11-20 Thread Boszormenyi Zoltan

On 2013-11-20 14:41, Boszormenyi Zoltan wrote:

On 2013-11-12 07:15, Boszormenyi Zoltan wrote:

On 2013-11-12 07:01, Noah Misch wrote:

On Mon, Nov 11, 2013 at 10:17:54AM +0100, Boszormenyi Zoltan wrote:

The old contents of my GIT repository was removed so you need to
clone it fresh. https://github.com/zboszor/ecpg-readahead.git
I won't post the humongous patch again, since sending a 90KB
compressed file to everyone on the list is rude.

Patches of that weight show up on a regular basis.  I don't think it's rude.


OK, here it is.


...
Subsequent patches will come as reply to this email.


Attached is the patch that modifies the command tag returned by
the DECLARE CURSOR command. It returns "DECLARE SCROLL CURSOR"
or "DECLARE NO SCROLL CURSOR" depending on the cursor's
scrollable flag, which can be determined internally even if neither is
asked for explicitly.

It is expected by the ECPG cursor readahead code.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/

commit a691e262f562debd4b8991728fddd1e0895cf907
Author: Böszörményi Zoltán z...@cybertec.at
Date:   Wed Nov 20 10:50:31 2013 +0100

The backend knows whether the cursor is scrollable if neither SCROLL
nor NO SCROLL was specified. Inform the clients about this property
in the command tag: DECLARE SCROLL CURSOR or DECLARE NO SCROLL CURSOR.
The upcoming ECPG cursor handling code uses it. One contrib and some
ECPG regression tests have changed.
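
For illustration, the visible change from psql looks like this (a sketch,
not actual regression output; foo stands in for any plain table):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM foo;
    -- command tag with this patch: DECLARE SCROLL CURSOR
    -- (a simple seqscan supports backward scan); with an explicit
    -- NO SCROLL the tag becomes DECLARE NO SCROLL CURSOR.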

diff --git a/contrib/dblink/expected/dblink.out b/contrib/dblink/expected/dblink.out
index f237c43..d2b5d69 100644
--- a/contrib/dblink/expected/dblink.out
+++ b/contrib/dblink/expected/dblink.out
@@ -442,9 +442,9 @@ SELECT dblink_close('myconn','rmt_foo_cursor');
 
 -- this should succeed because we have an open transaction
 SELECT dblink_exec('myconn','DECLARE xact_test CURSOR FOR SELECT * FROM foo');
-  dblink_exec   
-----------------
- DECLARE CURSOR
+      dblink_exec      
+-----------------------
+ DECLARE SCROLL CURSOR
 (1 row)
 
 -- commit remote transaction
@@ -477,9 +477,9 @@ SELECT dblink_close('myconn','rmt_foo_cursor2');
 
 -- this should succeed because we have an open transaction
 SELECT dblink_exec('myconn','DECLARE xact_test CURSOR FOR SELECT * FROM foo');
-  dblink_exec   
-----------------
- DECLARE CURSOR
+      dblink_exec      
+-----------------------
+ DECLARE SCROLL CURSOR
 (1 row)
 
 -- this should commit the transaction
diff --git a/src/backend/commands/portalcmds.c b/src/backend/commands/portalcmds.c
index 5c3f42c..ef36d78 100644
--- a/src/backend/commands/portalcmds.c
+++ b/src/backend/commands/portalcmds.c
@@ -42,7 +42,8 @@
  */
 void
 PerformCursorOpen(PlannedStmt *stmt, ParamListInfo params,
-  const char *queryString, bool isTopLevel)
+  const char *queryString, bool isTopLevel,
+  bool *scrollable)
 {
 	DeclareCursorStmt *cstmt = (DeclareCursorStmt *) stmt->utilityStmt;
 	Portal		portal;
@@ -118,6 +119,8 @@ PerformCursorOpen(PlannedStmt *stmt, ParamListInfo params,
 			portal->cursorOptions |= CURSOR_OPT_NO_SCROLL;
 	}
 
+	*scrollable = !!(portal->cursorOptions & CURSOR_OPT_SCROLL);
+
 	/*
 	 * Start execution, inserting parameters if any.
 	 */
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 6a7bf0d..89665cc 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -505,11 +505,15 @@ standard_ProcessUtility(Node *parsetree,
 		case T_PlannedStmt:
 			{
 PlannedStmt *stmt = (PlannedStmt *) parsetree;
+bool		scrollable;
 
 if (stmt->utilityStmt == NULL ||
 	!IsA(stmt->utilityStmt, DeclareCursorStmt))
 	elog(ERROR, "non-DECLARE CURSOR PlannedStmt passed to ProcessUtility");
-PerformCursorOpen(stmt, params, queryString, isTopLevel);
+PerformCursorOpen(stmt, params, queryString, isTopLevel, &scrollable);
+if (completionTag)
+	sprintf(completionTag, "DECLARE %s CURSOR",
+			   (scrollable ? "SCROLL" : "NO SCROLL"));
 			}
 			break;
 
diff --git a/src/include/commands/portalcmds.h b/src/include/commands/portalcmds.h
index b123bbd..8e756a8 100644
--- a/src/include/commands/portalcmds.h
+++ b/src/include/commands/portalcmds.h
@@ -19,7 +19,8 @@
 
 
 extern void PerformCursorOpen(PlannedStmt *stmt, ParamListInfo params,
-  const char *queryString, bool isTopLevel);
+  const char *queryString, bool isTopLevel,
+  bool *scrollable);
 
 extern void PerformPortalFetch(FetchStmt *stmt, DestReceiver *dest,
    char *completionTag);
diff --git a/src/interfaces/ecpg/test/expected/compat_informix-sqlda.stderr b/src/interfaces/ecpg/test/expected/compat_informix-sqlda.stderr
index f463d03..fc36a14 100644
--- a/src/interfaces/ecpg/test/expected/compat_informix-sqlda.stderr
+++ b/src/interfaces/ecpg/test/expected/compat_informix-sqlda.stderr
@@ -28,7 +28,7 @@
 [NO_PID]: 

[HACKERS] ECPG infrastructure changes, part 2, was: Re: ECPG fixes

2013-11-20 Thread Boszormenyi Zoltan

On 2013-11-20 14:41, Boszormenyi Zoltan wrote:

On 2013-11-12 07:15, Boszormenyi Zoltan wrote:

On 2013-11-12 07:01, Noah Misch wrote:

On Mon, Nov 11, 2013 at 10:17:54AM +0100, Boszormenyi Zoltan wrote:

The old contents of my GIT repository was removed so you need to
clone it fresh. https://github.com/zboszor/ecpg-readahead.git
I won't post the humongous patch again, since sending a 90KB
compressed file to everyone on the list is rude.

Patches of that weight show up on a regular basis.  I don't think it's rude.


OK, here it is.


...
Subsequent patches will come as reply to this email.


ecpg_log() fixes after part 1 that produce a lot of regression test changes.
This patch is over 200K in itself so I send it separately and compressed.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



17.patch.gz
Description: Unix tar archive

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ECPG infrastructure changes, part 3, was: Re: ECPG fixes

2013-11-20 Thread Boszormenyi Zoltan

On 2013-11-20 14:41, Boszormenyi Zoltan wrote:

On 2013-11-12 07:15, Boszormenyi Zoltan wrote:

On 2013-11-12 07:01, Noah Misch wrote:

On Mon, Nov 11, 2013 at 10:17:54AM +0100, Boszormenyi Zoltan wrote:

The old contents of my GIT repository was removed so you need to
clone it fresh. https://github.com/zboszor/ecpg-readahead.git
I won't post the humongous patch again, since sending a 90KB
compressed file to everyone on the list is rude.

Patches of that weight show up on a regular basis.  I don't think it's rude.


OK, here it is.


...
Subsequent patches will come as reply to this email.


Further ecpglib/execute.c changes.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/

commit c901afd59894f49001b743982d38a51b23bac8e1
Author: Böszörményi Zoltán z...@cybertec.at
Date:   Wed Nov 20 11:05:34 2013 +0100

ECPG: Explicitly decouple the tuple index from the array index
in ecpg_get_data(). Document the function arguments.

diff --git a/src/interfaces/ecpg/ecpglib/data.c b/src/interfaces/ecpg/ecpglib/data.c
index 5f9a3d4..7f3c7cb 100644
--- a/src/interfaces/ecpg/ecpglib/data.c
+++ b/src/interfaces/ecpg/ecpglib/data.c
@@ -119,8 +119,35 @@ check_special_value(char *ptr, double *retval, char **endptr)
 	return false;
 }
 
+/*
+ * ecpg_get_data
+ *   Store one field's data from PQgetvalue(results, act_tuple, act_field)
+ *   into a target variable. If the field is NULL, store the indication or
+ *   emit an error about the fact that there is no NULL indicator given.
+ * Parameters:
+ *   results: result set
+ *   act_tuple:   row index in the result set
+ *   act_field:   column index in the result set
+ *   var_index:   array index in the target variable
+ *   lineno:  line number in the ECPG source file for debugging
+ *   type:type of target variable
+ *   ind_type:type of NULL indicator variable
+ *   var: target variable
+ *   ind: NULL indicator variable
+ *   varcharsize: size of the variable if it's varchar
+ *   offset:  size of the target variable
+ *(used for indexing in an array)
+ *   ind_offset:  size of the NULL indicator variable
+ *(used for indexing in an array)
+ *   isarray: array type
+ *   compat:  native PostgreSQL or Informix compatibility mode
+ *   force_indicator:
+ *if Informix compatibility mode is set and no NULL indicator
+ *is given, provide a way to indicate NULL value in the
+ *target variable itself
+ */
 bool
-ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno,
+ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int var_index, int lineno,
 			  enum ECPGttype type, enum ECPGttype ind_type,
 			  char *var, char *ind, long varcharsize, long offset,
 			  long ind_offset, enum ARRAY_TYPE isarray, enum COMPAT_MODE compat, bool force_indicator)
@@ -167,20 +194,20 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno,
 	{
 		case ECPGt_short:
 		case ECPGt_unsigned_short:
-			*((short *) (ind + ind_offset * act_tuple)) = value_for_indicator;
+			*((short *) (ind + ind_offset * var_index)) = value_for_indicator;
 			break;
 		case ECPGt_int:
 		case ECPGt_unsigned_int:
-			*((int *) (ind + ind_offset * act_tuple)) = value_for_indicator;
+			*((int *) (ind + ind_offset * var_index)) = value_for_indicator;
 			break;
 		case ECPGt_long:
 		case ECPGt_unsigned_long:
-			*((long *) (ind + ind_offset * act_tuple)) = value_for_indicator;
+			*((long *) (ind + ind_offset * var_index)) = value_for_indicator;
 			break;
 #ifdef HAVE_LONG_LONG_INT
 		case ECPGt_long_long:
 		case ECPGt_unsigned_long_long:
-			*((long long int *) (ind + ind_offset * act_tuple)) = value_for_indicator;
+			*((long long int *) (ind + ind_offset * var_index)) = value_for_indicator;
 			break;
 #endif   /* HAVE_LONG_LONG_INT */
 		case ECPGt_NO_INDICATOR:
@@ -192,7 +219,7 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno,
 	 * Informix has an additional way to specify NULLs note
 	 * that this uses special values to denote NULL
 	 */
-	ECPGset_noind_null(type, var + offset * act_tuple);
+	ECPGset_noind_null(type, var + offset * var_index);
 }
 else
 {
@@ -243,10 +270,10 @@ ecpg_get_data(const PGresult *results, int act_tuple, int act_field, int lineno,
 		if (binary)
 		{
 		if (varcharsize == 0 || varcharsize * offset >= size)
-memcpy(var + offset * act_tuple, pval, size);
+memcpy(var + offset * var_index, pval, size);
 			else
 			{
-memcpy(var + offset * act_tuple, pval, varcharsize * offset);
+memcpy(var + offset * var_index, pval, varcharsize * offset);
 
 if (varcharsize * offset < size)
 {
@@ -255,20 

Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-20 05:30:39 -0800, Kevin Grittner wrote:

 Wouldn't a database VACUUM FREEZE fix it, with WAL-logged
 writing of everything that doesn't yet have hint bits set?

 Besides also being pretty expensive it still wouldn't correct the
 clog - and we don't always rely on hint bits.

I'm talking about after a fix is deployed, fixing up the possible
corruption.  Can you explain where VACUUM FREEZE would not suffice?
I don't know of anywhere that we have hint bits set for a tuple and
we go fetch the clog bits in spite of that.  I don't understand
where that would make sense; especially since I thought that a
database FREEZE followed by a checkpoint releases old clog space
anyway.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Andres Freund
On 2013-11-20 05:59:58 -0800, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
  On 2013-11-20 05:30:39 -0800, Kevin Grittner wrote:
 
  Wouldn't a database VACUUM FREEZE fix it, with WAL-logged
  writing of everything that doesn't yet have hint bits set?
 
  Besides also being pretty expensive it still wouldn't correct the
  clog - and we don't always rely on hint bits.
 
 I'm talking about after a fix is deployed, fixing up the possible
 corruption.  Can you explain where VACUUM FREEZE would not suffice?
 I don't know of anywhere that we have hint bits set for a tuple and
 we go fetch the clog bits in spite of that.

There's several places. Grep for TransactionIdDidCommit() and ignore the
bits in tqual.c. Many of the remaining ones do not look at hint bits.

 I don't understand
 where that would make sense; especially since I thought that a
 database FREEZE followed by a checkpoint releases old clog space
 anyway.

It only releases them up to the (cluster wide) xmin horizon. So if there
are older snapshots or prepared xacts around...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ECPG infrastructure changes, part 4, was: Re: ECPG fixes

2013-11-20 Thread Boszormenyi Zoltan

On 2013-11-20 14:41, Boszormenyi Zoltan wrote:

On 2013-11-12 07:15, Boszormenyi Zoltan wrote:

On 2013-11-12 07:01, Noah Misch wrote:

On Mon, Nov 11, 2013 at 10:17:54AM +0100, Boszormenyi Zoltan wrote:

The old contents of my GIT repository was removed so you need to
clone it fresh. https://github.com/zboszor/ecpg-readahead.git
I won't post the humongous patch again, since sending a 90KB
compressed file to everyone on the list is rude.

Patches of that weight show up on a regular basis.  I don't think it's rude.


OK, here it is.


...
Subsequent patches will come as reply to this email.


This is another, semi independent subfeature of ECPG readahead.
It's about 150K by itself, so I send it compressed.

The purpose of this patch is to track (sub-)transactions and cursors
in ecpglib to reduce network turnaround and speed up the application
in certain cases. E.g. cursors are discarded upon ROLLBACK TO
SAVEPOINT and ecpglib needs to know about it. When an unknown
savepoint or cursor name is sent, ecpglib would not send the command
to the server in an open transaction after this patch. Instead, it flips
a client-side error flag and returns the same error the backend
would in this case.

Best regards,
Zoltán Böszörményi

--
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/



23.patch.gz
Description: Unix tar archive

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] nested hstore patch

2013-11-20 Thread Peter Eisentraut
On 11/12/13, 1:35 PM, Teodor Sigaev wrote:
 Hi!
 
 Attached patch adds nesting feature, types (string, bool and numeric
 values), arrays and scalars to hstore type.

Documentation doesn't build:

openjade:hstore.sgml:206:16:E: document type does not allow element 
VARLISTENTRY here; assuming missing VARIABLELIST start-tag

Compiler warnings:

hstore_io.c: In function 'array_to_hstore':
hstore_io.c:1736:29: error: 'result' may be used uninitialized in this function 
[-Werror=maybe-uninitialized]



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-20 05:59:58 -0800, Kevin Grittner wrote:

 I don't understand where that would make sense; especially since
 I thought that a database FREEZE followed by a checkpoint
 releases old clog space anyway.

 It only releases them up to the (cluster wide) xmin horizon. So
 if there are older snapshots or prepared xacts around...

So as long as there are no open transactions or prepared
transactions on the master which started before the release with
the fix is applied, VACUUM FREEZE would be guaranteed to work? 
Since I don't see how a non-prepared transaction would be running
from before a minor release upgrade, that just means we have to
make sure there are no prepared transactions from before the
upgrade?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Tom Lane
Craig Ringer cr...@2ndquadrant.com writes:
 I'm spending a lot of time staring at parse and plan trees at the
 moment, and I'm finding reading them rather cumbersome.

Is there a particular reason you're doing that rather than looking at
EXPLAIN output?  Only the latter is meant to be at all user-friendly.
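
For anyone reproducing this, the trees in question can be dumped for a
toy query with the debug GUCs -- an illustrative session:

    SET client_min_messages = log;   -- show LOG output in the client
    SET debug_print_parse = on;
    SET debug_print_plan = on;
    SELECT 1 + 1;
    -- LOG:  parse tree: ... {QUERY :commandType 1 ...}
    -- LOG:  plan: ... {PLANNEDSTMT ...}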

 For those of you who do this a lot, do you use any sort of tooling to
 help you out? Just being able to collapse and expand subtrees would be a
 lifesaver.

vim was mentioned --- I prefer emacs, which can also find the matching
brace pretty easily.

 If it's a hassle for others too, how would you feel about using json as
 an output format in future releases? It'd be pretty simple to retrofit
 by the looks, though updating the regression tests would be a PITA. My
 main concern would be effects on back-patching.

The internal trees appear nowhere in the regression test outputs AFAIK.

 The same representation is used for storing rules. So it can't be
 changed for BC reasons and compactness/performance.

We could in principle change to a different text representation for
stored rules.  Compactness would be an issue if it were materially
bigger than the existing formatting, but offhand it seems like JSON
is morally equivalent to what we do now, no?
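
The representation at issue is visible directly in the catalogs today; for
example, illustratively:

    SELECT substr(ev_action::text, 1, 40) FROM pg_rewrite LIMIT 1;
    -- returns something like: ({QUERY :commandType 1 :querySource 0 ...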

If you think this is worthwhile, you might care to take a look at
outfuncs.c/readfuncs.c and figure out what it'd take to switch to
json-compatible formatting.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Andres Freund
On 2013-11-20 06:21:13 -0800, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
  On 2013-11-20 05:59:58 -0800, Kevin Grittner wrote:
 
  I don't understand where that would make sense; especially since
  I thought that a database FREEZE followed by a checkpoint
  releases old clog space anyway.
 
  It only releases them up to the (cluster wide) xmin horizon. So
  if there are older snapshots or prepared xacts around...
 
 So as long as there are no open transactions or prepared
 transactions on the master which started before the release with
 the fix is applied, VACUUM FREEZE would be guaranteed to work? 
 Since I don't see how a non-prepared transaction would be running
 from before a minor release upgrade, that just means we have to
 make sure there are no prepared transactions from before the
 upgrade?

That's not a bad point. So the way to fix it would be:

1) Restart the standby to the new minor release, wait for catchup
2) Restart the primary (fast or smart) to the new minor release
3) Acquire enough new xids to make sure we cross a clog page (?; one way is
sketched after this list)
4) Jot down a new xid:  SELECT txid_current()::bigint % (1::bigint<<33-1)
5) vacuumdb -z -a
6) Ensure that there are no prepared xacts older than 3) around
SELECT *
FROM pg_prepared_xacts
ORDER BY age(transaction) DESC LIMIT 1;
7) Ensure the xmin horizon is above the one from 3):
SELECT datname, datfrozenxid
FROM pg_database
WHERE datname != 'template0'
ORDER BY age(datfrozenxid) DESC LIMIT 1;
8) Get the current lsn: SELECT pg_current_xlog_location();
9) verify on each standby that SELECT pg_last_xlog_receive_location() is
   past 8)
10) be happy

I am not sure how we can easily compute that 6) and 7) are past 3) in
the presence of xid wraparounds.
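
One way to do 3), as a sketch: each exception block below opens a
subtransaction, and the INSERT forces an xid assignment for it; clog holds
32K transaction statuses per 8 kB page, so ~40k xids guarantee crossing a
page boundary.

    CREATE TEMP TABLE burn(i int);
    DO $$
    BEGIN
        FOR n IN 1..40000 LOOP
            BEGIN                             -- new subtransaction
                INSERT INTO burn VALUES (n);  -- assigns it an xid
            EXCEPTION WHEN OTHERS THEN NULL;
            END;
        END LOOP;
    END $$;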

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Autoconf 2.69 update

2013-11-20 Thread Tom Lane
Oskari Saarenmaa o...@ohmu.fi writes:
 ISTM autoconf has been better with backwards compatibility lately. 
 Maybe the fatal error could be changed to a warning and/or the check for 
version == 2.63 be replaced with a check for version >= 2.63?

Seems a bit risky to me.  Now, Red Hat diked that test out for years
in their packages, and didn't get burnt.  But they are not known for
shipping bleeding-edge versions of autoconf.  And more to the point,
they (I) would've taken full responsibility for dealing with any
ensuing breakage.  If we remove the version test from configure.in,
then it becomes *our* problem if it doesn't work with $random autoconf.

 Without a 
 strict requirement for a certain autoconf version it would make sense to 
 also drop the built autoconf artifacts from the git repository which 
 would make diffs shorter and easier to review when touching configure.in.

If we dropped the configure script from git, then buildfarm testing would
provide essentially no confidence that the exact script we'd ship in a
release would have gotten any testing, other than on machines configured
identically to the one we build release tarballs on.  Which sort of
defeats the purpose of buildfarm testing.

As a rule, you're not supposed to bother including the configure output
script in a submitted diff anyway.  Certainly any committer worth his
commit bit is going to ignore it and redo autoconf for himself.

Personally, I keep all the active branches' autoconf versions installed
in /usr/local/autoconf-N.NN/, and the script I use to switch my attention
to one or another active branch includes changing my PATH to put
the appropriate /usr/local/autoconf-N.NN/bin/ in front of /usr/bin.
So I don't have to think about this, other than occasionally needing
to install a new autoconf version from source.  But that's a good thing
anyway --- I think it's a good idea to avoid using distro-supplied
autoconfs for this, because that might make it hard for another committer
to reproduce the identical results.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Autoconf 2.69 update

2013-11-20 Thread Andres Freund

On 2013-11-20 09:53:53 -0500, Tom Lane wrote:
 As a rule, you're not supposed to bother including the configure output
 script in a submitted diff anyway.  Certainly any committer worth his
 commit bit is going to ignore it and redo autoconf for himself.

The committer maybe, but it's a PITA for reviewers on machines without
the matching autoconf version around. Which at least currently
frequently isn't packaged anymore...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-11-20 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Tue, Nov 19, 2013 at 10:21:47PM -0500, Tom Lane wrote:
 My personal standpoint is that I don't care much whether these messages
 are NOTICE or WARNING.  What I'm not happy about is promoting cases that
 have been non-error conditions for years into ERRORs.

 I don't remember any cases where that was suggested.

Apparently you've forgotten the commit that was the subject of this
thread.  You took a bunch of SET cases that we've always accepted
without any complaint whatsoever, and made them into ERRORs, thereby
breaking any applications that might've expected such usage to be
harmless.  I would be okay if that patch had issued WARNINGs, which
as you can see from the thread title was the original suggestion.

 The attached patch changes ABORT from NOTICE to WARNING, and documents
 that all other are errors.  This top-level logic idea came from Robert
 Haas, and it has some level of consistency.

This patch utterly fails to address my complaint.

More to the point, I think it's a waste of time to make these sorts of
adjustments.  The only thanks you're likely to get for it is complaints
about cross-version behavioral changes.  Also, you're totally ignoring
the thought that these different message levels might've been selected
intentionally, back when the code was written.  Since there have been
no field complaints about the inconsistency, what's your hurry to
change it?  See Emerson.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-20 06:21:13 -0800, Kevin Grittner wrote:


  So as long as there are no open transactions or prepared
  transactions on the master which started before the release with
  the fix is applied, VACUUM FREEZE would be guaranteed to work? 
  Since I don't see how a non-prepared transaction would be running
  from before a minor release upgrade, that just means we have to
  make sure there are no prepared transactions from before the
  upgrade?
 
 That's not a bad point. So the way to fix it would be:
 
 1) Restart the standby to the new minor release, wait for catchup
 2) Restart the primary (fast or smart) to the new minor release
 3) Acquire enough new xids to make sure we cross a clog page (?)
 4) Jot down a new xid:  SELECT txid_current()::bigint % (1::bigint<<33-1)
 5) vacuumdb -z -a
 6) Ensure that there are no prepared xacts older than 3) around
 SELECT *
 FROM pg_prepared_xacts
 ORDER BY age(transaction) DESC LIMIT 1;
 7) Ensure the xmin horizon is above the one from 3):
 SELECT datname, datfrozenxid
 FROM pg_database
 WHERE datname != 'template0'
 ORDER BY age(datfrozenxid) DESC LIMIT 1;
 8) Get the current lsn: SELECT pg_current_xlog_location();
 9) verify on each standby that SELECT pg_last_xlog_receive_location() is
    past 8)
 10) be happy
 
 I am not sure how we can easily compute that 6) and 7) are past 3) in
 the presence of xid wraparounds.


I may well be missing something here, but wouldn't it be sufficient to?:
1) Restart the standby to the new minor release, wait for catchup
2) Restart the primary (fast or smart) to the new minor release
3) Run VACUUM FREEZE (optionally with ANALYZE) in each database on primary
(spelled out below)
4) Run CHECKPOINT command on primary, or just wait for one to run
5) Wait for standby to process to the checkpoint
6) Be happy
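
Spelled out in SQL, roughly (illustrative; the VACUUM must run in every
database, e.g. via vacuumdb -a):

    VACUUM FREEZE;                          -- step 3, in each database
    CHECKPOINT;                             -- step 4
    SELECT pg_current_xlog_location();      -- note this on the primary
    -- step 5, on each standby: wait until this passes the value above
    SELECT pg_last_xlog_replay_location();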

--
Kevin Grittner 
EDB: http://www.enterprisedb.com 
The Enterprise PostgreSQL Company



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Peter Eisentraut
On 11/14/13, 1:41 AM, Amit Kapila wrote:
Security Concern
----------------
If a user can specify libpq connection options, he can now execute
 any file he wants by passing it as standalone_backend.
 
Method to resolve Security concern
----------------------------------
If an application wants to allow these connection parameters to be
 used, it would need to do PQenableStartServer() first. If it doesn't,
 those connection parameters will be rejected.

I don't think this really helps.  You can't tell with reasonable effort
or certainty whether a given program is calling PQenableStartServer(),
so you cannot audit this from the outside.  Also, someone could,
depending on circumstances, dynamically load a module that calls
PQenableStartServer(), thus circumventing this check.  And maybe before
long someone will patch up all drivers to call PQenableStartServer()
automatically, because why shouldn't I be able to run a standalone
backend from PHP or Ruby?  Also, at some point at least, something like
phpPgAdmin called pg_dump internally, so you could imagine that in
situations like that, assuming that pg_dump called
PQenableStartServer(), with a little bit craftiness, you could expose
the execute-any-file hole through a web server.

I don't have a better idea right now how to set up these connection
parameters in a way that you can only set them in certain safe
circumstances.

I would consider sidestepping this entire issue by having the
stand-alone backend create a Unix-domain socket and have a client
connect to that in the normal way.  At least if you split the patch that
way, you might alleviate some concerns of others about whether this
patch is about fixing standalone mode vs. allowing using standalone mode
with psql vs. making a fully embedded database.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Andres Freund
On 2013-11-20 07:06:04 -0800, Kevin Grittner wrote:
  That's not a bad point. So the way to fix it would be:
  
  1) Restart the standby to the new minor release, wait for catchup
  2) Restart the primary (fast or smart) to the new minor release
  3) Acquire enough new xids to make sure we cross a clog page (?)
  4) Jot down a new xid:  SELECT txid_current()::bigint % (1::bigint<<33-1)
  5) vacuumdb -z -a
  6) Ensure that there are no prepared xacts older than 3) around
  SELECT *
  FROM pg_prepared_xacts
  ORDER BY age(transaction) DESC LIMIT 1;
  7) Ensure the xmin horizon is above the one from 3):
  SELECT datname, datfrozenxid
  FROM pg_database
  WHERE datname != 'template0'
  ORDER BY age(datfrozenxid) DESC LIMIT 1;
  8) Get the current lsn: SELECT pg_current_xlog_location();
  9) verify on each standby that SELECT pg_last_xlog_receive_location() is
     past 8)
  10) be happy
  
  I am not sure how we can easily compute that 6) and 7) are past 3) in
  the presence of xid wraparounds.
 
 
 I may well be missing something here, but wouldn't it be sufficient to?:
 1) Restart the standby to the new minor release, wait for catchup
 2) Restart the primary (fast or smart) to the new minor release
 3) Run VACUUM FREEZE (optionally with ANALYZE) in each database on primary
 4) Run CHECKPOINT command on primary, or just wait for one to run
 5) Wait for standby to process to the checkpoint
 6) Be happy

Well, in some cases it might. But what if there's a prepared xact
around? Or a transaction started directly after 2) preventing
FreezeLimit from going up? Or vacuum_defer_cleanup_age is set? Or there's
another bug like 4c697d8f4845823a8af67788b219ffa4516ad14c?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Suggestion: Issue warning when calling SET TRANSACTION outside transaction block

2013-11-20 Thread Bruce Momjian
On Wed, Nov 20, 2013 at 10:04:22AM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  On Tue, Nov 19, 2013 at 10:21:47PM -0500, Tom Lane wrote:
  My personal standpoint is that I don't care much whether these messages
  are NOTICE or WARNING.  What I'm not happy about is promoting cases that
  have been non-error conditions for years into ERRORs.
 
  I don't remember any cases where that was suggested.
 
 Apparently you've forgotten the commit that was the subject of this
 thread.  You took a bunch of SET cases that we've always accepted
 without any complaint whatsoever, and made them into ERRORs, thereby
 breaking any applications that might've expected such usage to be
 harmless.  I would be okay if that patch had issued WARNINGs, which
 as you can see from the thread title was the original suggestion.

Oh, those changes.  I thought we were just looking at _additional_
changes.

  The attached patch changes ABORT from NOTICE to WARNING, and documents
  that all others are errors.  This top-level logic idea came from Robert
  Haas, and it has some level of consistency.
 
 This patch utterly fails to address my complaint.
 
 More to the point, I think it's a waste of time to make these sorts of
 adjustments.  The only thanks you're likely to get for it is complaints
 about cross-version behavioral changes.  Also, you're totally ignoring
 the thought that these different message levels might've been selected
 intentionally, back when the code was written.  Since there have been
 no field complaints about the inconsistency, what's your hurry to
 change it?  See Emerson.

My problem was that they issued _no_ message at all.  I am fine with
them issuing a warning if that's what people prefer.  As they are all
SET commands, they will be consistent.
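
For the record, the usage in question is simply (illustrative):

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- outside any BEGIN
    -- historically accepted silently; the commit under discussion turned
    -- such cases into ERRORs, and the counter-proposal is WARNINGs.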

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Shave a few instructions from child-process startup sequence

2013-11-20 Thread Peter Eisentraut
On 11/5/13, 2:47 AM, Gurjeet Singh wrote:
 On Mon, Nov 4, 2013 at 12:20 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 
 But we're not buying much.  A few instructions during postmaster
 shutdown
 is entirely negligible.
 
 
 The patch is for ClosePostmasterPorts(), which is called from every
 child process startup sequence (as $subject also implies), not in
 postmaster shutdown. I hope that adds some weight to the argument.

If there is a concern about future maintenance, you could add assertions
(in appropriate compile mode) that the rest of the array is indeed
PGINVALID_SOCKET.  I think that could be a win for both performance and
maintainability.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Autoconf 2.69 update

2013-11-20 Thread Magnus Hagander
On Wed, Nov 20, 2013 at 3:58 PM, Andres Freund and...@2ndquadrant.com wrote:


 On 2013-11-20 09:53:53 -0500, Tom Lane wrote:
  As a rule, you're not supposed to bother including the configure output
  script in a submitted diff anyway.  Certainly any committer worth his
  commit bit is going to ignore it and redo autoconf for himself.

 The committer maybe, but it's a PITA for reviewers on machines without
 the matching autoconf version around. Which at least currently
 frequently isn't packaged anymore...


That's going to be a PITA whichever way you go, though, because there is
not one standard about which autoconf version distros have. It's certainly
not all that have 2.69. I frequently do my builds on Ubuntu 12.04 for
example, which has 2.15,  2.59, 2.64 and 2.68 (don't ask me how they ended
up with that combination).

The point is - regardless of which version you chose, reviewers and
committers are going to have to deal with a local installation in many
cases anyway. So we might be better off just documenting that in a more
clear way.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Autoconf 2.69 update

2013-11-20 Thread Andrew Dunstan


On 11/20/2013 10:28 AM, Magnus Hagander wrote:


On Wed, Nov 20, 2013 at 3:58 PM, Andres Freund and...@2ndquadrant.com wrote:



On 2013-11-20 09:53:53 -0500, Tom Lane wrote:
 As a rule, you're not supposed to bother including the configure
output
 script in a submitted diff anyway.  Certainly any committer
worth his
 commit bit is going to ignore it and redo autoconf for himself.

The committer maybe, but it's a PITA for reviewers on machines without
the matching autoconf version around. Which at least currently
frequently isn't packaged anymore...


That's going to be a PITA whichever way you go, though, because there 
is not one standard about which autoconf version distros have. It's 
certainly not all that have 2.69. I frequently do my builds on Ubuntu 
12.04 for example, which has 2.15,  2.59, 2.64 and 2.68 (don't ask me 
how they ended up with that combination).


The point is - regardless of which version you chose, reviewers and 
committers are going to have to deal with a local installation in many 
cases anyway. So we might be better off just documenting that in a 
more clear way.

And it only matters if you're reviewing things that touch the configure 
setup. That's a tiny minority of patches.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 I would consider sidestepping this entire issue by having the
 stand-alone backend create a Unix-domain socket and have a client
 connect to that in the normal way.

Hmm.  But that requires the stand-alone backend to take on at least
some properties of a postmaster; at the very least, it would need to
accept some form of shutdown signal (not just EOF on its stdin).

Perhaps more to the point, I think this approach actually breaks one of
the principal good-thing-in-emergencies attributes of standalone mode,
namely being sure that nobody but you can connect.  With this, you're
right back to having a race condition as to whether your psql command
gets to the socket before somebody else.

I think we'd be better off trying to fix the security issue by
constraining what can be executed as a standalone backend.  Would
it work to insist that psql/pg_dump launch the program named postgres
from the same bin directory they're in, rather than accepting a path
from the connection string?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Andres Freund
On 2013-11-20 10:48:20 -0500, Tom Lane wrote:
 constraining what can be executed as a standalone backend.  Would
 it work to insist that psql/pg_dump launch the program named postgres
 from the same bin directory they're in, rather than accepting a path
 from the connection string?

But why do we want to start the server through the connection string
using PQconnectdb() in the first place? That doesn't really seem right to
me.
Something like PQstartSingleUser(dsn) returning an established connection
seems better to me.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-11-20 10:48:20 -0500, Tom Lane wrote:
 constraining what can be executed as a standalone backend.  Would
 it work to insist that psql/pg_dump launch the program named postgres
 from the same bin directory they're in, rather than accepting a path
 from the connection string?

 But why do we want to start the server through the connection string
 using PQconnectb() in the first place? That doesn't really seem right to
 me.
 Something like PQstartSingleUser(dsn) returning a established connection
 seems better to me.

That just pushes the problem up a level --- how are you going to tell
psql, pg_dump, or other programs that they should do that?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Handling GIN incomplete splits

2013-11-20 Thread Heikki Linnakangas

On 19.11.2013 14:48, Michael Paquier wrote:

Here is a review of the first three patches:
1) Further gin refactoring:
make check passes (core tests and contrib tests).
Code compiles without warnings.


Committed.


Then... About the patch... Even if I got little experience with code of
gin, moving the flag for search mode out of btree, as well as removing the
logic of PostingTreeScan really makes the code lighter and easier to follow.
Just wondering, why not simplifying as well ginTraverseLock:ginbtree.c at
the same time to something similar to that?
if (!GinPageIsLeaf(page) || searchMode == TRUE)
    return access;

/* we should relock our page */
LockBuffer(buffer, GIN_UNLOCK);
LockBuffer(buffer, GIN_EXCLUSIVE);

/* But root can become non-leaf during relock */
if (!GinPageIsLeaf(page))
{
    /* restore old lock type (very rare) */
    LockBuffer(buffer, GIN_UNLOCK);
    LockBuffer(buffer, GIN_SHARE);
}
else
    access = GIN_EXCLUSIVE;
return access;
Feel free to discard as I can imagine that changing such code would make
back-branch maintenance more difficult and it would increase conflicts with
patches currently in development.


Yeah, might be more readable to write it that way. There's a lot of 
cleanup that could be done to the gin code; these patches are by no 
means the end of it.



2) Refactoring of internal gin btree (needs patch 1 applied first):
make check passes (core tests and contrib tests).
Code compiles without warnings.


Committed.


Yep, removing ginPageGetLinkItup makes sense. Just to be picky, I would
have put the arguments of GinFormInteriorTuple replacing ginPageGetLinkItup
on 3 separate lines just for readability.


Ok, did that.


In dataPrepareDownlink:gindatapage.c, depending on whether lpage is a leaf page
or not, isn't it inconsistent with the older code not to use
GinDataPageGetItemPointer and GinDataPageGetPostingItem to set
btree->pitem.key?


Hmm. The old code in dataSplitPage() didn't use 
GinDataPageGetItemPointer/PostingItem either.


The corresponding code in ginContinueSplit did, though. There was 
actually an inconsistency there: the ginContinueSplit function took the 
downlink's key from the last item on the page (using maxoff), while 
dataSplitPage took it from the right bound using 
GinDataPageGetRightBound(). Both are the same, dataSplitPage copies the 
value from the last item to the right bound, so it doesn't make a 
difference. They would diverge if the last item on the page is deleted, 
though, so the old coding in ginContinueSplit was actually a bit suspicious.



In ginContinueSplit() in ginxlog.c, could this code be removed? It looks
like its deletion has been forgotten:

    /*
     * elog(NOTICE, "ginContinueSplit root:%u l:%u r:%u", split->rootBlkno,
     * split->leftBlkno, split->rightBlkno);
     */


Yeah, that's just leftover debug code. But again, I'll leave that for 
another patch (in fact, the whole function will go away with the fourth 
patch, anyway).



3) More refactoring (needs patches 1 and 2):
make check passes (core tests and contrib tests).
Code compiles without warnings.
Perhaps this patch would have been easier to read with context diffs :) It
just moves code around so nothing to say.


Committed.

Thanks for the review! I'll let you finish the review of the fourth 
patch. Meanwhile, I'll take another look at Alexander's gin packed 
posting items patch, and see how badly these commits bitrotted it.

- Heikki




Re: [HACKERS] Extra functionality to createuser

2013-11-20 Thread Christopher Browne
On Tue, Nov 19, 2013 at 11:54 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 On further tests, I found inconsistency in behavior when some special
 characters are used in role names.

 1. Test for role name containing quotes
 a. In psql, create a role containing quotes in role name.
create role amitk in role "test_ro'le_3";

 b. Now if we try to make a new role member of this role using
 createuser utility, it gives error
 try-1
 createuser.exe -g test_ro'le_3 -p 5446 amitk_2
 createuser: creation of new role failed: ERROR:  unterminated quoted
 string at or near 'le_3;
 LINE 1: ... NOCREATEDB NOCREATEROLE INHERIT LOGIN IN ROLE test_ro'le_3;
 try-2
 createuser.exe -g test_ro'le_3 -p 5446 amitk
 createuser: creation of new role failed: ERROR:  unterminated quoted
 string at or near 'le_3;
 LINE 1: ... NOCREATEDB NOCREATEROLE INHERIT LOGIN IN ROLE test_ro'le_3;

 c. If I try quoted string in new role to be created, it works fine.
 createuser.exe -p 5446 am'itk_2

 As quoted strings work well for role names, I think they should work
 with the -g option as well.

 2. Test for role name containing special character ';' (semicolon)
 a. create role "test;_1";

 b. Now if we try to make a new role member of this role using
 createuser utility, it gives error
 try-1
 createuser.exe -g test;_1 -p 5446 amitk_4
 createuser: creation of new role failed: ERROR:  syntax error at or near _1
 LINE 1: ...RUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN IN ROLE test;_1;
 try-2
 createuser.exe -g test;_1 -p 5446 amitk_4
 createuser: creation of new role failed: ERROR:  syntax error at or near _1
 LINE 1: ...RUSER NOCREATEDB NOCREATEROLE INHERIT LOGIN IN ROLE test;_1;
 ^
 try-3
 createuser.exe -g 'test;_1' -p 5446 amitk_4
 createuser: creation of new role failed: ERROR:  syntax error at or
 near 'test;_1'
 LINE 1: ...SER NOCREATEDB NOCREATEROLE INHERIT LOGIN IN ROLE 'test;_1';

 c. If I try semicolon in new role to be created, it works fine.
 createuser.exe -p 5446 amit;k_3

 As semicolons work well in role names, I think they should work with the -g
 option as well.

I was not unconscious of there being the potential for issue here; there is an
easy answer of double quoting the string, thus:

diff --git a/src/bin/scripts/createuser.c b/src/bin/scripts/createuser.c
index 88b8f2a..04ec324 100644
--- a/src/bin/scripts/createuser.c
+++ b/src/bin/scripts/createuser.c
@@ -308,7 +308,7 @@ main(int argc, char *argv[])
 	if (conn_limit != NULL)
 		appendPQExpBuffer(&sql, " CONNECTION LIMIT %s", conn_limit);
 	if (roles != NULL)
-		appendPQExpBuffer(&sql, " IN ROLE %s", roles);
+		appendPQExpBuffer(&sql, " IN ROLE \"%s\"", roles);
 	appendPQExpBufferStr(&sql, ";\n");
 
 	if (echo)

I was conscious of not quoting it.  Note that other parameters are not quoted
either, so I imagined I was being consistent with that.

I have added the above change, as well as rebasing, per Peter's recommendation.
-- 
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"
diff --git a/doc/src/sgml/ref/createuser.sgml b/doc/src/sgml/ref/createuser.sgml
index 2f1ea2f..5a38d2e 100644
--- a/doc/src/sgml/ref/createuser.sgml
+++ b/doc/src/sgml/ref/createuser.sgml
@@ -131,6 +131,16 @@ PostgreSQL documentation
  </varlistentry>
 
  <varlistentry>
+  <term><option>-g <replaceable class="parameter">roles</replaceable></option></term>
+  <term><option>--roles=<replaceable class="parameter">roles</replaceable></option></term>
+  <listitem>
+   <para>
+    Indicates roles to which this role will be added immediately as a new
+    member.
+   </para>
+  </listitem>
+ </varlistentry>
+
+ <varlistentry>
   <term><option>-i</option></term>
   <term><option>--inherit</option></term>
   <listitem>
diff --git a/src/bin/scripts/createuser.c b/src/bin/scripts/createuser.c
index 83623ea..04ec324 100644
--- a/src/bin/scripts/createuser.c
+++ b/src/bin/scripts/createuser.c
@@ -24,6 +24,7 @@ main(int argc, char *argv[])
 		{"host", required_argument, NULL, 'h'},
 		{"port", required_argument, NULL, 'p'},
 		{"username", required_argument, NULL, 'U'},
+		{"roles", required_argument, NULL, 'g'},
 		{"no-password", no_argument, NULL, 'w'},
 		{"password", no_argument, NULL, 'W'},
 		{"echo", no_argument, NULL, 'e'},
@@ -57,6 +58,7 @@ main(int argc, char *argv[])
 	char	   *host = NULL;
 	char	   *port = NULL;
 	char	   *username = NULL;
+	char	   *roles = NULL;
 	enum trivalue prompt_password = TRI_DEFAULT;
 	bool		echo = false;
 	bool		interactive = false;
@@ -83,7 +85,7 @@ main(int argc, char *argv[])
 
 	handle_help_version_opts(argc, argv, "createuser", help);
 
-	while ((c =

Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Andres Freund
On 2013-11-20 11:08:33 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  On 2013-11-20 10:48:20 -0500, Tom Lane wrote:
  constraining what can be executed as a standalone backend.  Would
  it work to insist that psql/pg_dump launch the program named postgres
  from the same bin directory they're in, rather than accepting a path
  from the connection string?
 
  But why do we want to start the server through the connection string
  using PQconnectdb() in the first place? That doesn't really seem right to
  me.
  Something like PQstartSingleUser(dsn) returning an established connection
  seems better to me.
 
 That just pushes the problem up a level --- how are you going to tell
 psql, pg_dump, or other programs that they should do that?

An explicit parameter. A program imo explicitly needs to be aware that a
PQconnect() suddenly starts forking and such. What if it is using
threads? What if it has its own SIGCHLD handler for other business it's
doing?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] additional json functionality

2013-11-20 Thread Hannu Krosing
On 11/18/2013 06:49 PM, Josh Berkus wrote:
 On 11/18/2013 06:13 AM, Peter Eisentraut wrote:
 On 11/15/13, 6:15 PM, Josh Berkus wrote:
 Thing is, I'm not particularly concerned about *Merlin's* specific use
 case, which there are ways around. What I am concerned about is that we
 may have users who have years of data stored in JSON text fields which
 won't survive an upgrade to binary JSON, because we will stop allowing
 certain things (ordering, duplicate keys) which are currently allowed in
 those columns.  At the very least, if we're going to have that kind of
 backwards compatibilty break we'll want to call the new version 10.0.
 We could do something like SQL/XML and specify the level of validity
 in a typmod, e.g., json(loose), json(strict), etc.
 Doesn't work; with XML, the underlying storage format didn't change.
 With JSONB, it will ... so changing the typemod would require a total
 rewrite of the table.  That's a POLS violation if I ever saw one
We do rewrites on typmod changes already.

To me having json(string) and json(hstore) does not seem too bad.
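
Sketched out, that hypothetical typmod syntax might look like the following
(none of this is implemented; it is an illustration only):

    -- hypothetical syntax, for illustration only
    CREATE TABLE docs (body json(string));

    -- per the point above, changing the typmod would simply force a
    -- table rewrite, as typmod changes already do today
    ALTER TABLE docs ALTER COLUMN body TYPE json(hstore);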

Cheers
Hannu




Re: [HACKERS] Extra functionality to createuser

2013-11-20 Thread Christopher Browne
Wait, that doesn't work if more than one role is added, as they get
merged together by the quoting.

A somewhat ugly amount of quoting can be done at the shell level to
induce double quotes.

$ createuser -g \"test_rol'e_3\" usequoted3

I note that similar (with not quite identical behaviour) issues apply
to the user name.  Perhaps the
resolution to this is to leave quoting issues to the administrator.
That simplifies the problem away.
I suspect that the apparatus needed to do a thorough solution (e.g. -
parse the string, and do something
smarter) may be larger than is worth getting into.
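
For the archives, a minimal sketch of what per-role quoting could look like
in createuser.c. It assumes fmtId() from dumputils.h (already used there to
quote the new role name) and that role names themselves contain no commas;
append_quoted_roles() is an illustrative helper, not code from the patch:

    /*
     * Sketch only: append a comma-separated role list, quoting each role
     * individually with fmtId() so quotes and semicolons in names survive.
     */
    static void
    append_quoted_roles(PQExpBuffer sql, const char *roles)
    {
        char       *copy = pg_strdup(roles);
        char       *role;
        bool        first = true;

        appendPQExpBufferStr(sql, " IN ROLE ");
        for (role = strtok(copy, ","); role != NULL; role = strtok(NULL, ","))
        {
            if (!first)
                appendPQExpBufferStr(sql, ", ");
            appendPQExpBufferStr(sql, fmtId(role)); /* double-quotes as needed */
            first = false;
        }
        free(copy);
    }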




Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Heikki Linnakangas

On 20.11.2013 17:06, Kevin Grittner wrote:

Andres Freund and...@2ndquadrant.com wrote:

On 2013-11-20 06:21:13 -0800, Kevin Grittner wrote:




  So as long as there are no open transactions or prepared
  transactions on the master which started before the release with
  the fix is applied, VACUUM FREEZE would be guaranteed to work?
  Since I don't see how a non-prepared transaction would be running
  from before a minor release upgrade, that just means we have to
  make sure there are no prepared transactions from before the
  upgrade?


That's not a bad point. So the way to fix it would be:

1) Restart the standby to the new minor release, wait for catchup
2) Restart the primary (fast or smart) to the new minor release
3) Acquire enough new xids to make sure we cross a clog page (?)
4) Jot down a new xid:  SELECT txid_current()::bigint % (1::bigint<<33-1)
5) vacuumdb -z -a
6) Ensure that there are no prepared xacts older than 3) around
SELECT *
FROM pg_prepared_xacts
ORDER BY age(transaction) DESC LIMIT 1;
7) Ensure the xmin horizon is above the one from 3):
SELECT datname, datfrozenxid
FROM pg_database
WHERE datname != 'template0'
ORDER BY age(datfrozenxid) DESC LIMIT 1;
8) Get the current lsn: SELECT pg_current_xlog_location();
9) Verify on each standby that SELECT pg_last_xlog_receive_location() is
past 8)
10) be happy

I am not sure how we can easily compute that 6) and 7) are past 3) in
the presence of xid wraparounds.
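
(To make steps 8 and 9 above concrete: on 9.2 or later the locations can be
compared directly. The LSN below is made up.)

    -- on the primary, step 8:
    SELECT pg_current_xlog_location();    -- say it returns '16/B374D848'

    -- on each standby, step 9; a non-negative result means the standby
    -- has received WAL at least up to the primary's location:
    SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(),
                                 '16/B374D848') >= 0;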



I may well be missing something here, but wouldn't it be sufficient to?:
1) Restart the standby to the new minor release, wait for catchup
2) Restart the primary (fast or smart) to the new minor release
3) Run VACUUM FREEZE (optionally with ANALYZE) in each database on primary
4) Run CHECKPOINT command on primary, or just wait for one to run
5) Wait for standby to process to the checkpoint
6) Be happy


Isn't it possible that the standby has already incorrectly set 
HEAP_XMIN_INVALID hint bit on a page? The full page images generated by 
VACUUM FREEZE will correct the damage, but if not, e.g. because 
full_page_writes=off, strange things will happen.


Personally, I wouldn't trust anything less than a new base backup.

- Heikki




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 I think we'd be better off trying to fix the security issue by
 constraining what can be executed as a standalone backend.  Would
 it work to insist that psql/pg_dump launch the program named postgres
 from the same bin directory they're in, rather than accepting a path
 from the connection string?

Couldn't that be an issue for people who have multiple major versions of
binaries installed?  In particular, the default on the system for psql
might be 9.3 while the cluster you're trying to recover may be 9.2.  Of
course, in that case you might say to use the 9.2 psql, which would be
fair, but what if you're looking to get the data out of the 9.2 DB and
into the 9.3?  In that case, we'd recommend using the 9.3 pg_dump.

Basically, I'd suggest that we try and avoid things like the binaries
have to be in the same directory..  With regard to access to the
socket, perhaps we create our own socket w/ 0600 and use that?  Seems
like it'd be sufficient to prevent the 'normal' users from getting into
the DB while we're working on it.  If there are two different individuals
getting into the same system and trying to start the same cluster as
the same unix user, well..  I'm not convinced we'd be able to come up
with a perfect solution to that anyway.

Thanks,

Stephen




Re: [HACKERS] review: autovacuum_work_mem

2013-11-20 Thread Nigel Heron
On Mon, Nov 18, 2013 at 11:36 PM, Peter Geoghegan p...@heroku.com wrote:
 Please reply to the original thread in future (even if the Reply-to
 Message-ID is the same, I see this as a separate thread).

sorry about that; when I added "review" to the subject gmail removed
the thread info.
For reference, the original thread started here:
http://www.postgresql.org/message-id/cam3swztwla8ef2dtvbwm1b1zevu_en3n4rregnu5_zfyjng...@mail.gmail.com


 Revision attached.
 --
 Peter Geoghegan

Review for Peter Geoghegan's v2 patch in CF 2013-11:
https://commitfest.postgresql.org/action/patch_view?id=1262

Submission review
-
* Is the patch in a patch format which has context?
Yes

* Does it apply cleanly to the current git master
(04eee1fa9ee80dabf7cf4b8b9106897272e9b291)?
patching file src/backend/commands/vacuumlazy.c
Hunk #2 succeeded at 1582 (offset 1 line).

* Does it include reasonable tests, necessary doc patches, etc?
Documentation patches included.
No additional tests.

Usability review
-
* Does the patch actually implement that?
Yes.

* Do we want that?
Yes. The original thread has references, see
http://www.postgresql.org/message-id/cam3swztwla8ef2dtvbwm1b1zevu_en3n4rregnu5_zfyjng...@mail.gmail.com

* Do we already have it?
No.

* Does it follow SQL spec, or the community-agreed behavior?
SQL spec: n/a
community: Yes. The original thread has references, see
http://www.postgresql.org/message-id/cam3swztwla8ef2dtvbwm1b1zevu_en3n4rregnu5_zfyjng...@mail.gmail.com

* Does it include pg_dump support (if applicable)?
n/a

Feature test
-
* Does the feature work as advertised?
Yes.

* Are there corner cases the author has failed to consider?
None that I can see.

* Are there any assertion failures or crashes?
No.

Performance review
-
* Does the patch slow down simple tests?
No.

* If it claims to improve performance, does it?
n/a

* Does it slow down other things?
No.

Coding review
-
* Does it follow the project coding guidelines?
Yes.

* Are there portability issues?
None that I can see.

* Will it work on Windows/BSD etc?
No problems that I can see. (I only tested it on Linux, though.)

* Does it do what it says, correctly?
Yes.

* Does it produce compiler warnings?
No.

* Can you make it crash?
No.

Architecture review
-
* Is everything done in a way that fits together coherently with other
features/modules?
Yes.

* Are there interdependencies that can cause problems?
No.

-nigel.




Re: [HACKERS] additional json functionality

2013-11-20 Thread David Johnston
Hannu Krosing-3 wrote
 On 11/18/2013 06:49 PM, Josh Berkus wrote:
 On 11/18/2013 06:13 AM, Peter Eisentraut wrote:
 On 11/15/13, 6:15 PM, Josh Berkus wrote:
 Thing is, I'm not particularly concerned about *Merlin's* specific use
 case, which there are ways around. What I am concerned about is that we
 may have users who have years of data stored in JSON text fields which
 won't survive an upgrade to binary JSON, because we will stop allowing
 certain things (ordering, duplicate keys) which are currently allowed
 in
 those columns.  At the very least, if we're going to have that kind of
 backwards compatibilty break we'll want to call the new version 10.0.
 We could do something like SQL/XML and specify the level of validity
 in a typmod, e.g., json(loose), json(strict), etc.
 Doesn't work; with XML, the underlying storage format didn't change.
 With JSONB, it will ... so changing the typemod would require a total
 rewrite of the table.  That's a POLS violation if I ever saw one
 We do rewrites on typmod changes already.
 
 To me having json(string) and json(hstore) does not seem too bad.

Three things:

1) How would this work in the face of functions that erase typemod
information?
2) json [no type mod] would have to effectively default to json(string)?
3) how would #1 and #2 interact?

I pondered the general idea but my (admittedly limited) gut feeling is that
using typemod would possibly be technically untenable and from an end-user
perspective would be even more confusing than having two types.

David J.










Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Andres Freund
On 2013-11-20 17:19:42 +0100, Andres Freund wrote:
  That just pushes the problem up a level --- how are you going to tell
  psql, pg_dump, or other programs that they should do that?
 
 An explicit parameter. A program imo explicitly needs to be aware that a
 PQconnect() suddenly starts forking and such. What if it is using
 threads? What if it has it's own SIGCHLD handler for other business it's
 doing?

Just as an example, consider what happens if somebody does pg_dump -j?
Or somebody specifies such a connection for primary_conninfo?

I am also not sure whether vacuumdb -a/reindexdb -a (both not unlikely
commands to use for single user mode) are careful enough not to have
parallel connections open?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Andres Freund
On 2013-11-20 18:25:56 +0200, Heikki Linnakangas wrote:
 Isn't it possible that the standby has already incorrectly set
 HEAP_XMIN_INVALID hint bit on a page? The full page images generated by
 VACUUM FREEZE will correct the damage, but if not, e.g. because
 full_page_writes=off, strange things will happen.

The xlog_heap_freeze records should repair that afaics.

 Personally, I wouldn't trust anything less than a new base backup.

But I still tend to agree.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Logging WAL when updating hintbit

2013-11-20 Thread Sawada Masahiko
On Wed, Nov 20, 2013 at 9:19 PM, Dilip kumar dilip.ku...@huawei.com wrote:
 On 19 November 2013 22:19, Sawada Masahiko Wrote
 

 Thanks!
 I took it wrong.
 I think there is quite a large difference in the amount of WAL.

  Did you test about amount of WAL size in your patch?

 Not yet. I will do that.

 1. Patch applies cleanly to master HEAD.
 2. No Compilation Warning.
 3. It works as per the patch expectation.

 Some Suggestion:
 1. Add new WAL level (all) in comment in postgresql.conf
wal_level = hot_standby # minimal, archive, or 
 hot_standby

Thank you for reviewing the patch.
Yes, I will do that. And I'm going to implement documentation patch.


 Performance Test Result:
 Run with pgbench for 300 seconds

 WAL level : hot_standby
 WAL Size  :   111BF8A8
 TPS   :   125

 WAL level : all
 WAL Size  :   11DB5AF8
 TPS   :   122

 * TPS is almost constant but WAL size is increased by around 12 MiB (0x11DB5AF8 - 0x111BF8A8 = 12,542,544 bytes).

 This is the first level of observation, I will continue to test few more 
 scenarios including performance test on standby.

Thank you for the performance testing.
According to the test results, the TPS of 'all' is slightly lower than that
of 'hot_standby', and the WAL size is increased. I think that it should be
a separate parameter. TPS on the master side is almost constant, so this
patch might still have a benefit if the performance tests on the standby
side show an improvement.

Regards,

---
Sawada Masahiko




Re: [HACKERS] [PATCH] Add transforms feature

2013-11-20 Thread Peter Eisentraut
On 11/15/13, 11:04 AM, Dimitri Fontaine wrote:
   - Documentation style seems to be different from the man page
     or reference docs style that we use elsewhere, and is instead
     deriving the general case from examples. Reads strange.

Which specific section do you have in mind?  It's hard to explain this
feature in abstract terms, I think.

   - The internal datatype argument and return type discussion for
 function argument looks misplaced, but I don't have a better
 proposition for that.

OK, maybe I'll put that in parentheses or a separate paragraph.

   - Do we need an ALTER TRANSFORM command?
 
 Usually we have at least an Owner for the new objects and a command
 to change the owner. Then should we be able to change the
 function(s) used in a transform?

We don't have ALTER CAST either, and no one's been too bothered about
that.  It's possible, of course.

   - Should transform live in a schema?
 
 At first sight, no reason why, but see next point about a use case
 that we might be able to solve doing that.

Transforms don't have a name, so I don't quite see what you mean here.

   - SQL Standard has something different named the same thing,
 targetting client side types apparently. Is there any reason why we
 would want to stay away from using the same name for something
 really different in PostgreSQL?

Let's review that, as there as been some confusion about that.  The SQL
standard syntax is

CREATE TRANSFORM FOR type groupname (...details...);

and then there is

SET DEFAULT TRANSFORM GROUP groupname
SET TRANSFORM GROUP FOR TYPE type groupname

This is essentially an elaborate way to have custom input/output
formats, like DateStyle or bytea_output.

(You can find examples of this in the IBM DB2 documentation.  Some of
their clients apparently set a certain transform group automatically,
allowing you to set per-interface output formats.)

The proposed syntax, on the other hand, is

CREATE TRANSFORM FOR type LANGUAGE lang (...details...);
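
Spelled out for the hstore/PL/Perl case that comes up below, the details
might look like this (a sketch; the support-function names are
illustrative):

    CREATE TRANSFORM FOR hstore LANGUAGE plperl (
        FROM SQL WITH FUNCTION hstore_to_plperl(internal),
        TO SQL WITH FUNCTION plperl_to_hstore(internal)
    );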

So you could consider LANGUAGE lang to be the implicit transform group
of language lang, if you like.

Or you could consider that this is a situation like VIEW vs.
MATERIALIZED VIEW: they sound the same, they are a bit alike, but the
implementation details are different.

All obvious synonyms of transform (conversion, translation, etc.) are
already in use.

 On the higher level design, the big question here is about selective
 behavior. As soon as you CREATE TRANSFORM FOR hstore LANGUAGE plperl
 then any plperl function will now receive its hstore arguments as a
 proper perl hash rather than a string.
 
 Any pre-existing plperl function with hstore arguments or return type
 then needs to be upgraded to handle the new types nicely, and some of
 those might not be under the direct control of the DBA running the
 CREATE TRANSFORM command, when using some plperl extensions for example.

I had proposed disallowing installing a transform that would affect
existing functions.  That was rejected or deemed unnecessary.  You can't
have it both ways. ;-)

 A mechanism allowing for the transform to only be used in some functions
 but not others might be useful. The simplest such mechanism I can think
 of is modeled against the PL/Java classpath facility as specified in the
 SQL standard: you attach a classpath per schema.

Anything that's a problem per-database would also be a problem per schema.

 Should using the schema to that ends be frowned upon, then we need a way
 to register each plperl function against using or not using the
 transform facility, defaulting to not using anything. Maybe something
 like the following:
 
   CREATE FUNCTION foo(hash hstore, x ltree)
  RETURNS hstore
  LANGUAGE plperl
  USING TRANSFORM FOR hstore, ltree
   AS $$ … $$;

This is a transition problem.  Nobody is required to install the
transforms into their existing databases.  They probably shouldn't.

How many people actually use hstore with PL/Perl or PL/Python now?
Probably not many, because it's weird.

I like to think about how this works for new development:  Here is my
extension type, here is how it interfaces with languages.  Once you have
established that, you don't want to have to repeat that every time you
write a function.  That's error prone and cumbersome.  And anything
that's set per schema or higher is a dependency tracking and caching mess.

Also, extension types should work the same as built-in types.
Eventually, I'd like to rip out the hard-coded data type support in
PL/Python and replace it with built-in transforms.  Even if we don't
actually do it, conceptually it should be possible.  Now if we require
USING TRANSFORM FOR int, bytea every time, we'd have taken a big step
back.  Effectively, we already have built-in transforms in PL/Python.
We have added a few more over the years.  It's been a bit of a pain from
time to time.  At least, with this feature we'd be moving this decision
into user space and give 

Re: [HACKERS] GIN improvements part 1: additional information

2013-11-20 Thread Heikki Linnakangas

On 06.11.2013 17:36, Alvaro Herrera wrote:

Just for my own illumination, can someone explain this bit?

+ If a posting list is too large to store in-line in a key entry, a posting tree
+ is created. A posting tree is a B-tree structure, where the ItemPointer is
+ used as the key. At the leaf-level, item pointers are stored compressed, in
+ varbyte encoding.

I think the first ItemPointer mentioned (the key) refers to a TID
pointing to the index, and item pointers stored compressed refers to
the TIDs pointing to the heap (the data).  Is that correct?


No, they both refer to TIDs pointing to the heap.


I'm also interested in the FIXME explain varbyte encoding explanation
currently missing, if somebody can write it down ...


Alexander's latest version filled in that explanation (haven't read it 
myself yet)
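
For the archives: varbyte encoding here is the usual base-128 scheme, in
which each byte carries 7 bits of the value and the high bit says whether
another byte follows, so small item-pointer deltas take a single byte.  A
rough standalone sketch (illustrative only; the patch's exact on-disk
details may differ):

    #include <stdint.h>

    /* Encode val least-significant 7 bits first; returns the byte count. */
    static int
    encode_varbyte(uint64_t val, unsigned char *buf)
    {
        int         len = 0;

        do
        {
            unsigned char chunk = val & 0x7F;

            val >>= 7;
            if (val != 0)
                chunk |= 0x80;      /* continuation bit: more bytes follow */
            buf[len++] = chunk;
        } while (val != 0);

        return len;
    }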


- Heikki




Re: [HACKERS] additional json functionality

2013-11-20 Thread Greg Stark
On Sat, Nov 16, 2013 at 12:32 AM, Josh Berkus j...@agliodbs.com wrote:

 On 11/15/2013 04:00 PM, David Johnston wrote:
  Looking at this a different way: could we just implement BSON and leave
 json
  alone?
 
  http://bsonspec.org/

 In short?  No.

 For one thing, our storage format is different from theirs (better,
 frankly), and as a result is not compliant with their standard.


Not being super familiar with either BSON or JSONB, what advantages are we
gaining from the difference?

It might be interesting if we supported the same binary representation so
we could have a binary send/recv routines that don't need to do any
serialization/deserialization. Especially since a standard format would
potentially be skipping the serialization/deserialization on both the
server and client.



-- 
greg


[HACKERS] WITH ORDINALITY versus column definition lists

2013-11-20 Thread Tom Lane
Consider the following case of a function that requires a column
definition list (example borrowed from the regression tests):

create function array_to_set(anyarray) returns setof record as $$
  select i AS index, $1[i] AS value from generate_subscripts($1, 1) i
$$ language sql strict immutable;

select * from array_to_set(array['one', 'two']) as t(f1 int,f2 text);

What if you want to add ordinality to that?  In HEAD you get:

regression=# select * from array_to_set(array['one', 'two']) with ordinality as 
t(f1 int,f2 text);
ERROR:  WITH ORDINALITY is not supported for functions returning record
LINE 1: select * from array_to_set(array['one', 'two']) with ordinal...
  ^

which is a restriction imposed by the original WITH ORDINALITY patch.
The currently-submitted patch removes this restriction (although not the
documentation about it :-(), and what you get is

regression=# select * from array_to_set(array['one', 'two']) with ordinality as 
t(f1 int,f2 text);
 f1 | f2  | ordinality 
----+-----+------------
  1 | one |          1
  2 | two |          2
(2 rows)

Notice that the coldef list doesn't include the ordinality column, so in
this syntax there is no way to choose a different name for the ordinality
column.  The new TABLE syntax provides an arguably-saner solution:

regression=# select * from table(array_to_set(array['one', 'two']) as (f1 
int,f2 text)) with ordinality;
 f1 | f2  | ordinality 
----+-----+------------
  1 | one |          1
  2 | two |          2
(2 rows)

regression=# select * from table(array_to_set(array['one', 'two']) as (f1 
int,f2 text)) with ordinality as t(a1,a2,a3);
 a1 | a2  | a3 
----+-----+----
  1 | one |  1
  2 | two |  2
(2 rows)

Now, it seems to me that putting WITH ORDINALITY on the same syntactic
level as the coldeflist is pretty confusing, especially since it behaves
differently than WITH ORDINALITY with a simple alias list:

regression=# select * from generate_series(1,2) with ordinality as t(f1,f2);
 f1 | f2 
----+----
  1 |  1
  2 |  2
(2 rows)

Here, the alias list does extend to the ordinality column.

It seems to me that we don't really want this behavior of the coldeflist
not including the ordinality column.  It's operating as designed, maybe,
but it's unexpected and confusing.  We could either

1. Reinsert HEAD's prohibition against directly combining WITH ORDINALITY
with a coldeflist (with a better error message and a HINT suggesting that
you can get what you want via the TABLE syntax).

2. Change the parser so that the coldeflist is considered to include the
ordinality column, for consistency with the bare-alias case.  We'd
therefore insist that the last coldeflist item be declared as int8, and
then probably have to strip it out internally.

Thoughts?

regards, tom lane




Re: [HACKERS] -d option for pg_isready is broken

2013-11-20 Thread Josh Berkus
On 11/20/2013 01:55 AM, Fujii Masao wrote:
 Yeah, I agree that we should make the logic of pg_isready more future-proof
 in 9.4dev. One idea is to expose internal_ping() as a libpq function. Then,
 instead of just calling PQpingParams(), we can do PQconnectStartParams(),
 libpq-function-version-of-internal_ping(), PQhost(), PQport(), and PQfinish().
 That is, we get to know the host and port information by calling
 PQhost() and PQport(),
 after trying to connect to the server.

+1

This would allow writers of client drivers to implement their own
pg_ping() functions instead of needing to go through the shell (as I
currently do with pg_isready).
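
Something along these lines, that is (a sketch; PQpingConn() is a
hypothetical name for an exposed internal_ping() and does not exist in
libpq today):

    #include <stdio.h>
    #include "libpq-fe.h"

    int
    main(void)
    {
        PGconn     *conn = PQconnectStart("connect_timeout=3");
        PGPing      status;

        status = PQpingConn(conn); /* hypothetical: run internal_ping() */

        /* host and port are known even when the server is unreachable */
        printf("%s:%s - %d\n", PQhost(conn), PQport(conn), status);
        PQfinish(conn);
        return status == PQPING_OK ? 0 : 1;
    }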

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-20 Thread Josh Berkus
On 11/20/2013 04:52 AM, Robert Haas wrote:
 I confess to being a bit perplexed by why we want hstore and json to
 share a common binary format.  hstore doesn't store hierarchical data;
 json does.  If we design a binary format for json, doesn't that just
 obsolete hstore?  Why go to a lot of trouble to extend our home-grown
 format if we've got a standard format to plug into?

See hstore2 patch from Teodor.  That's what we're talking about, not
hstore1, which as you point out is non-hierarchical.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Storage formats for JSON WAS: additional json functionality

2013-11-20 Thread Josh Berkus
Greg,

 Not being super familiar with either BSON or JSONB, what advantages are we
 gaining from the difference?

We have the JSONB/Hstore2 format *now*, and it can go into 9.4.  This
makes it inherently superior to any theoretical format.  So any further
discussion (below) is strictly academic, for the archives.

 It might be interesting if we supported the same binary representation so
 we could have a binary send/recv routines that don't need to do any
 serialization/deserialization. Especially since a standard format would
 potentially be skipping the serialization/deserialization on both the
 server and client.

Leaving aside that we don't want to implement 10gen's spec (because of
major omissions like decimal numbers), the fundamental issue with
binary-update-in-place is that nobody (certainly not 10gen) has devised
a way to do it without having ginormous amounts of bloat in value
storage.  The way BSON allows for binary-update-in-place, as I
understand it, is to pad all values with lots of zero bytes and to
prohibit compression, either of which are much larger losses for
performance than serialization is.

In other words, binary-update-in-place seems like a clear win for
hierarchical data storage, but practically it's not.

Of course, an investigation into this by someone with much more
knowledge of low-level storage than me (most of this list) is welcome.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-20 Thread Andrew Dunstan


On 11/20/2013 12:50 PM, Greg Stark wrote:


On Sat, Nov 16, 2013 at 12:32 AM, Josh Berkus j...@agliodbs.com wrote:


On 11/15/2013 04:00 PM, David Johnston wrote:
 Looking at this a different way: could we just implement BSON
and leave json
 alone?

 http://bsonspec.org/

In short?  No.

For one thing, our storage format is different from theirs (better,
frankly), and as a result is not compliant with their standard.


Not being super familiar with either BSON or JSONB, what advantages 
are we gaining from the difference?


It might be interesting if we supported the same binary representation 
so we could have a binary send/recv routines that don't need to do any 
serialization/deserialization. Especially since a standard format 
would potentially be skipping the serialization/deserialization on 
both the server and client.







To start with, it doesn't support arbitrary precision numerics. That 
means that right off the bat it's only accepting a subset of what the 
JSON spec allows. 'Nuff said, I think.


cheers

andrew





Re: [HACKERS] New option for pg_basebackup, to specify a different directory for pg_xlog

2013-11-20 Thread Gavin Flower

On 20/11/13 23:43, Haribabu kommi wrote:

On 19 November 2013 19:12 Fujii Masao wrote:

On Tue, Nov 19, 2013 at 9:14 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:

On 18 November 2013 23:30 Fujii Masao wrote:

On Tue, Nov 19, 2013 at 12:01 AM, Haribabu kommi
haribabu.ko...@huawei.com wrote:

Thanks for newer version of the patch!

I found that the empty base directory is created and remains even
when the same directory is specified in both -D and --xlogdir and

the

error occurs.
I think that it's
better to throw an error in that case before creating any new

directory.

Thought?

The patch can only find out whether the provided base and xlog directories
are the same by creating the base directory. To solve this problem the
following options are possible:

1. No problem, as it is just an empty base directory, so it can be reused
   the next time -- leave it as it is.
2. Once the error is identified, the base directory can be deleted.
3. Write a new function that removes the parent references from the
   provided path, to identify whether the absolute paths of the base and
   xlog directories are the same or not.

Please provide your suggestions.


+xlogdir = get_absolute_path(xlog_dir);

xlog_dir must be an absolute path. ISTM we don't do the above. No?

It is required, as the user can provide a path such as

/home/installation/bin/../bin/data

The provided path is already an absolute path, but when comparing it
with the data directory path it will not match.

Okay, maybe I understand you. In order to know the real absolute path,
basically we need to create the directory specified in --xlogdir,
change the working directory to it and calculate the parent path.
You're worried about the cases as follows, for example.
Right?

* path containing .. like /aaa/bbb/../ccc is specified in --xlogdir
* symbolic link is specified in --xlogdir

On the second thought, I'm thinking that it might be overkill to add
such not simple code for that small benefit.

I tried stat'ing the two directories, but that approach has a problem on
Windows.
So I modified the old approach to detect a limited set of errors. Updated patch attached.
This will detect and throw an error in the following scenarios.
1. pg_basebackup -D /home/data --xlogdir=/home/data
2. pg_basebackup -D data --xlogdir=/home/data -- home is the CWD
3. pg_basebackup -D ../data --xlogdir=/data -- home is the CWD
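
(For the record, a minimal sketch of such a limited check, using
canonicalize_path() from src/port -- which trims trailing slashes and
collapses "." and ".." where possible, but does not resolve symlinks --
and assuming both paths have already been made absolute:)

    /* Sketch only: best-effort check, not the patch's actual code. */
    canonicalize_path(xlog_dir);
    canonicalize_path(basedir);
    if (strcmp(xlog_dir, basedir) == 0)
    {
        fprintf(stderr, _("%s: transaction log directory and base directory must be different\n"),
                progname);
        exit(1);
    }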

Please let me know your suggestions.

Regards,
Hari babu.


I don't think Postgres on other systems should be hobbled by the 
limitations of Microsoft software!


If certain features of Postgres are either not available, or are 
available in a reduced form on Microsoft platforms, then this should be 
documented - might provide subtle hints to upgrade to Linux.



Cheers,
Gavin


Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Josh Berkus
Andrews, Kevin:

Presumably a replica created while all traffic was halted on the master
would be clean, correct?  This bug can only be triggered if there's
heavy write load on the master, right?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-11-20 11:08:33 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
 Something like PQstartSingleUser(dsn) returning an established connection
 seems better to me.

 That just pushes the problem up a level --- how are you going to tell
 psql, pg_dump, or other programs that they should do that?

 An explicit parameter. A program imo explicitly needs to be aware that a
 PQconnect() suddenly starts forking and such. What if it is using
 threads? What if it has it's own SIGCHLD handler for other business it's
 doing?

Hm.  That's a fair point.  I don't especially buy your other argument
about additional connections --- if the program tries such, they'll
just fail, which can hardly be said to be unexpected.  But it's reasonable
to worry that programs might need to be aware that they now have a child
process.  (It occurs to me that we'll need to provide a way to get the
PID of the child, too.)
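
To make the shape concrete, usage might look like this (a sketch;
PQstartSingleUser() is hypothetical, while PQbackendPID() already exists
and could be reused to report the child):

    /* hypothetical API shape, per this thread */
    PGconn     *conn = PQstartSingleUser("dbname=postgres");

    if (PQstatus(conn) != CONNECTION_OK)
        exit(1);

    /* the forked standalone backend's PID, for signalling and waitpid() */
    pid_t       child = (pid_t) PQbackendPID(conn);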

regards, tom lane




Re: [HACKERS] WITH ORDINALITY versus column definition lists

2013-11-20 Thread David Johnston
Tom Lane-2 wrote
 It seems to me that we don't really want this behavior of the coldeflist
 not including the ordinality column.  It's operating as designed, maybe,
 but it's unexpected and confusing.  We could either
 
 1. Reinsert HEAD's prohibition against directly combining WITH ORDINALITY
 with a coldeflist (with a better error message and a HINT suggesting that
 you can get what you want via the TABLE syntax).
 
 2. Change the parser so that the coldeflist is considered to include the
 ordinality column, for consistency with the bare-alias case.  We'd
 therefore insist that the last coldeflist item be declared as int8, and
 then probably have to strip it out internally.

#2, but I am hoping to be able to make the definition of the column optional.
One possibility is that if you do want to provide an alias, you have to make
it clear that the coldeflist item in question is only valid as a WITH
ORDINALITY column alias.  Otherwise the entire coldeflist is used to alias
the record-type output and the ordinality column is given its default
name.

Two options I came up with:

1) disallow any type specifier on the last item:  t(f1 int, f2 text, o1)
2) add a new pseudo-type, ord:  t(f1 int, f2 text, o1 ord)

I really like option #2.  It makes it perfectly clear, entirely within the
coldeflist SQL, that the last column is different and, in this case, optional:
optional both in the sense of providing an alias and in that the user can
drop the whole ordinality aspect of the call as well.  The system does not
need to be told, by the user, the actual type of the ordinality column.  And
given that most people would think to use int or bigint before using int8,
usability is improved: once they need to alias the ordinality column, they
learn that they do so with the ord type, which would internally resolve to
the necessary output type.

Option one is somewhat simpler, but option two's slight added verbosity makes
reading the coldeflist easier, IMO: you are already scanning name-type pairs,
and recognizing a missing type is, for me, harder than reading off ord and
recalling its meaning.

David J.









Re: [HACKERS] additional json functionality

2013-11-20 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 On 11/18/2013 06:13 AM, Peter Eisentraut wrote:
 We could do something like SQL/XML and specify the level of validity
 in a typmod, e.g., json(loose), json(strict), etc.

 Three things:

 1) How would this work in the face of functions that erase typemod
 information?

You'd have to make the data self-identifying (which I think was the plan
already), and ensure that *every* function taking json could cope with
both formats on input.  The typmod could only be expected to be enforced
when storing or explicitly casting to one subformat, much like operations
on numeric pay little attention to the original precision/scale if any.
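
The numeric behavior referred to here is easy to demonstrate:

    SELECT 12.345::numeric(10,2);       -- 12.35: the typmod is enforced by the cast
    SELECT 12.345::numeric(10,2) + 0;   -- still 12.35, but the result is plain
                                        -- numeric; the declared scale is gone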

I agree that this solution isn't terribly workable, mainly because it'd
break any third-party C functions that take json today.

regards, tom lane




Re: [HACKERS] Add CREATE support to event triggers

2013-11-20 Thread Christopher Browne
On Fri, Nov 8, 2013 at 10:33 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Hello,

 Attached you can find a very-much-WIP patch to add CREATE info support
 for event triggers (normalized commands).  This patch builds mainly on
 two things:

 1. Dimitri's DDL rewrite patch he submitted way back, in
http://www.postgresql.org/message-id/m2zk1j9c44@2ndquadrant.fr

 I borrowed the whole ddl_rewrite.c code, and tweaked it a bit.  There
 are several things still wrong with it and which will need to be fixed
 before a final patch can even be contemplated; but there are some
 questions that require a consensus answer before I go and fix it all,
 because what it will look like will depend on said answers.

I have tried this out; the patch applies fine.

Note that it induces (modulo my environment) a failure in make check.

The opr_sanity test fails.

postgres@cbbrowne ~/p/s/t/regress $ diff expected/opr_sanity.out results/opr_sanity.out
348,350c348,351
<  oid | proname 
< -----+---------
< (0 rows)
---
>  oid  |                  proname                  
> ------+-------------------------------------------
>  3567 | pg_event_trigger_get_normalized_commands
> (1 row)

That's a minor problem; the trouble there is that the new function is not
yet documented.  Not a concern at this stage.

 2. The ideas we used to build DROP support.  Mainly, the interesting
thing here is the fact that we use a SRF to report, at
ddl_command_end, all the objects that were created during execution
of that command.  We do this by collecting them in a list in some raw
form somewhere during ProcessUtility, and then spitting them out if
the SRF is called.  I think the general idea is sound, although of
course I admit there might be bugs in the implementation.

 Note this patch doesn't try to add any kind of ALTER support.  I think
 this is fine in principle, because we agreed that we would attack each
 kind of command separately (divide and conquer and all that); but there
 is a slight problem for some kinds of objects that are represented partly
 as ALTER state during creation; for example creating a table with a
 sequence uses ALTER SEQ/OWNED BY internally at some point.  There might
 be other cases I'm missing, also.  (The REFRESH command is nominally
 also supported.)

I imagine that the things we create in earlier stages may help with later
stages, so it's worth *some* planning so we can hope not to build bits
now that push later enhancements into corners that they can't get out of.

But I'm not disagreeing at all.

 Now about the questions I mentioned above:

 a) It doesn't work to reverse-parse the statement nodes in all cases;
 there are several unfixable bugs if we only do that.  In order to create
 always-correct statements, we need access to the catalogs for the
 created objects.  But if we are doing catalog access, then it seems to
 me that we can do away with the statement parse nodes completely and
 just reconstruct the objects from catalog information.  Shall we go that
 route?

Here's a case where it doesn't work.

testevent@localhost-  create schema foo;
CREATE SCHEMA
testevent@localhost-  create domain foo.bar integer;
CREATE DOMAIN
testevent@localhost-  CREATE OR REPLACE FUNCTION snitch() RETURNS
event_trigger LANGUAGE plpgsql AS $$
testevent$# DECLARE
testevent$# r RECORD;
testevent$# BEGIN
testevent$# FOR r IN SELECT * FROM
pg_event_trigger_get_normalized_commands()
testevent$# LOOP
testevent$# RAISE NOTICE 'object created: id %, statement %',
testevent$# r.identity, r.command;
testevent$# END LOOP;
testevent$# END;
testevent$# $$;
CREATE FUNCTION
testevent@localhost-  CREATE EVENT TRIGGER snitch ON ddl_command_end
EXECUTE PROCEDURE snitch();
CREATE EVENT TRIGGER
testevent@localhost-  set search_path to public, foo;
SET
testevent@localhost-  create table foo.foo2 (acolumn bar);
NOTICE:  object created: id foo.foo2, statement CREATE TABLE foo.foo2
(acolumn bar)
CREATE TABLE

The trouble is that you have only normalized the table name.  The
domain, bar, needs its name normalized as well.

 b) What's the best design of the SRF output?  This patch proposes two
 columns, object identity and create statement.  Is there use for
 anything else?  Class/object OIDs perhaps, schema OIDs for objects types
 that have it?  I don't see any immediate need to that info, but perhaps
 someone does.

Probably an object type is needed as well, to know if it's a table or
a domain or a sequence or whatever.

I suspect that what will be needed to make it all usable is some sort of
structured form.  That is in keeping with Robert Haas' discomfort with
the normalized form.

My minor gripe is that you haven't normalized enough (e.g. - it should be
CREATE TABLE foo.foo2 (acolumn foo.bar), capturing the normalization of
data types that are referenced).
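
A catalog lookup sidesteps that, because type-name output there is
search_path-aware; for example (a sketch):

    -- format_type() schema-qualifies the type name exactly when it is
    -- not visible on the current search_path:
    SELECT format_type(a.atttypid, a.atttypmod)
    FROM pg_attribute a
    WHERE a.attrelid = 'foo.foo2'::regclass
      AND a.attname = 'acolumn';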

But Robert's quite right that users may want more than just to capture that
literally; they may want to modify it, for instance, 

Re: [HACKERS] nested hstore patch

2013-11-20 Thread David E. Wheeler
On Nov 20, 2013, at 6:19 AM, Peter Eisentraut pete...@gmx.net wrote:

 openjade:hstore.sgml:206:16:E: document type does not allow element 
 VARLISTENTRY here; assuming missing VARIABLELIST start-tag

Thanks, I fixed this one.

David



Re: [HACKERS] Data corruption issues using streaming replication on 9.0.14/9.2.5/9.3.1

2013-11-20 Thread Josh Berkus
On 11/20/2013 10:30 AM, Josh Berkus wrote:
 Andrews, Kevin:

Andres, that is.

 
 Presumably a replica created while all traffic was halted on the master
 would be clean, correct?  This bug can only be triggered if there's
 heavy write load on the master, right?
 

Also, just to verify:

If someone is doing PITR based on a snapshot taken with pg_basebackup,
that will only trip this corruption bug if the user has hot_standby=on
in their config *while restoring*?  Or is it critical if they have
hot_standby=on while backing up?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] additional json functionality

2013-11-20 Thread Andrew Dunstan


On 11/20/2013 01:36 PM, Tom Lane wrote:


You'd have to make the data self-identifying (which I think was the plan
already), and ensure that *every* function taking json could cope with
both formats on input.  The typmod could only be expected to be enforced
when storing or explicitly casting to one subformat, much like operations
on numeric pay little attention to the original precision/scale if any.

I agree that this solution isn't terribly workable, mainly because it'd
break any third-party C functions that take json today.





Yeah, I had come to this conclusion. I don't think we can bolt on 
typmods after the event.


I don't think having a separate jsonb type will be a tragedy.

I'm already planning on overloading the existing json functions and 
operators.



cheers

andrew




[HACKERS] [PATCH] Store Extension Options

2013-11-20 Thread Fabrízio de Royes Mello
Hi all,

The main goal of this patch is to give users the capability to store options
(on relations and attributes) related to extensions, by using a fixed prefix
called 'ext' in the option name. This can be useful for replication
solutions.

So, with this patch we can do that:

ALTER TABLE foo
   SET (ext.somext.do_replicate=true);

Here 'ext' is the fixed prefix, 'somext' is the extension name,
'do_replicate' is the extension option and 'true' is the value.

We can also use this form to define storage options for indexes and
per-attribute options.


Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
diff --git a/doc/src/sgml/ref/alter_index.sgml b/doc/src/sgml/ref/alter_index.sgml
index d210077..bf4e196 100644
--- a/doc/src/sgml/ref/alter_index.sgml
+++ b/doc/src/sgml/ref/alter_index.sgml
@@ -82,6 +82,15 @@ ALTER INDEX [ IF EXISTS ] <replaceable class="PARAMETER">name</replaceable> RESE
   <xref linkend="SQL-REINDEX">
   to get the desired effects.
  </para>
+ <note>
+   <para>
+ A special prefix called '<replaceable class="PARAMETER">ext.</>' can be
+ used to define storage parameters. The storage parameters with this prefix
+ will be used by extensions. Storage option nomenclature: ext.name.option=value
+ (ext=fixed prefix, name=extension name, option=option name and value=option value).
+ See the example below.
+   </para>
+ </note>
 </listitem>
</varlistentry>
 
@@ -202,6 +211,17 @@ ALTER INDEX distributors SET (fillfactor = 75);
 REINDEX INDEX distributors;
 </programlisting></para>
 
+  <para>
+   To set a storage parameter to be used by extensions:
+<programlisting>
+ALTER INDEX distributors
+  SET (ext.somext.do_replicate=true);
+</programlisting>
+   (ext=fixed prefix, somext=extension name, do_replicate=option name and
+   true=option value)
+</para>
+
+
 </refsect1>
 
 <refsect1>
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index 89649a2..4756a58 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -213,6 +213,17 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="PARAMETER">name</replaceable>
   of statistics by the <productname>PostgreSQL</productname> query
   planner, refer to <xref linkend="planner-stats">.
  </para>
+
+ <note>
+  <para>
+   A special prefix called '<replaceable class="PARAMETER">ext.</>' can be used to
+   define per-attribute options. The attribute options with this prefix will be
+   used by extensions. The attribute option nomenclature: ext.name.option=value
+   (ext=fixed prefix, name=extension name, option=option name and value=option value).
+   See the example below.
+  </para>
+ </note>
+
 </listitem>
</varlistentry>
 
@@ -476,6 +487,11 @@ ALTER TABLE [ IF EXISTS ] <replaceable class="PARAMETER">name</replaceable>
    <command>ALTER TABLE</> does not treat <literal>OIDS</> as a
    storage parameter.  Instead use the <literal>SET WITH OIDS</>
    and <literal>SET WITHOUT OIDS</> forms to change OID status.
+   A special prefix called '<replaceable class="PARAMETER">ext.</>' can be
+   used to define storage parameters. The storage parameters with this prefix
+   will be used by extensions. Storage option nomenclature: ext.name.option=value
+   (ext=fixed prefix, name=extension name, option=option name and value=option value).
+   See the example below.
   </para>
  </note>
 </listitem>
@@ -1112,6 +1128,26 @@ ALTER TABLE distributors DROP CONSTRAINT distributors_pkey,
 ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx;
 </programlisting></para>
 
+  <para>
+   To set a per-attribute option to be used by extensions:
+<programlisting>
+ALTER TABLE distributors
+  ALTER COLUMN dist_id SET (ext.somext.do_replicate=true);
+</programlisting>
+   (ext=fixed prefix, somext=extension name, do_replicate=option name and
+   true=option value)
+</para>
+
+  <para>
+   To set a storage parameter to be used by extensions:
+<programlisting>
+ALTER TABLE distributors
+  SET (ext.somext.do_replicate=true);
+</programlisting>
+   (ext=fixed prefix, somext=extension name, do_replicate=option name and
+   true=option value)
+</para>
+
 </refsect1>
 
 <refsect1>
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index b5fd30a..06c2b3a 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -275,6 +275,8 @@ static void initialize_reloptions(void);
 static void parse_one_reloption(relopt_value *option, char *text_str,
 	int text_len, bool validate);
 
+static bool is_extension_namespace(char *namespace);
+
 /*
  * initialize_reloptions
  *		initialization routine, must be called before parsing
@@ -602,13 +604,15 @@ transformRelOptions(Datum 

Re: [HACKERS] WITH ORDINALITY versus column definition lists

2013-11-20 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 Tom Lane-2 wrote
 It seems to me that we don't really want this behavior of the coldeflist
 not including the ordinality column.  It's operating as designed, maybe,
 but it's unexpected and confusing.  We could either
 
 1. Reinsert HEAD's prohibition against directly combining WITH ORDINALITY
 with a coldeflist (with a better error message and a HINT suggesting that
 you can get what you want via the TABLE syntax).
 
 2. Change the parser so that the coldeflist is considered to include the
 ordinality column, for consistency with the bare-alias case.  We'd
 therefore insist that the last coldeflist item be declared as int8, and
 then probably have to strip it out internally.

 Two options I came up with:

 1) disallow any type specifier on the last item:  t(f1 int, f2 text, o1)
 2) add a new pseudo-type, ord:  t(f1 int, f2 text, o1 ord)

 I really like option #2.

I don't.  Pseudo-types have a whole lot of baggage.  #1 is a mess too.
And in either case, making coldef list items optional increases the number
of ways to make a mistake, if you accidentally omit some other column for
instance.

Basically the problem here is that it's not immediately obvious whether
the coldef list ought to include the ordinality column or not.  The user
would probably guess not (since the system knows what type ordinality
should be).  Unless he's trying to specify a column name for the ordinality
column, in which case he'll realize the syntax forces it to be there.
Any way you slice it, that's going to lead to confusion and bug reports.

The TABLE syntax is really a vastly better solution for this.  So I'm
thinking my #1 is the best answer, assuming we can come up with a good
error message.  My first attempt would be

ERROR: WITH ORDINALITY cannot be used with a column definition list
HINT: Put the function's column definition list inside TABLE() syntax.

Better ideas?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Dynamic Shared Memory stuff

2013-11-20 Thread Robert Haas
On Wed, Nov 20, 2013 at 8:32 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 I'm trying to catch up on all of this dynamic shared memory stuff. A bunch
 of random questions and complaints:

 What kind of usage are we trying to cater with the dynamic shared memory?

Parallel sort, and then parallel other stuff.  Eventually general
parallel query.

I have recently updated https://wiki.postgresql.org/wiki/Parallel_Sort
and you may find that interesting/helpful as a statement of intent.

 How many allocations? What size will they have typically, minimum and
 maximum?

The facility is intended to be general, so the answer could vary
widely by application.  The testing that I have done so far suggests
that for message-passing, relatively small queue sizes (a few kB,
perhaps 1 MB at the outside) should be sufficient.  However,
applications such as parallel sort could require vast amounts of
shared memory.  Consider a machine with 1TB of memory performing a
512GB internal sort.  You're going to need 512GB of shared memory for
that.

 I looked at the message queue implementation you posted, but I
 wonder if that's the use case you're envisioning for this, or if you have
 more things in mind.

I consider that to be the first application of dynamic shared memory
and expect it to be used for (1) returning errors from background
workers to the user backend and (2) funneling tuples from one portion
of a query tree that has been split off to run in a background worker
back to the user backend.  However, I expect that many clients of the
dynamic shared memory system will want to roll their own data
structures.  Parallel internal sort (or external sort) is obviously
one, and in addition to that we might have parallel construction of
in-memory hash tables for a hash join or hash agg, or, well, anything
else you'd like to parallelize.  I expect that many of those case will
result in much larger allocations than what we need just for message
passing.

 * dsm_handle is defined in dsm_impl.h, but it's exposed in the function
 signatures in dsm.h. ISTM it should be moved to dsm.h

Well, dsm_impl.h is the low-level stuff, and dsm.h is intended as the
user API.  Unfortunately, whichever file declares that will have to be
included by the other one, so the separation is not as clean as I
would like, but I thought it made more sense for the high-level stuff
to depend on the low-level stuff rather than the other way around.

 * The DSM API contains functions for resizing the segment. That's not
 exercised by the MQ or TOC facilities. Is that going to stay dead code, or
 do you envision a user for it?

I dunno.  It certainly seems like a useful thing to be able to do - if
we run out of memory, go get some more.  It'd obviously be more useful
if we had a full-fledged allocator for dynamic shared memory, which is
something that I'd like to build but haven't built yet.  However,
after discovering that it doesn't work either on Windows or with
System V shared memory, I'm less sanguine about the chances of finding
good uses for it.  I haven't completely given up hope, but I don't
have anything concrete in mind at the moment.  It'd be a little more
plausible if we adjusted things so that the mmap() implementation
works on Windows.

 * dsm_impl_can_resize() incorrectly returns false for DSM_IMPL_MMAP. The
 mmap() implementation can resize.

Oops, that's a bug.

 * This is an issue I've seen for some time with git master, while working on
 various things. Sometimes, when I kill the server with CTRL-C, I get this in
 the log:

 ^CLOG:  received fast shutdown request
 LOG:  aborting any active transactions
 FATAL:  terminating connection due to administrator command
 LOG:  autovacuum launcher shutting down
 LOG:  shutting down
 LOG:  database system is shut down
 LOG:  could not remove shared memory segment /PostgreSQL.1804289383:
 Tiedostoa tai hakemistoa ei ole

 (that means ENOENT)

 And I just figured out why that happens: If you take a base backup of a
 running system, the pg_dynshmem/state file is included in the backup. If you
 now start up a standby from the backup on the same system, it will clean
 up and reuse the dynshmem segment still used by the master system. Now,
 when you shut down the master, you get that message in the log. If the
 segment was actually used for something, the master would naturally crash.

Ooh.  Well, pg_basebackup can be fixed not to copy that, but there's
still going to be a problem with old-style base backups.  We could try
to figure out some additional sanity check for the dsm code to use, to
determine whether or not it belongs to the same cluster, like storing
the port number or the system identifier or some other value in the
shared memory segment and then comparing it to verify whether we've
got the same one.  Or perhaps we could store the PID of the creating
postmaster in there and check whether that PID is still alive,
although we might get confused if the PID has been recycled.
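
As a sketch of that last idea (an illustration only, not committed code;
POSIX assumed), the liveness test could look like this:

#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>

/* kill(pid, 0) delivers no signal; it only reports whether the process
 * exists.  As noted above, a recycled PID can still fool this check. */
static bool
creating_postmaster_may_be_alive(pid_t creator_pid)
{
    if (kill(creator_pid, 0) == 0)
        return true;            /* process exists */
    return (errno == EPERM);    /* exists, but owned by someone else */
}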

 * As 

Re: [HACKERS] Replication Node Identifiers and crashsafe Apply Progress

2013-11-20 Thread Robert Haas
On Tue, Nov 19, 2013 at 1:20 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-11-19 12:47:29 -0500, Robert Haas wrote:
 On Tue, Nov 19, 2013 at 11:57 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  Agreed. As an alternative we could just have a single - probably longer
  than NAMEDATALEN - string to identify replication progress and rely on
  the users of the facility to build the identifier automatically
  themselves using components that are helpful in their system.

 I tend to feel like a generic identifier would be better.  I'm not
 sure why something like a UUID wouldn't be enough, though.
 Arbitrary-length identifiers will be bad for performance, and 128 bits
 ought to be enough for anyone.

 That's what I had suggested to some people originally and the response
 was, well, somewhat unenthusiastic. It's not that easy to assign them in
 a meaningful automated manner. How do you automatically assign a pg
 cluster an id?

/dev/urandom
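
A minimal sketch of that suggestion, assuming a POSIX system and that 128
random bits are sufficient as the identifier:

#include <stdint.h>
#include <stdio.h>

/* Draw a 128-bit node identifier from the kernel's RNG.  A real
 * implementation would also persist the value, e.g. in the control file. */
static int
generate_node_identifier(uint8_t id[16])
{
    FILE       *f = fopen("/dev/urandom", "rb");
    int         ok;

    if (f == NULL)
        return -1;
    ok = (fread(id, 1, 16, f) == 16) ? 0 : -1;
    fclose(f);
    return ok;
}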

 But yes, maybe the answer is to balk on that part, let the users figure
 out what's best, and then only later implement more policy based on that
 experience.

 WRT performance: I agree that fixed-width identifiers are more
 performant, that's why I went for them, but I am not sure it's that
 important. The performance sensitive parts should all be done using the
 internal id the identifier maps to, not the public one.

But I thought the internal identifier was exactly what we're creating.

I think we should also take note of Steve Singer's comments.  Perhaps
this entire endeavor is premature.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-11-20 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk wrote:
 The spec syntax for table function calls, <table function derived table>
 in <table reference>, looks like TABLE(func(args...)) AS ...

 This patch implements that, plus an extension: it allows multiple
 functions, TABLE(func1(...), func2(...), func3(...)) [WITH ORDINALITY]
 and defines this as meaning that the functions are to be evaluated in
 parallel.

I went back and looked at the spec, and so far as I can tell, the claim
that this is "spec syntax plus an extension" is a falsehood.  What
I read in SQL:2008 7.6 <table reference> is

<table function derived table> ::=
  TABLE <left paren> <collection value expression> <right paren>

where <collection value expression> is elsewhere defined to be an
expression returning an array or multiset value, and then syntax rule 2
says:

* the <collection value expression> shall be a <routine invocation>

* this construct is equivalent to UNNEST ( <collection value expression> )

So unless I'm misreading it, the spec's idea is that you could write

   SELECT ... FROM TABLE( function_returning_array(...) )

and this would result in producing the array elements as a table column.
There is nothing in there about a function returning set.  You could argue
that that leaves us with the freedom to define what the construct does
for functions returning set --- but as this patch stands, if a function
doesn't return set but does return an array, the behavior will not be what
the spec plainly demands.

I do like the basic concept of this syntax, but I think it's a serious
error to appropriate the TABLE() spelling for something that doesn't
agree with the spec's semantics for that spelling.  We need to spell it
some other way.

I've not experimented to see what's practical in bison, but a couple
of ideas that come to mind are:

1. Use FUNCTION instead of TABLE.

2. Don't use any keyword, just parens.  Right now you get a syntax error
from that:

regression=# select * from (foo(), bar()) s;
ERROR:  syntax error at or near ","
LINE 1: select * from (foo(), bar()) s;
                            ^

which implies that it's syntax space we could commandeer.  On the other
hand, I'm a bit worried about the future-proof-ness of such a choice.
It's uncomfortably close to one of the ways to write a row expression,
so it's not too hard to foresee the SQL committee someday defining
something like this in FROM clauses.  It's also hard to see what you'd
call the construct in documentation or error messages --- no keyword means
no easy name to apply.

Thoughts, other ideas?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Peter Eisentraut
On 11/20/13, 11:31 AM, Stephen Frost wrote:
 Couldn't that be an issue for people who have multiple major versions of
 binaries installed?  In particular, the default on the system for psql
 might be 9.3 while the cluster you're trying to recover may be 9.2.  Of
 course, in that case you might say to use the 9.2 psql, which would be
 fair, but what if you're looking to get the data out of the 9.2 DB and
 into the 9.3?  In that case, we'd recommend using the 9.3 pg_dump.

Right.  And also, in emergency situations you might have a custom built
postgres binary lying around in a separate path that includes a patch
from a mailing list you're supposed to test or something.  Best not to
make that even more difficult.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GIN improvements part2: fast scan

2013-11-20 Thread Alexander Korotkov
On Wed, Nov 20, 2013 at 3:06 AM, Alexander Korotkov aekorot...@gmail.com wrote:

 On Fri, Nov 15, 2013 at 11:19 AM, Alexander Korotkov aekorot...@gmail.com
  wrote:

 On Fri, Nov 15, 2013 at 12:34 AM, Heikki Linnakangas 
 hlinnakan...@vmware.com wrote:

 On 14.11.2013 19:26, Alexander Korotkov wrote:

 On Sun, Jun 30, 2013 at 3:00 PM, Heikki Linnakangas 
 hlinnakan...@vmware.com

 wrote:


  On 28.06.2013 22:31, Alexander Korotkov wrote:

  Now, I got the point of three state consistent: we can keep only one
 consistent in opclasses that support new interface. exact true and
 exact
 false values will be passed in the case of current patch consistent;
 exact
 false and unknown will be passed in the case of current patch
 preConsistent. That's reasonable.


 I'm going to mark this as returned with feedback. For the next
 version,
 I'd like to see the API changed per above. Also, I'd like us to do
 something about the tidbitmap overhead, as a separate patch before
 this, so
 that we can assess the actual benefit of this patch. And a new test
 case
 that demonstrates the I/O benefits.



 A revised version of the patch is attached.
 The changes are:
 1) Patch rebased against packed posting lists; it no longer depends on
 additional information.
 2) New API with tri-state logic is introduced.


 Thanks! A couple of thoughts after a 5-minute glance:

 * documentation


 Will provide documented version this week.


 * How about defining the tri-state consistent function to also return a
 tri-state? True would mean that the tuple definitely matches, false means
 the tuple definitely does not match, and Unknown means it might match. Or
 does return value true with recheck==true have the same effect? If I
 understood the patch right, returning Unknown or True wouldn't actually
 make any difference, but it's conceivable that we might come up with more
 optimizations in the future that could take advantage of that. For example,
 for a query like "foo OR (bar AND baz)", you could immediately return any
 tuples that match "foo", and not bother scanning for "bar" and "baz" at all.


 The meaning of the recheck flag when the input contains Unknown is
 undefined now. :)
 For instance, we could define it in the following ways:
 1) Like returning Unknown: consistent, with true or false substituted for
 the input Unknown, could return either true or false.
 2) Consistent, with true or false substituted for the input Unknown,
 could return recheck. This meaning is probably logical, but I don't see
 any usage of it.

 I'm not against the idea of a tri-state return value for consistent,
 because it's a logical continuation of its tri-state input. However, I
 don't see a use for distinguishing True and Unknown in the return value
 for now :)

 In the example you give we can return foo immediately, but we have to
 create a full bitmap. So we will have to scan (bar AND baz) anyway. We
 could skip parts of the trees for bar and baz. But that's possible only
 when foo contains a large amount of sequential TIDs, so we can be sure
 that we didn't miss any TIDs. This seems a very narrow use-case to me.

 Another point is that one day we could probably return tuples immediately
 in gingettuple. And with a LIMIT clause and no sorting we could then stop
 searching for further tuples. However, gingettuple was removed for
 concurrency reasons. And my patches for index-based ordering didn't bring
 it back in its previous form: they collect all the results and then
 return them one-by-one.


 I'm trying to make fastscan work with GinFuzzySearchLimit. In doing so I
 figured out that I don't understand how GinFuzzySearchLimit works. Why,
 with GinFuzzySearchLimit, can startScan return without doing startScanKey?
 Is it a bug?


A revised version of the patch is attached. The changes are:
1) Support for GinFuzzySearchLimit.
2) Some documentation.
The question about GinFuzzySearchLimit is still relevant.
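
For illustration, a minimal sketch of the tri-state consistent idea under
discussion, for an opclass with plain AND semantics (the enum name and
values here are assumptions made for the example, not the final API):

/* Assumed ternary values for the sketch. */
typedef enum
{
    GIN_FALSE,                  /* key is definitely not present */
    GIN_TRUE,                   /* key is definitely present */
    GIN_MAYBE                   /* unknown, e.g. entry not yet fetched */
} GinTernaryValue;

/* Ternary AND over the per-key inputs: one definite miss decides the
 * result; any remaining uncertainty propagates as GIN_MAYBE. */
static GinTernaryValue
and_consistent(const GinTernaryValue check[], int nkeys)
{
    GinTernaryValue result = GIN_TRUE;
    int         i;

    for (i = 0; i < nkeys; i++)
    {
        if (check[i] == GIN_FALSE)
            return GIN_FALSE;
        if (check[i] == GIN_MAYBE)
            result = GIN_MAYBE;
    }
    return result;
}

Called with exact true/false inputs this behaves like the current
consistent; called with GIN_MAYBE for unfetched keys it subsumes
preConsistent, which is the point of merging the two.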

--
With best regards,
Alexander Korotkov.


gin-fast-scan.8.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Peter Eisentraut
On 11/20/13, 10:48 AM, Tom Lane wrote:
 Perhaps more to the point, I think this approach actually breaks one of
 the principal good-thing-in-emergencies attributes of standalone mode,
 namely being sure that nobody but you can connect.  With this, you're
 right back to having a race condition as to whether your psql command
 gets to the socket before somebody else.

I don't disagree, except maybe about the relative gravity of the various
competing concerns.  But I want to see if we can split the proposed
patch into smaller, more acceptable parts.

There is elegance in being able to start a standalone backend from libpq
connection parameters.  But there are also security concerns and some
general concerns about promoting an embedded database mode.

If we allow single-user backends to speak protocol over sockets, then we
have at least solved the problem of being able to use standard tools in
emergency mode.  And I don't think it precludes adding some of the other
functionality later.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Robert Haas
On Wed, Nov 20, 2013 at 10:13 AM, Peter Eisentraut pete...@gmx.net wrote:
 On 11/14/13, 1:41 AM, Amit Kapila wrote:
Security Concern
-
If a user can specify libpq  connection options, he can now execute
 any file he wants by passing it as standalone_backend.

Method to resolve Security concern

If an application wants to allow these connection parameters to be
 used, it would need to do PQenableStartServer() first. If it doesn't,
 those connection parameters will be rejected.

 I don't think this really helps.  You can't tell with reasonable effort
 or certainty whether a given program is calling PQenableStartServer(),
 so you cannot audit this from the outside.  Also, someone could,
 depending on circumstances, dynamically load a module that calls
 PQenableStartServer(), thus circumventing this check.

What?!  The complaint is that somebody who only has access to set
connection parameters could cause a server to be started.  There's a
tremendous gulf between "I can set the connection string" and "I can
set LD_PRELOAD".  If you can set LD_PRELOAD to a value of your choice,
I'm pretty sure you can do things that are far more entertaining than
calling a hypothetical PQenableStartServer() function.

 And maybe before
 long someone will patch up all drivers to call PQenableStartServer()
 automatically, because why shouldn't I be able to run a standalone
 backend from PHP or Ruby?  Also, at some point at least, something like
 phpPgAdmin called pg_dump internally, so you could imagine that in
 situations like that, assuming that pg_dump called
 PQenableStartServer(), with a little bit of craftiness, you could expose
 the execute-any-file hole through a web server.

The point is that client applications should expose whether or not to
set this function as a command-line switch separate from whatever they
accept in terms of connection strings.  So pg_dump should have a flag
called --standalone-server or something like that, and it should call
PQenableStartServer() only when that flag is used.  So if the user has a
shell script that invokes "pg_dump -d $1", the user cannot contrive to
start a server.  If they write the script as "pg_dump --standalone-server
-d $1", then they can, but by putting that option in there you pretty
much bought the farm.  Any program that calls that function
unconditionally while at the same time accepting untrusted user input
will be insecure, but chmod -R u+s /bin is insecure, too.  That's why
we don't do that.
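
To make that gating concrete, a minimal sketch; PQenableStartServer() is
stubbed out here precisely because it is still hypothetical:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stub for the hypothetical libpq call discussed in this thread. */
static void
PQenableStartServer(void)
{
    puts("standalone-server connections enabled");
}

int
main(int argc, char **argv)
{
    bool        standalone = false;
    int         i;

    for (i = 1; i < argc; i++)
        if (strcmp(argv[i], "--standalone-server") == 0)
            standalone = true;

    /* The gate: only an explicit command-line switch, never anything in
     * a user-supplied connection string, can arm server-starting mode. */
    if (standalone)
        PQenableStartServer();

    /* ... then hand the untrusted conninfo to PQconnectdb() ... */
    return 0;
}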

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Peter Eisentraut
On 11/20/13, 3:24 PM, Robert Haas wrote:
 The point is that client applications should expose whether or not to
 set this function as a command-line switch separate from whatever they
 accept in terms of connection strings.  So pg_dump should have a flag
 called --standalone-server or something like that, and it should call
 PQenableStartServer() only when that flag is used.

The argument elsewhere in this thread was that the reason for putting
this in the connection options was so that you do *not* have to patch up
every client to be able to use this functionality.  If you have to add
separate options everywhere, then you might as well just have a separate
libpq function to initiate the session.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Easily reading debug_print_plan

2013-11-20 Thread Robert Haas
On Wed, Nov 20, 2013 at 9:36 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 We could in principle change to a different text representation for
 stored rules.  Compactness would be an issue if it were materially
 bigger than the existing formatting, but offhand it seems like JSON
 is morally equivalent to what we do now, no?

Yeah, but it gains a little.

{FROB :zot 3}
would become something like
{"type": "FROB", "zot": 3}

You could minimize the damage by using a single-character name, like
an underscore, for the node type, and omitting all whitespace:

{"_":"FROB","zot":3}

...but it's still more.  Possibly not enough to matter, but more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Robert Haas
On Wed, Nov 20, 2013 at 3:32 PM, Peter Eisentraut pete...@gmx.net wrote:
 On 11/20/13, 3:24 PM, Robert Haas wrote:
 The point is that client applications should expose whether or not to
 set this function as a command-line switch separate from whatever they
 accept in terms of connection strings.  So pg_dump should have a flag
 called --standalone-server or something like that, and it should call
 PQenableStartServer() only when that flag is used.

 The argument elsewhere in this thread was that the reason for putting
 this in the connection options was so that you do *not* have to patch up
 every client to be able to use this functionality.  If you have to add
 separate options everywhere, then you might as well just have a separate
 libpq function to initiate the session.

Well, that's fair enough.  I don't care much what the syntax is for
invoking the postmaster this way, as long as it's reasonably
convenient.  I just want there to be one.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WITH ORDINALITY versus column definition lists

2013-11-20 Thread David Johnston
Tom Lane-2 wrote
 David Johnston pol...@yahoo.com writes:
 Tom Lane-2 wrote
 It seems to me that we don't really want this behavior of the coldeflist
 not including the ordinality column.  It's operating as designed, maybe,
 but it's unexpected and confusing.  We could either
 
 1. Reinsert HEAD's prohibition against directly combining WITH
 ORDINALITY
 with a coldeflist (with a better error message and a HINT suggesting
 that
 you can get what you want via the TABLE syntax).
 
 2. Change the parser so that the coldeflist is considered to include the
 ordinality column, for consistency with the bare-alias case.  We'd
 therefore insist that the last coldeflist item be declared as int8, and
 then probably have to strip it out internally.
 
 Two options I came up with:
 
 1) disallow any type specifier on the last item:  t(f1 int, f2 text, o1)
 2) add a new pseudo-type, ord:  t(f1 int, f2 text, o1 ord)
 
 I really like option #2.
 
 I don't.  Pseudo-types have a whole lot of baggage.  #1 is a mess too.
 And in either case, making coldef list items optional increases the number
 of ways to make a mistake, if you accidentally omit some other column for
 instance.

I'll have to trust the baggage/mess conclusion, but if you can distinctly
and unambiguously identify the coldeflist item that is to be used for
ordinality column aliasing, then the mistakes related to the
function-record-coldeflist are the same as now.  There may be more ways
(though still quite few, I would think) for the user to make a mistake,
but the syntax ones are handled anyway, and so if the others can be
handled reasonably well the UI for the feature becomes more friendly.

IOW, instead of adding int8 and ignoring it we poll the last item,
conditionally discard it (like the int8 case), then handle the possibly
modified structure as planned.


 Basically the problem here is that it's not immediately obvious whether
 the coldef list ought to include the ordinality column or not.  The user
 would probably guess not (since the system knows what type ordinality
 should be).  

Yes, if the column is not made optional somehow then I dislike option #2


 The TABLE syntax is really a vastly better solution for this.  So I'm
 thinking my #1 is the best answer, assuming we can come up with a good
 error message.  My first attempt would be
 
 ERROR: WITH ORDINALITY cannot be used with a column definition list
 HINT: Put the function's column definition list inside TABLE() syntax.
 
 Better ideas?

Works for me if #1 is implemented.



Just to clarify, we are still allowing simple aliasing:

select * from generate_series(1,2) with ordinality as t(f1,f2);

It's only when the output of the function is record that the restriction of
placing the record-returning function call into TABLE() (if you want
ordinals) comes into play.


select * from table(array_to_set(array['one', 'two']) as (f1 int,f2 text))
with ordinality as t(a1,a2,a3); 

If we could do away with having to re-specify the record aliases in the
outer layer (a1, a2) then I'd be more understanding, but I'm thinking that
is not possible unless you force a single-column alias definition attached
to WITH ORDINALITY to mean "alias the ordinality column only".


On the plus side: anyone using record-returning functions is already dealing
with considerable verbosity, so this extra bit doesn't seem to add that much
overhead; and since the alias - t(a1,a2,a3) - is optional if you don't care
about aliasing the ordinality column, the default case is not that verbose
(just add the surrounding TABLE()).

I feel like I need a flow-chart for #1...

With #2 (w/ optional) you can add in an alias for the ordinality column
anyplace you would be specifying a coldeflist OR alias list.  Favoring the
pseudo-type solution is the fact that, given the prior sentence, if you
place "o1 ord" in the wrong place it is possible to generate an error like
"with ordinality not present for aliasing".

#1 is simpler to implement and does not preclude #2 in the future.

Possible #3?

Not sure if this is possible at this point but really the alias for the
ordinality column would be attached directly to the ordinality keyword.

e.g., ...) with ordinality{alias} as t(a1, a2)

David J.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/WITH-ORDINALITY-versus-column-definition-lists-tp5779443p5779468.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proof of concept: standalone backend with full FE/BE protocol

2013-11-20 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 The argument elsewhere in this thread was that the reason for putting
 this in the connection options was so that you do *not* have to patch up
 every client to be able to use this functionality.  If you have to add
 separate options everywhere, then you might as well just have a separate
 libpq function to initiate the session.

Right, Andres was saying that we had to do both (special switches that
lead to calling a special connection function).  I'm not terribly happy
about that, because it will greatly constrain the set of programs that are
able to connect to standalone backends --- but I think that there are some
in this discussion who want that, anyway.  In practice, as long as psql
and pg_dump and pg_upgrade can do it, I think we've covered most of the
interesting bases.

To my mind, the "create a socket and hope nobody else can get to it"
approach is exactly one of the main things we're trying to avoid here.
If you'll recall, awhile back we had a big discussion about how pg_upgrade
could positively guarantee that nobody messed with the source database
while it was working, and we still don't have a bulletproof guarantee
there.  I would like to fix that by making pg_upgrade use only standalone
backends to talk to the source database, never starting a real postmaster
at all.  But if the standalone-pg_dump mode goes through a socket, we're
back to square one on that concern.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


  1   2   >