Re: [HACKERS] row_to_json bug with index only scans: empty keys!

2014-11-10 Thread Ross Reedstrom
Yes, it's late beta, but especially if we're pushing json/jsonb usage
as a major feature of this release, I'd say remove surprising cases
now. That's what beta is for: the last chance for bug fixes, before we
live w/ it for the support life of the release.

The affected class are people with an app that already uses json,
so 9.2 or better, have run acceptance tests against an early beta of
9.4, and have SQL that returns made-up field names instead of the appropriate
aliases? Which they were probably annoyed at when they wrote the code that
consumes that output in the first place. I think an item in the list of
compatibility changes/gotchas should catch anyone who is on the ball
enough to be testing the betas anyway. Anyone pushing that envelope is going to
do the same acceptance testing against the actual release before rolling
to production.

Ross


On Mon, Nov 10, 2014 at 10:11:25AM -0500, Tom Lane wrote:
 I wrote:
  Attached are patches meant for HEAD and 9.2-9.4 respectively.
 
 BTW, has anyone got an opinion about whether to stick the full fix into
 9.4?  The argument for, of course, is that we'd get the full fix out to
 the public a year sooner.  The argument against is that someone might
 have already done compatibility testing of their application against
 9.4 betas, and then get blindsided if we change this before release.
 
 I'm inclined to think that since we expect json/jsonb usage to really
 take off with 9.4, it might be better if we get row_to_json behaving
 unsurprisingly now.  But I'm not dead set on it.
 
   regards, tom lane
 

-- 
Ross Reedstrom, Ph.D.                          reeds...@rice.edu
Systems Engineer & Admin, Research Scientist   phone: 713-348-6166
Connexions          http://cnx.org             fax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] row_to_json bug with index only scans: empty keys!

2014-11-08 Thread Ross Reedstrom
I've no opinion on the not-respecting-aliases aspect of this, but both
the hstore and json empty-keys cases break the format: duplicate keys
collapse to a single value, and the expected key names go missing.

The insidious bit about this bug, though, is that it works fine during testing
with a developer's typically small data sets. It's only triggered in my case
when the plan switches to index-only. Even a plain index scan works fine. I can't
imagine that there is code out there that _depends_ on this behavior. Just as
likely, to me, is that there exist systems with can't-reproduce bugs
that would be fixed by this.

Ross


On Sat, Nov 08, 2014 at 01:09:23PM -0500, Tom Lane wrote:
 Oh, some more fun: a RowExpr that's labeled with a named composite type
 (rather than RECORD) has the same issue of not respecting aliases.
 Continuing previous example with table foo:
 
 regression=# create view vv as select * from foo;
 CREATE VIEW
 regression=# select row_to_json(q) from vv q;
     row_to_json   
 -----------------
  {"f1":1,"f2":2}
 (1 row)
 
 regression=# select row_to_json(q) from vv q(a,b);
     row_to_json   
 -----------------
  {"f1":1,"f2":2}
 (1 row)
 
 So that's another case we probably want to change in HEAD but not the back
 branches.
 
   regards, tom lane
 



[HACKERS] row_to_json bug with index only scans: empty keys!

2014-11-07 Thread Ross Reedstrom
This is a serious bug in 9.3.5 and 9.4 beta3:

row_to_json() yields empty strings for json keys if the data is
fulfilled by an index only scan.

Example:

testjson=# select count(*) from document_acl;
 count 
-------
   426
(1 row)

testjson=# SELECT row_to_json(combined_rows) FROM (
SELECT uuid, user_id AS uid, permission
FROM document_acl_text AS acl
WHERE uuid = '8f774048-8936-4d7f-aa38-1974c91bbef2'
ORDER BY user_id ASC, permission ASC
) as combined_rows;
                             row_to_json                             
---------------------------------------------------------------------
 {"":"8f774048-8936-4d7f-aa38-1974c91bbef2","":"admin","":"publish"}
(1 row)

testjson=# explain SELECT row_to_json(combined_rows) FROM (
SELECT uuid, user_id AS uid, permission
FROM document_acl_text AS acl
WHERE uuid = '8f774048-8936-4d7f-aa38-1974c91bbef2'
ORDER BY user_id ASC, permission ASC
) as combined_rows;
                            QUERY PLAN                             
-------------------------------------------------------------------
 Subquery Scan on combined_rows  (cost=0.27..8.30 rows=1 width=76)
   ->  Index Only Scan using document_acl_text_pkey on document_acl_text acl  (cost=0.27..8.29 rows=1 width=52)
         Index Cond: (uuid = '8f774048-8936-4d7f-aa38-1974c91bbef2'::text)
 Planning time: 0.093 ms
(4 rows)

# set enable_indexonlyscan to off;
SET
testjson=# SELECT row_to_json(combined_rows) FROM (
SELECT uuid, user_id AS uid, permission
FROM document_acl_text AS acl
WHERE uuid = '8f774048-8936-4d7f-aa38-1974c91bbef2'
ORDER BY user_id ASC, permission ASC
) as combined_rows;
                                       row_to_json                                        
------------------------------------------------------------------------------------------
 {"uuid":"8f774048-8936-4d7f-aa38-1974c91bbef2","user_id":"admin","permission":"publish"}
(1 row)

tjson=# explain SELECT row_to_json(combined_rows) FROM (
SELECT uuid, user_id AS uid, permission
FROM document_acl_text AS acl
WHERE uuid = '8f774048-8936-4d7f-aa38-1974c91bbef2'
ORDER BY user_id ASC, permission ASC
) as combined_rows;
                          QUERY PLAN                           
---------------------------------------------------------------
 Subquery Scan on combined_rows  (cost=0.27..8.30 rows=1 width=76)
   ->  Index Scan using document_acl_text_pkey on document_acl_text acl  (cost=0.27..8.29 rows=1 width=52)
         Index Cond: (uuid = '8f774048-8936-4d7f-aa38-1974c91bbef2'::text)
 Planning time: 0.095 ms
(4 rows)

We have a table defined as so:

CREATE TYPE permission_type AS ENUM (
   'publish'
);

create table document_acl (
   uuid UUID,
   user_id TEXT,
   permission permission_type NOT NULL,
   PRIMARY KEY (uuid, user_id, permission),
   FOREIGN KEY (uuid) REFERENCES document_controls (uuid)
);

The uuid and enum types make no difference - I've made an all-text version as
well, with the same problem.
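For reference, a sketch of the all-text variant mentioned above. The original
post only shows the uuid/enum version; the column names here are taken from the
queries against document_acl_text, so treat this DDL as an assumption:

```sql
-- Hypothetical all-text twin of document_acl, matching the columns the
-- queries above select from document_acl_text; not from the original post.
CREATE TABLE document_acl_text (
    uuid TEXT,
    user_id TEXT,
    permission TEXT NOT NULL,
    PRIMARY KEY (uuid, user_id, permission)
);
```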

testjson=# select version();
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4beta3 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit
(1 row)

Ross


Re: [HACKERS] Documentation epub format

2013-05-01 Thread Ross Reedstrom
On Wed, May 01, 2013 at 09:33:23AM -0700, Joshua D. Drake wrote:
 
 On 05/01/2013 09:27 AM, Fabien COELHO wrote:
 
 
 Hello devs,
 
 I've given a try to the PostgreSQL documentation in epub format.
 
 I must admit that there is a bit of a disappointment as far as the user
 experience is concerned: the generated file is barely usable on an iPad 2
 with the default iBooks reader, which was clearly not designed for
 handling a 4592-page book (from its point of view).
 
 The table of contents is too detailed, so it is long and slow to scan,
 and there is no clear shortcut. Flipping pages in the documentation
 takes ages (well, close to one second or more if I flip a few pages). Do
 not try search.
 
 This seems to suggest that instead of generating one large ebook, the
 build should generate a set of ebooks, say one for each part. At the
 minimum, a less detailed toc could be more usable and help navigate the
 huge contents.
 
 Once upon a time we had multiple books as documentation, then at
 some point we merged them. It was quite a few years ago.
 
 I would agree at this point that we need to consider breaking them
 up again. The documentation is unwieldy.

In my day job, we're building epubs that encompass entire college textbooks
(Physics, Algebra, etc.). There is in fact an issue w/ attempting to use
existing readers with such large files. There is a bit of a trap you can fall
into, though, in limiting yourself to what the current generation of reading
tools (both software and dedicated devices) can do: newer devices will have
greater capabilities, and can make use of features of the content that only
work well in the context of the whole work. This happens right now when using
the large work on a less-mobile platform (though my new laptop is both more
mobile and more capable than many db servers I've adminned in the past ...)

There are significant costs to splitting the docs up: both the author and the
reader have to agree on where a piece of information should live, and for the
(potentially naive) reader, this can be a problem. Structurally, I think that
since the "one book to bind them" work has been done, there's much better
cross-referencing going on. In fact, that seems to have been the reason for
doing it. If those xrefs can survive splitting into multiple docs, that can
help relieve the newbie-finding problem, though current reading tools may not
support linking across books, putting the burden of finding things back on the
reader.

That said, creating a different format of the docs -- multiple epubs that are
more easily moved around and read on current devices -- has merit, if it
doesn't break the existing all-one-document presentation on the web. In that
sort of case, the multiple parts are a convenience, and have less burden to
carry for searchability and findability than if they are presented as the
primary format for using the material. If the split version is not primary,
automated, less-than-perfect means of splitting (page count?) can be
considered.

Ross


Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2012-08-17 Thread Ross Reedstrom
On Wed, Aug 15, 2012 at 10:26:55PM -0400, Bruce Momjian wrote:
 On Mon, Nov 14, 2011 at 04:19:30PM -0600, Ross Reedstrom wrote:
  On Wed, Sep 28, 2011 at 11:47:51AM -0700, David Fetter wrote:
   On Wed, Sep 28, 2011 at 02:25:44PM -0400, Gurjeet Singh wrote:
On Wed, Sep 28, 2011 at 1:51 PM, Kevin Grittner 
kevin.gritt...@wicourts.gov
 wrote:

 Alvaro Herrera alvhe...@commandprompt.com wrote:

  See ON_ERROR_ROLLBACK
  http://www.postgresql.org/docs/9.0/static/app-psql.html

 I had missed that.  Dang, this database product is rich with nice
 features!  :-)


+1

I would like it to be on/interactive by default, though.
   
   You can have it by putting it in your .psqlrc.
   
   If we were just starting out, I'd be all for changing the defaults,
   but we're not.  We'd break things unnecessarily if we changed this
   default.
   
  
  This discussion died out with a plea for better documentation, and perhaps 
  some
  form of discoverability. I've scanned ahead and see no further discussion.
  However, I'm wondering, what use-cases would be broken by setting the 
  default
  to 'interactive'? Running a non-interactive script by piping it to psql?
  Reading the code, I see that case is covered: the definition of 
  'interactive'
  includes both stdin and stdout are a tty, and the source of commands is 
  stdin.
  Seems this functionality appeared in version 8.1.  Was there discussion re:
  making it the default at that time?  I'm all for backward compatibility, 
  but I'm
  having trouble seeing what would break.
  
  I see that Peter blogged about this from a different angle over a year ago
  (http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html)
  which drew a comment from Tom Lane that perhaps we need a better/different 
  tool
  for running scripts. That would argue the defaults for psql proper should 
  favor
  safe interactive use (autocommit off, anyone?) Peter mentioned the 
  traditional
  method unix shells use to handle this: different config files are read for
  interactive vs. non-interactive startup. Seems we have that, just for the 
  one
  setting ON_ERROR_ROLLBACK.
 
 What documentation improvement are you suggesting?  The docs seem clear
 to me.

Wow, that's a blast from the past: November. I don't think I was asking for
docs changes, just noting that the thread had ended with a plea from others for
docs. I was wondering what supposed breakage would occur by changing the
default psql ON_ERROR_ROLLBACK behavior to 'interactive', since the code guards
that pretty hard, to make sure it's a human at a terminal, not a redirect or
script.
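For anyone who wants the behavior without waiting on a default change, the
documented psql variable can be set per-user today in ~/.psqlrc:

```
\set ON_ERROR_ROLLBACK interactive
```

With that in place, interactive sessions wrap each statement in an implicit
savepoint, while scripts fed via redirects are unaffected.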

Ross


 
 -- 
   Bruce Momjian  br...@momjian.ushttp://momjian.us
   EnterpriseDB http://enterprisedb.com
 
   + It's impossible for everything to be true. +
 
 


Re: [HACKERS] Escaping : in .pgpass - code or docs bug?

2011-12-17 Thread Ross Reedstrom
On Fri, Dec 16, 2011 at 02:55:09PM +, Richard Huxton wrote:
 According to the docs [1], you should escape embedded colons in
 .pgpass (fair enough). Below is PG 9.1.1
 
 user = te:st, db = te:st, password = te:st
 
 $ cat ~/.pgpass
 *:*:te:st:te:st:te:st
 $ psql91 -U te:st -d te:st
 te:st=>
 
 $ cat ~/.pgpass
 *:*:te\:st:te\:st:te:st
 $ psql91 -U te:st -d te:st
 te:st=>
 
 $ cat ~/.pgpass
 *:*:te\:st:te\:st:te\:st
 $ psql91 -U te:st -d te:st
 psql: FATAL:  password authentication failed for user te:st
 password retrieved from file /home/richardh/.pgpass
 
 I'm a bit puzzled how it manages without the escaping in the first
 case. There's a lack of consistency though that either needs
 documenting or fixing.

Hmm, seems the code in fe-connect.c that reads the password out of .pgpass does 
this:

if ((t = pwdfMatchesString(t, hostname)) == NULL ||
(t = pwdfMatchesString(t, port)) == NULL ||
(t = pwdfMatchesString(t, dbname)) == NULL ||
(t = pwdfMatchesString(t, username)) == NULL)
 [...]

pwdfMatchesString() 'eats' the string buffer up to the next unmatched character
or unescaped colon. If it falls out the bottom of that, the rest of the line is
returned as the candidate password.

Since the code that does the backslash detection is in pwdfMatchesString(), and
the password never goes through that function, the escapes are not cleaned up.

This should be fixed either by changing the documentation to say not to escape
colons or backslashes in the password field (only), or by modifying this
function (PasswordFromFile) to silently unescape the password string. It
already copies it.

Perhaps something like (WARNING! untested code, rusty C skills):


*** src/interfaces/libpq/fe-connect.c.orig	2011-12-16 17:44:29.265913914 -0600
--- src/interfaces/libpq/fe-connect.c	2011-12-16 17:46:46.137913871 -0600
***************
*** 4920,4925 ****
--- 4920,4934 ----
  			continue;
  		ret = strdup(t);
  		fclose(fp);
+ 
+ 		/* unescape any residual escaped colons or backslashes */
+ 		t = ret;
+ 		while (t[0])
+ 		{
+ 			if (t[0] == '\\' && (t[1] == ':' || t[1] == '\\'))
+ 				memmove(t, t + 1, strlen(t));	/* overlapping copy, so memmove not strcpy */
+ 			t++;
+ 		}
  		return ret;
  	}
  
This would be backward compatible, in that existing working passwords would
continue to work, unless they happen to contain exactly the string '\:' or
'\\', in which case they'd need to double the backslash.
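To make the proposed semantics concrete, here is a standalone model of the
unescaping rule: in the password field, "\:" collapses to ":" and "\\" to "\",
and everything else passes through. The function name (unescape_password) is
illustrative, not from libpq:

```c
#include <string.h>

/* Collapse "\:" -> ":" and "\\" -> "\" in place; a model of the
 * unescaping step the patch above adds to PasswordFromFile(). */
static void
unescape_password(char *t)
{
	for (; t[0]; t++)
	{
		if (t[0] == '\\' && (t[1] == ':' || t[1] == '\\'))
			memmove(t, t + 1, strlen(t));	/* shift the tail left by one */
	}
}
```

Note that the scan advances past each collapsed character, so a literal
backslash produced from "\\" is never re-interpreted as starting a new escape.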

Ross

 
 
 [1] http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html
 
 -- 
   Richard Huxton
   Archonet Ltd
 


Re: [HACKERS] includeifexists in configuration file

2011-12-12 Thread Ross Reedstrom
On Mon, Dec 12, 2011 at 02:24:53PM -0500, Greg Smith wrote:
 On 11/16/2011 10:19 AM, Robert Haas wrote:
 I haven't read the code yet, but just to get the bikeshedding started,
 I think it might be better to call this include_if_exists rather than
 running it together as one word.
 
 What's going on, it's like this bikeshed just disappeared.  I should
 figure out how that happened so I can replicate it.

Must be that special camo paint.


Ross
Woohoo! Caught up from my beginning of Oct. trip backlog, just in time for 
Christmas!


Re: [HACKERS] Command Triggers

2011-12-05 Thread Ross Reedstrom
On Sat, Dec 03, 2011 at 01:26:22AM +0100, Andres Freund wrote:
 On Saturday, December 03, 2011 01:09:48 AM Alvaro Herrera wrote:
  Excerpts from Andres Freund's message of vie dic 02 19:09:47 -0300 2011:
   Hi all,
   
   There is also the point about how permission checks on the actual
   commands (in comparison of modifying command triggers) and such are
   handled:
   
   BEFORE and INSTEAD will currently be called independently of the fact
   whether the user is actually allowed to do said action (which is
   inconsistent with data triggers) and indepentent of whether the object
   they concern exists.
   
   I wonder if anybody considers that a problem?
  
  Hmm, we currently even have a patch (or is it already committed?) to
  avoid locking objects before we know the user has permission on the
  object.  Getting to the point of calling the trigger would surely be
  even worse.
 Well, calling the trigger won't allow them to lock the object. It doesn't
 even confirm the existence of the table.
 
Didn't I see a discussion in passing about the possibility of using these
command triggers to implement some aspects of sepgsql? In that case, you'd need
the above behavior.

Ross


Re: [HACKERS] proposal: psql concise mode

2011-11-14 Thread Ross Reedstrom
On Mon, Nov 07, 2011 at 11:01:39PM -0500, Josh Kupershmidt wrote:
 On Mon, Nov 7, 2011 at 10:04 PM, Robert Haas robertmh...@gmail.com wrote:
 
   I can also see myself turning it on and then going
  - oh, wait, is that column not there, or did it just disappear because
  I'm in concise mode?
 
 Yeah, that would be a bit of a nuisance in some cases.

Well, that specific problem could be fixed with some format signalling,
such as changing the vertical divider, or perhaps leaving it doubled:

Given your test case:

 test=# \d+ foo
                        Table "public.foo"
  Column |  Type   | Modifiers | Storage | Stats target | Description 
 --------+---------+-----------+---------+--------------+-------------
  a      | integer |           | plain   |              | 
  b      | integer |           | plain   |              | 
 Has OIDs: no
 
Concise output might look like (bikeshed argument: splat indicates
columns squashed out):
 
 test=# \d+ foo
       Table "public.foo"
  Column |  Type   # Storage #
 --------+---------+---------+
  a      | integer # plain   #
  b      | integer # plain   #
 Has OIDs: no

or:

  Column |  Type   || Storage |
 --------+---------++---------+
  a      | integer || plain   |
  b      | integer || plain   |

or even:
 
  Column |  Type   || Storage ||
 --------+---------++---------++
  a      | integer || plain   ||
  b      | integer || plain   ||

Ross


Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2011-11-14 Thread Ross Reedstrom
On Wed, Sep 28, 2011 at 11:47:51AM -0700, David Fetter wrote:
 On Wed, Sep 28, 2011 at 02:25:44PM -0400, Gurjeet Singh wrote:
  On Wed, Sep 28, 2011 at 1:51 PM, Kevin Grittner kevin.gritt...@wicourts.gov
   wrote:
  
   Alvaro Herrera alvhe...@commandprompt.com wrote:
  
See ON_ERROR_ROLLBACK
http://www.postgresql.org/docs/9.0/static/app-psql.html
  
   I had missed that.  Dang, this database product is rich with nice
   features!  :-)
  
  
  +1
  
  I would like it to be on/interactive by default, though.
 
 You can have it by putting it in your .psqlrc.
 
 If we were just starting out, I'd be all for changing the defaults,
 but we're not.  We'd break things unnecessarily if we changed this
 default.
 

This discussion died out with a plea for better documentation, and perhaps some
form of discoverability. I've scanned ahead and see no further discussion.
However, I'm wondering, what use-cases would be broken by setting the default
to 'interactive'? Running a non-interactive script by piping it to psql?
Reading the code, I see that case is covered: the definition of 'interactive'
includes both stdin and stdout are a tty, and the source of commands is stdin.
Seems this functionality appeared in version 8.1.  Was there discussion re:
making it the default at that time?  I'm all for backward compatibility, but I'm
having trouble seeing what would break.

I see that Peter blogged about this from a different angle over a year ago
(http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html)
which drew a comment from Tom Lane that perhaps we need a better/different tool
for running scripts. That would argue the defaults for psql proper should favor
safe interactive use (autocommit off, anyone?) Peter mentioned the traditional
method unix shells use to handle this: different config files are read for
interactive vs. non-interactive startup. Seems we have that, just for the one
setting ON_ERROR_ROLLBACK.

Ross


Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2011-11-14 Thread Ross Reedstrom
On Mon, Nov 14, 2011 at 02:45:04PM -0800, Will Leinweber wrote:
 My coworker Dan suggested that some people copy and paste scripts. However
 I feel that that is an orthogonal problem and if there is a very high rate
 of input psql should detect that and turn interactive off. And I
 still strongly feel that on_error_rollback=interactive should be the
 default.

Hmm, I think that falls under the "don't do that, then" use case. I've been
known to c&p the occasional script - I guess the concern here would be not
seeing failed steps that scrolled off the terminal. (I set my scrollback to
basically infinity and actually use it, but then I'm strange that way :-) )

Trying to autodetect a 'high rate of input' seems ... problematic. The code as
is does handle detecting interactivity at startup, and for the current command.
Switching mid-stream ... catching repeated auto-rollbacks might be a
possibility, then switching the transaction into 'failed' state. That should
catch most of the possible cases where an early set of steps failed, but
scrolled off, so there's no visible error at the end of the paste.
 
 Until then, I've included this as a PSA at the start of any postgres talks
 I've given, because it's simply not widely known.

Good man. (That's a Postgres Service Announcement, then?)

Ross