Re: [PATCHES] Fix oversight in pts_error_callback()

2005-08-09 Thread Tom Lane
Qingqing Zhou [EMAIL PROTECTED] writes:
 Since callback functions are invoked unconditionally in errfinish(),
 pts_error_callback() should not report an invalid type name without first
 checking the current error status.

Please exhibit a case in which you feel this is needed.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PATCHES] PL/pgSQL: #option select_into_1_row (was SELECT INTO

2005-08-09 Thread Matt Miller
On Mon, 2005-08-08 at 17:18 -0400, Tom Lane wrote:
 If you think that this should be a global option instead of a
 per-statement one, something like the (undocumented) #option hack might
 be a good way to specify it; that would give it per-function scope,
 which seems reasonable.
 
   create function myfn(...) returns ... as $$
   #option select_into_1_row
   declare ...
   $$ language plpgsql;

Attached is a patch that implements the #option select_into_1_row
directive as suggested.

Is it time to document this directive?
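For the archives, a hedged usage sketch (the table and column names below are hypothetical, not from the patch): with the directive, a SELECT INTO that matches zero or multiple rows leaves the target variable untouched; without it, zero rows would set the target to NULL.

```sql
create table emp (id int, name text);   -- hypothetical example table

create function get_one_emp(p_id int) returns text as $$
#option select_into_1_row
declare
    v_name text := 'unset';
begin
    -- With the directive: if the query finds 0 or more than 1 row,
    -- v_name keeps its previous value ('unset').
    -- Without it: 0 rows would overwrite v_name with NULL.
    select name into v_name from emp where id = p_id;
    return v_name;
end;
$$ language plpgsql;
```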
Index: src/pl/plpgsql/src/gram.y
===
RCS file: /var/local/pgcvs/pgsql/src/pl/plpgsql/src/gram.y,v
retrieving revision 1.80
diff -c -r1.80 gram.y
*** src/pl/plpgsql/src/gram.y	2 Jul 2005 17:01:59 -	1.80
--- src/pl/plpgsql/src/gram.y	8 Aug 2005 22:53:36 -
***
*** 224,229 
--- 224,230 
  
  %token	O_OPTION
  %token	O_DUMP
+ %token	O_SELECT_INTO_1_ROW
  
  %%
  
***
*** 249,254 
--- 250,259 
  	{
  		plpgsql_DumpExecTree = true;
  	}
+ | O_OPTION O_SELECT_INTO_1_ROW
+ 	{
+ 		plpgsql_SelectInto1Row = true;
+ 	}
  ;
  
  opt_semi		:
Index: src/pl/plpgsql/src/pl_comp.c
===
RCS file: /var/local/pgcvs/pgsql/src/pl/plpgsql/src/pl_comp.c,v
retrieving revision 1.92
diff -c -r1.92 pl_comp.c
*** src/pl/plpgsql/src/pl_comp.c	6 Jul 2005 16:42:10 -	1.92
--- src/pl/plpgsql/src/pl_comp.c	9 Aug 2005 14:32:43 -
***
*** 78,83 
--- 78,84 
  int			plpgsql_error_lineno;
  char	   *plpgsql_error_funcname;
  bool		plpgsql_DumpExecTree = false;
+ bool		plpgsql_SelectInto1Row = false;
  bool		plpgsql_check_syntax = false;
  
  PLpgSQL_function *plpgsql_curr_compile;
***
*** 309,314 
--- 310,316 
  	plpgsql_ns_init();
  	plpgsql_ns_push(NULL);
  	plpgsql_DumpExecTree = false;
+ 	plpgsql_SelectInto1Row = false;
  
  	datums_alloc = 128;
  	plpgsql_nDatums = 0;
Index: src/pl/plpgsql/src/pl_exec.c
===
RCS file: /var/local/pgcvs/pgsql/src/pl/plpgsql/src/pl_exec.c,v
retrieving revision 1.151
diff -c -r1.151 pl_exec.c
*** src/pl/plpgsql/src/pl_exec.c	28 Jul 2005 07:51:13 -	1.151
--- src/pl/plpgsql/src/pl_exec.c	9 Aug 2005 14:49:45 -
***
*** 1649,1665 
  		elog(ERROR, "unsupported target");
  
  	/*
! 	 * Run the query
  	 */
! 	exec_run_select(estate, stmt->query, 1, NULL);
  	tuptab = estate->eval_tuptable;
  	n = estate->eval_processed;
  
  	/*
! 	 * If the query didn't return any rows, set the target to NULL and
! 	 * return.
  	 */
! 	if (n == 0)
  	{
  		exec_move_row(estate, rec, row, NULL, tuptab->tupdesc);
  		exec_eval_cleanup(estate);
--- 1649,1672 
  		elog(ERROR, "unsupported target");
  
  	/*
! 	 * Run the query, bringing back up to 2 rows if necessary
  	 */
! 	exec_run_select(estate, stmt->query, plpgsql_SelectInto1Row ? 2 : 1, NULL);
  	tuptab = estate->eval_tuptable;
  	n = estate->eval_processed;
  
  	/*
! 	 * If the #option select_into_1_row directive was specified, and the query didn't
! 	 * find exactly 1 row, then exit without setting the target.  If this directive was
! 	 * not specified then set the target, either to NULL if no rows were found or to
! 	 * the value of the first row found.
  	 */
! 	if (plpgsql_SelectInto1Row && n != 1)
! 	{
! 		exec_eval_cleanup(estate);
! 		return PLPGSQL_RC_OK;
! 	}
! 	else if (n == 0)
  	{
  		exec_move_row(estate, rec, row, NULL, tuptab->tupdesc);
  		exec_eval_cleanup(estate);
Index: src/pl/plpgsql/src/plpgsql.h
===
RCS file: /var/local/pgcvs/pgsql/src/pl/plpgsql/src/plpgsql.h,v
retrieving revision 1.64
diff -c -r1.64 plpgsql.h
*** src/pl/plpgsql/src/plpgsql.h	22 Jun 2005 01:35:02 -	1.64
--- src/pl/plpgsql/src/plpgsql.h	8 Aug 2005 22:53:36 -
***
*** 644,649 
--- 644,650 
   **/
  
  extern bool	plpgsql_DumpExecTree;
+ extern bool	plpgsql_SelectInto1Row;
  extern bool	plpgsql_SpaceScanned;
  extern int	plpgsql_nDatums;
  extern PLpgSQL_datum **plpgsql_Datums;
Index: src/pl/plpgsql/src/scan.l
===
RCS file: /var/local/pgcvs/pgsql/src/pl/plpgsql/src/scan.l,v
retrieving revision 1.42
diff -c -r1.42 scan.l
*** src/pl/plpgsql/src/scan.l	26 Jun 2005 19:16:07 -	1.42
--- src/pl/plpgsql/src/scan.l	8 Aug 2005 22:53:36 -
***
*** 186,191 
--- 186,192 
  
  ^#option		{ return O_OPTION;			}
  dump			{ return O_DUMP;			}
+ select_into_1_row	{ return O_SELECT_INTO_1_ROW;		}
  
  
  /* --

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [PATCHES] O_DIRECT for WAL writes

2005-08-09 Thread Bruce Momjian
Mark Wong wrote:
 O_DIRECT + fsync() can make sense.  It avoids the copying of data
 to the page cache before being written and will also guarantee
 that the file's metadata is also written to disk.  It also
 prevents the page cache from filling up with write data that
 will never be read (I assume it is only read if a recovery
 is necessary - which should be rare).  It can also
 help disks with write-back caches when using journaling
 file systems that use I/O barriers.  You would want to use
 large writes, since the kernel page cache won't be writing
 multiple pages for you.

Right, but it seems O_DIRECT is pretty much the same as O_DIRECT with
O_DSYNC because the data is always written to disk on write().  Our
logic is that there is nothing for fdatasync to do in most cases after
using O_DIRECT, so the O_DIRECT/fdatasync() combination doesn't make
sense.

And FreeBSD, and perhaps others, need O_SYNC or fdatasync with O_DIRECT
because O_DIRECT doesn't force stuff to disk in all cases.

 I need to look at the kernel code more to comment on O_DIRECT with
 O_SYNC.
 
 Questions:
 
 Does the database transaction logger preallocate the log file?

Yes.

 Does the logger care about the order in which each write hits the disk?

Not really.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[PATCHES] Remove all trace of EXPLAIN EXECUTE

2005-08-09 Thread Simon Riggs
There once was a rumour of an EXPLAIN EXECUTE command.

This minor patch removes all trace of that, but without disturbing other
valid occurrences of the EXECUTE command, which still lives on.

Best Regards, Simon Riggs
Index: commands/explain.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/explain.c,v
retrieving revision 1.137
diff -c -c -r1.137 explain.c
*** commands/explain.c	4 Jun 2005 02:07:09 -	1.137
--- commands/explain.c	9 Aug 2005 22:18:45 -
***
*** 93,99 
  		if (query->utilityStmt && IsA(query->utilityStmt, DeclareCursorStmt))
  			ExplainOneQuery(query, stmt, tstate);
  		else if (query->utilityStmt && IsA(query->utilityStmt, ExecuteStmt))
! 			ExplainExecuteQuery(stmt, tstate);
  		else
  			do_text_output_oneline(tstate, "Utility statements have no plan structure");
  	}
--- 93,99 
  		if (query->utilityStmt && IsA(query->utilityStmt, DeclareCursorStmt))
  			ExplainOneQuery(query, stmt, tstate);
  		else if (query->utilityStmt && IsA(query->utilityStmt, ExecuteStmt))
! 			ExplainAnalyzeQuery(stmt, tstate);
  		else
  			do_text_output_oneline(tstate, "Utility statements have no plan structure");
  	}
***
*** 211,217 
   *		EXPLAIN output
   *
   * This is exported because it's called back from prepare.c in the
!  * EXPLAIN EXECUTE case
   *
   * Note: the passed-in QueryDesc is freed when we're done with it
   */
--- 211,217 
   *		EXPLAIN output
   *
   * This is exported because it's called back from prepare.c in the
!  * EXPLAIN ANALYZE case
   *
   * Note: the passed-in QueryDesc is freed when we're done with it
   */
Index: commands/prepare.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/commands/prepare.c,v
retrieving revision 1.40
diff -c -c -r1.40 prepare.c
*** commands/prepare.c	22 Jun 2005 17:45:45 -	1.40
--- commands/prepare.c	9 Aug 2005 22:18:46 -
***
*** 535,544 
  }
  
  /*
!  * Implements the 'EXPLAIN EXECUTE' utility statement.
   */
  void
! ExplainExecuteQuery(ExplainStmt *stmt, TupOutputState *tstate)
  {
  	ExecuteStmt *execstmt = (ExecuteStmt *) stmt->query->utilityStmt;
  	PreparedStatement *entry;
--- 535,544 
  }
  
  /*
!  * Implements the 'EXPLAIN ANALYZE' utility statement.
   */
  void
! ExplainAnalyzeQuery(ExplainStmt *stmt, TupOutputState *tstate)
  {
  	ExecuteStmt *execstmt = (ExecuteStmt *) stmt->query->utilityStmt;
  	PreparedStatement *entry;

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PATCHES] Remove all trace of EXPLAIN EXECUTE

2005-08-09 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 There once was a rumour of an EXPLAIN EXECUTE command.
 This minor patch removes all trace of that, but without disturbing other
 valid occurrences of the EXECUTE command, which still lives on.

It's not as dead as you seem to think.

regression=# prepare foo as select * from tenk1;
PREPARE
regression=# explain execute foo;
 QUERY PLAN
-
 Seq Scan on tenk1  (cost=0.00..458.00 rows=1 width=244)
(1 row)

regression=#

regards, tom lane



Re: [PATCHES] COPY FROM performance improvements

2005-08-09 Thread Alon Goldshuv
I did some performance checks after the recent code commit.

The good news is that the parsing speed of COPY is now MUCH faster, which is
great. It is about 5 times faster - about 100MB/sec on my machine
(previously 20MB/sec at best, usually less).

The better news is that my original patch's parsing speed reaches 120MB/sec,
about 20MB/sec faster than the code that's now in CVS. This can be
significant in the long run and for large data sets. Maybe we
can improve the current code a bit more to reach this number.

I performed those measurements by executing *only the parsing logic* of the
COPY pipeline. All data conversion (functioncall3(string...)) and tuple
handling (form_heaptuple etc...) and insertion were manually disabled. So
the only code measured is reading from disk and parsing to the attribute
level.

Cheers,
Alon.

On 8/7/05 1:21 AM, Luke Lonergan [EMAIL PROTECTED] wrote:

 Tom,
 
 On 8/6/05 9:08 PM, Tom Lane [EMAIL PROTECTED] wrote:
 
 Luke Lonergan [EMAIL PROTECTED] writes:
 I had some difficulty in generating test cases that weren't largely
 I/O-bound, but AFAICT the patch as applied is about the same speed
 as what you submitted.
 
 You achieve the important objective of knocking the parsing stage down a
 lot, but your parsing code is actually about 20% slower than Alon's.
 
 I would like to see the exact test case you are using to make this
 claim; the tests I did suggested my code is the same speed or faster.
 
 I showed mine - you show yours :-)  Apparently our e-mail crossed.
   
 As best I can tell, my version of CopyReadAttributes is significantly
 quicker than Alon's, approximately balancing out the fact that my
 version of CopyReadLine is slower.  I did the latter first, and would
 now be tempted to rewrite it in the same style as CopyReadAttributes,
 ie one pass of memory-to-memory copy using pointers rather than buffer
 indexes.
 
 See previous timings - looks like Alon's parsing is substantially faster.
 However, I'd like him to confirm by running with the shunt placed at
 different stages, in this case between parse and attribute conversion (not
 attribute parse).
  
 BTW, late today I figured out a way to get fairly reproducible
 non-I/O-bound numbers about COPY FROM: use a trigger that suppresses
 the actual inserts, thus:
 
 create table foo ...
 create function noway() returns trigger as
 'begin return null; end' language plpgsql;
 create trigger noway before insert on foo
   for each row execute procedure noway();
 then repeat:
 copy foo from '/tmp/foo.data';
 
 Cool!  That's a better way than hacking code and inserting shunts.
  
 Alon will likely hit this tomorrow.
 
 - Luke
 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PATCHES] COPY FROM performance improvements

2005-08-09 Thread Andrew Dunstan
Alon Goldshuv wrote:


I performed those measurements by executing *only the parsing logic* of the
COPY pipeline. All data conversion (functioncall3(string...)) and tuple
handling (form_heaptuple etc...) and insertion were manually disabled. So
the only code measured is reading from disk and parsing to the attribute
level.
Arguably this might exaggerate the effect quite significantly.  Users
will want to know the real-time effect on a complete COPY. Depending on
how large a share of the total time parsing accounts for, your 20%
improvement in parsing might yield much less than a 20% improvement in COPY.


Like you, I'm happy we have seen a 5 times improvement in parsing. Is it 
possible you can factor out something smallish from your patch that 
might make up the balance?


cheers

andrew



Re: [PATCHES] [HACKERS] Autovacuum loose ends

2005-08-09 Thread Alvaro Herrera
On Fri, Jul 29, 2005 at 03:33:09PM -0400, Tom Lane wrote:

 It occurs to me that vacuuming to prevent XID wraparound is not the only
 reason to do DB-wide vacuums: we also need to keep
 pg_database.datvacuumxid from getting too old, else we will have
 problems with clog bloat.  We may need to rethink the test used.

Hmm.  I have a patch for this, but now that it's ready, I wonder if it's
really needed.  If I understand vacuum_set_xid_limits() correctly, it's
very difficult for the vacuumxid to be far behind the freeze limit.  And
in the case it's actually behind, then there's nothing we can do -- the
only way out is for the user to end the long-running transaction.

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar
al alfarero y ver qué formas se pueden sacar del otro (C. Halloway en
La Feria de las Tinieblas, R. Bradbury)



Re: [PATCHES] [HACKERS] Autovacuum loose ends

2005-08-09 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Hmm.  I have a patch for this, but now that it's ready, I wonder if it's
 really needed.  If I understand vacuum_set_xid_limits() correctly, it's
 very difficult for the vacuumxid to be far behind the freeze limit.

Umm ... they can be close together, or a billion XIDs apart, depending
on whether the FREEZE option was used.

regards, tom lane



Re: [PATCHES] [HACKERS] Autovacuum loose ends

2005-08-09 Thread Alvaro Herrera
On Tue, Aug 09, 2005 at 11:24:40PM -0400, Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Hmm.  I have a patch for this, but now that it's ready, I wonder if it's
  really needed.  If I understand vacuum_set_xid_limits() correctly, it's
  very difficult for the vacuumxid to be far behind the freeze limit.
 
 Umm ... they can be close together, or a billion XIDs apart, depending
 on whether the FREEZE option was used.

Sorry, my point was that vacuumxid is generally going to be higher than
freeze-xid, and where it isn't, a simple vacuum can't fix it.

But now that I think about it, maybe the point is that if a long-running
transaction (perhaps a billion transactions old?) was running when
the last database-wide vacuum was run, then vacuumxid is going to be
older than freeze-xid, so we may need a database-wide vacuum to fix that
even though the freeze-xid is not old enough.

Is that right?

-- 
Alvaro Herrera (alvherre[a]alvh.no-ip.org)
We take risks not to escape from life, but to prevent life escaping from us.



Re: [PATCHES] Fix oversight in pts_error_callback()

2005-08-09 Thread Qingqing Zhou

Tom Lane [EMAIL PROTECTED] writes:

 Please exhibit a case in which you feel this is needed.


Suppose I want to print some debug info in parseTypeString() like this:

+ elog(DEBUG1, "parse type %s", buf.data);
 raw_parsetree_list = raw_parser(buf.data);

Rebuild the server, psql it:

test=# set log_min_messages = debug1;
SET
test=# select regtypein('int4');
DEBUG:  parse type SELECT NULL::int4
CONTEXT:  invalid type name "int4"
 regtypein
---
 integer
(1 row)

The CONTEXT info is bogus.


