Re: [PATCHES] Small code clean-up

2007-03-28 Thread Magnus Hagander
On Wed, Mar 28, 2007 at 10:23:09AM +0900, ITAGAKI Takahiro wrote:
 Here are two small code clean-ups in initdb and win32_shmem.
 
 pg_char_to_encoding() was redundant in initdb because
 pg_valid_server_encoding() returns the same result if the encoding is valid.
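 
 A sketch of the simplification being described (illustrative only, not the
 actual initdb hunk): pg_valid_server_encoding() already returns the encoding
 number, or -1 when the name is not a valid server encoding, so the extra
 lookup can be dropped.
 
     int    enc = pg_valid_server_encoding(encoding_name);
 
     if (enc < 0)
         exit_nicely();    /* hypothetical error path; not initdb's exact handling */
     /* use enc directly; no separate pg_char_to_encoding() call is needed */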
 
 Changes in win32_shmem suppress the following warnings.
 | pg_shmem.c: In function `PGSharedMemoryCreate':
 | pg_shmem.c:137: warning: long unsigned int format, Size arg (arg 2)
 | pg_shmem.c:159: warning: long unsigned int format, Size arg (arg 2)
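 
 The usual fix for this class of warning is an explicit cast of the Size
 argument so that it matches the format string; a minimal sketch (not
 necessarily the exact hunk that was applied):
 
     Size   size = 1024 * 1024;          /* hypothetical value */
 
     elog(LOG, "shared memory segment is %lu bytes",
          (unsigned long) size);         /* cast Size to match the %lu format */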
 

When you send two completely unrelated patches, please send them in
separate emails.

I have applied the win32 shmem part, thanks. Haven't had time to look into
the other one.

//Magnus




Re: [PATCHES] LIMIT/SORT optimization

2007-03-28 Thread Heikki Linnakangas

Some comments on the patch below.

Gregory Stark wrote:


+ /* tuplesort_set_bound - External API to set a bound on a tuplesort
+  *
+  * Must be called before inserting any tuples.
+  *
+  * Sets a maximum number of tuples the caller is interested in. The first
+  * bound tuples are maintained using a simple insertion sort and returned
+  * normally. Any tuples that lie after those in the sorted result set are
+  * simply thrown out.
+  */


The "Must be called before inserting any tuples" part contradicts the
comment in the header file:



+ /* This can be called at any time before performsort to advise tuplesort that
+  * only this many tuples are interesting. If that many tuples fit in memory and
+  * we haven't already overflowed to disk then tuplesort will switch to a simple
+  * insertion sort or heap sort and throw away the uninteresting tuples.
+  */


The latter seems to be correct.
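
To make the intended behaviour concrete, here is a rough sketch of the
"keep only the first bound tuples" idea using plain insertion into a
fixed-size array (the names are invented; this is not the patch's code):

    static void
    bounded_insert(int *kept, int *nkept, int bound, int newval)
    {
        int     i;

        /* already full and the new value is worse than everything kept? */
        if (*nkept == bound && newval >= kept[bound - 1])
            return;                     /* throw it away */

        /* find the slot for newval, shifting larger entries to the right */
        i = (*nkept < bound) ? *nkept : bound - 1;
        for (; i > 0 && kept[i - 1] > newval; i--)
            kept[i] = kept[i - 1];
        kept[i] = newval;

        if (*nkept < bound)
            (*nkept)++;
    }

In the real tuplesort the comparisons go through the sort-key comparator and
the array holds SortTuples, but the discard logic is the same.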



! /*
!  * Convert the existing unordered list of sorttuples to a heap in either order.
!  * This used to be inline but now there are three separate places we heap sort
!  * (initializing the tapes, if we have a bounded output, and any time the user
!  * says he doesn't want to use glibc's qsort).
!  *
!  * NOTE heapify passes false for checkIndex (and stores a constant tupindex
!  * passed as a parameter) even though we use heaps for multi-run sources
!  * because we only heapify when we're doing in-memory sorts or in inittapes
!  * before there's any point in comparing tupindexes.
!  */
! 
! static void
! tuplesort_heapify(Tuplesortstate *state, int tupindex, HeapOrder heaporder)
! {


The comment claims that we use heap sort when the user says he doesn't 
want to use glibc's qsort. I recall that we always use our own qsort 
implementation nowadays. And we never used the heap sort as a qsort 
replacement, did we?


In performsort, you convert the in-memory heap to a sorted list in one 
go. I wonder if it would be better to switch to a new TSS_ALLINHEAP 
state that means all tuples are now in the in-memory heap, and call 
tuplesort_heap_siftup in gettuple. It probably doesn't make much 
difference in most cases, but if there's another limit node in the plan 
with a smaller limit or the client only fetches a few top rows with a 
cursor you'd avoid unheapifying tuples that are just thrown away later.
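
A minimal sketch of what gettuple might look like under that scheme, assuming
a hypothetical TSS_ALLINHEAP state and assuming the heap were ordered with the
next output tuple at its root (as noted later in the thread, the bounded heap
is actually ordered the other way):

    case TSS_ALLINHEAP:                     /* hypothetical state */
        if (state->memtupcount == 0)
            return false;                   /* heap exhausted */
        *stup = state->memtuples[0];        /* root would be the next tuple */
        tuplesort_heap_siftup(state, false);    /* remove root, restore the heap */
        return true;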


There are a few blocks of code surrounded with #if 0 ... #endif. Are those
just leftovers that should be removed, or things that still need to be
finished and enabled?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] [PATCH] add CLUSTER table ORDER BY index

2007-03-28 Thread Heikki Linnakangas

Tom Lane wrote:

Gregory Stark [EMAIL PROTECTED] writes:

Holger Schurig [EMAIL PROTECTED] writes:

* psql tab-completion, it favours now CLUSTER table ORDER BY index



It occurs to me (sorry that I didn't think of this earlier) that if we're
going to use ORDER BY it really ought to take a list columns.


Surely you jest.  The point is to be ordered the same as the index, no?


There are some narrow corner cases where it makes sense to CLUSTER without
an index:


* You're going to build an index with the same order after clustering.
It's cheaper to sort the data first and then create the index than to build
the index, sort the data, and rebuild the index.


* You're doing a lot of large sort + merge joins. Sorts are cheaper if 
the data is already in order. One might ask, though, why don't you just 
create an index then...


* You're using CLUSTER as a VACUUM FULL replacement, and there's no 
handy index to sort with. (It'd be better if we had a VACUUM FULL that 
rewrites the table like CLUSTER, though)


Though I don't think we're implementing CLUSTER table ORDER BY col1, 
col2 anytime soon, ORDER BY does imply that a list of columns is to 
follow. How about CLUSTER table USING index?


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] [HACKERS] Full page writes improvement, code update

2007-03-28 Thread Simon Riggs
On Wed, 2007-03-28 at 10:54 +0900, Koichi Suzuki wrote:

 As written below, full page writes can be
 categorized as follows:
 
 1) Needed for crash recovery: first page update after each checkpoint.
 This has to be kept in WAL.
 
 2) Needed for archive recovery: page update between pg_start_backup and
 pg_stop_backup. This has to be kept in archive log.
 
 3) For log-shipping slave such as pg_standby: no full page writes will
 be needed for this purpose.
 
 My proposal deals with 2). So, if we mark each full_page_write, I'd
 rather mark when this is needed. Still need only one bit because the
 case 3) does not need any mark.

I'm very happy with this proposal, though I do still have some points in
detailed areas.

If you accept that 1 & 2 are valid goals, then 1 & 3 or 1, 2 & 3 are
also valid goals, ISTM. i.e. you might choose to use full_page_writes on
the primary and yet would like to see optimised data transfer to the
standby server. In that case, you would need the mark.

  - Not sure why we need full_page_compress, why not just mark them
  always? That harms no one. (Did someone else ask for that? If so, keep
  it)
 
 No, no one asked to have a separate option. There would be no harm
 in doing so.  So, if we mark each full_page_write, I'd
 rather mark when this is needed. Still need only one bit because the
 case 3) does not need any mark.

OK, different question: 
Why would anyone ever set full_page_compress = off? 

Why have a parameter that does so little? ISTM this is:

i) one more thing to get wrong

ii) cheaper to mark the block when appropriate than to perform the if()
test each time. That can be done only in the path where backup blocks
are present.

iii) If we mark the blocks every time, it allows us to do an offline WAL
compression. If the blocks aren't marked that option is lost. The bit is
useful information, so we should have it in all cases.
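
For concreteness, such a mark could be a single bit in the record header; a
rough sketch (the flag name and the test are invented here, not necessarily
what the patch actually does):

    /* hypothetical marker: "this full-page image is only needed for crash
     * recovery, so an offline tool such as pg_compresslog may strip it"
     */
    #define XLR_FPW_REMOVABLE   0x08

        if (!during_online_backup)              /* invented condition */
            record->xl_info |= XLR_FPW_REMOVABLE;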

  - OTOH I'd like to see an explicit parameter set during recovery since
  you're asking the main recovery path to act differently in case a single
  bit is set/unset. If you are using that form of recovery, we should say
  so explicitly, to keep everybody else safe.
 
 The only thing I had to do was to create dummy full page writes to
 maintain LSNs. Full page writes are omitted in archive log. We have to
 keep LSNs the same as those in the original WAL. In this case, recovery has to
 read logical log, not dummy full page writes. On the other hand, if
 both logical log and real full page writes are found in a log record,
 the recovery has to use real full page writes.

I apologise for not understanding your reply, perhaps my original
request was unclear.

In recovery.conf, I'd like to see a parameter such as

dummy_backup_blocks = off (default) | on

to explicitly indicate to the recovery process that backup blocks are
present, yet they are garbage and should be ignored. Having garbage data
within the system is potentially dangerous and I want to be told by the
user that they were expecting that and it's OK to ignore that data.
Otherwise I want to throw informative errors. Maybe it seems OK now, but
the next change to the system may have unintended consequences and it
may not be us making the change. "It's OK, the Alien will never escape
from the lab" is the starting premise for many good sci-fi horrors, and I
want to watch them, not be in one myself. :-)

We can call it other things, of course. e.g.
ignore_dummy_blocks
decompressed_blocks
apply_backup_blocks

 Yes I believe so. As pg_standby does not have any chance of encountering
 partial page writes, I believe you can omit all the full page
 writes. Of course, as Tom Lane suggested in
 http://archives.postgresql.org/pgsql-hackers/2007-02/msg00034.php
 removing full page writes can lose a chance to recover from
 partial/inconsistent writes in the crash of pg_standby. In this case,
 we have to import a backup and archive logs (with full page writes
 during the backup) to recover. (We have to import them when the file
 system crashes anyway). If it's okay, I believe
 pg_compresslog/pg_decompresslog can be integrated with pg_standby.
 
 Maybe we can work together to include pg_compresslog/pg_decompresslog in 
 pg_standby.

ISTM there are two options.

I think this option is already possible:

1. Allow pg_decompresslog to operate on a file, replacing it with the
expanded form, like gunzip, so we would do this:
  restore_command = 'pg_standby %f decomp.tmp && pg_decompresslog
decomp.tmp %p'

though the decomp.tmp file would not get properly initialised or cleaned
up when we finish.

whereas this will take additional work

2. Allow pg_standby to write to stdout, so that we can do this:
  restore_command = 'pg_standby %f | pg_decompresslog - %p'

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [PATCHES] LIMIT/SORT optimization

2007-03-28 Thread Gregory Stark
Heikki Linnakangas [EMAIL PROTECTED] writes:

 Some comments on the patch below.

Thanks!

 Gregory Stark wrote:



 The comment claims that we use heap sort when the user says he doesn't want to
 use glibc's qsort. I recall that we always use our own qsort implementation
 nowadays. And we never used the heap sort as a qsort replacement, did we?

Thanks, I had a version that used heap sort instead of qsort but that was
before I discovered what you said. So I stripped that useless bit out.

 In performsort, you convert the in-memory heap to a sorted list in one go. I
 wonder if it would be better to switch to a new TSS_ALLINHEAP state that means
 all tuples are now in the in-memory heap, and call tuplesort_heap_siftup in
 gettuple. 

The problem is that the heap is backwards. The head of the heap is the
greatest, i.e., the last element we want to return. Hm, is there such a thing as
a two-way heap?
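
For context, that is essentially the final phase of heapsort: with the bound
smallest tuples held in a max-heap, repeatedly popping the root (the largest
survivor) into the tail of the same array yields ascending order in place. A
rough sketch with invented helpers, not the patch's code:

    while (heap_size > 0)
    {
        int     largest = heap[0];          /* root = largest remaining */

        heap[0] = heap[--heap_size];        /* move the last leaf to the root */
        sift_down(heap, 0, heap_size);      /* restore the max-heap property */
        heap[heap_size] = largest;          /* grow a sorted tail in place */
    }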

 There are a few blocks of code surrounded with #if 0 ... #endif. Are those just
 leftovers that should be removed, or things that still need to be finished
 and enabled?

Uhm, I don't remember, will go look, thanks.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [PATCHES] autovacuum: multiple workers

2007-03-28 Thread Alvaro Herrera
Simon Riggs wrote:
 On Tue, 2007-03-27 at 17:41 -0400, Alvaro Herrera wrote:
 
  The main change is to have an array of Worker structs in shared memory;
  each worker checks the current table of all other Workers, and skips a
  table that's being vacuumed by any of them.  It also rechecks the table
  before vacuuming, which removes the problem of redundant vacuuming.
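 
  A rough sketch of that per-table skip check (the structure and field names
  here are invented for illustration; locking is omitted):
 
      bool
      table_claimed_by_other_worker(Oid relid, int my_slot)
      {
          int     i;
 
          for (i = 0; i < MaxWorkers; i++)
          {
              if (i == my_slot)
                  continue;
              if (WorkerArray[i].wi_tableoid == relid)
                  return true;    /* another worker is already vacuuming it */
          }
          return false;
      }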
 
 Slightly OT: Personally, I'd like it if we added an array for all
 special backends, with configurable behaviour. That way it would be
 easier to have multiple copies of other backends of any flavour using
 the same code, as well as adding others without cutting and pasting each
 time. That part of the postmaster code has oozed sideways in the past
 few years and seems in need of some love. (A former sinner repents).

I'm not really thrilled about it, each case being so different from the
others.  For the autovac workers, for example, the array in shared
memory is kept on the autovac launcher, _not_ in the postmaster.  In the
postmaster, they are kept in the regular BackendList array, so they
don't fit on that array you describe.  And as far as the other processes
are concerned, every one of them is a special case, and we don't add new
ones frequently.  In fact, the autovac work is the only thing that has
added new processes in a long time, since the Windows port (which
required the logger process) and the bgwriter were introduced.

How would you make it configurable?  Have a struct containing function
pointers, each function being called when some event takes place?

What other auxiliary processes are you envisioning, anyway?

In any case I don't think this is something that would be good to attack
this late in the devel cycle -- we could discuss it for 8.4 though.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [PATCHES] [PATCH] add CLUSTER table ORDER BY index

2007-03-28 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 Though I don't think we're implementing CLUSTER table ORDER BY col1, 
 col2 anytime soon, ORDER BY does imply that a list of columns is to 
 follow. How about CLUSTER table USING index?

+1 ... AFAIR there was 0 discussion of the exact syntax before,
so I don't feel wedded to ORDER BY.

regards, tom lane



Re: [PATCHES] autovacuum: multiple workers

2007-03-28 Thread Simon Riggs
On Wed, 2007-03-28 at 09:39 -0400, Alvaro Herrera wrote:
 What other auxiliary processes are you envisioning, anyway?

WAL Writer, multiple bgwriters, checkpoint process, parallel query and
sort slaves, plus all the ones I haven't dreamed of yet.

No need to agree with my short list, but we do seem to keep adding them
on a regular basis.

 In any case I don't think this is something that would be good to
 attack
 this late in the devel cycle -- we could discuss it for 8.4 though.

OK

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





[PATCHES] scrollable cursor support without MOVE statement

2007-03-28 Thread Pavel Stehule


This is the most recent email I have on this.  Was the scrollable patch
applied?  If not, would you resubmit?



I resubmit the scrollable cursor patch.

Regards
Pavel Stehule

*** ./doc/src/sgml/plpgsql.sgml.orig	2007-01-26 20:30:17.0 +0100
--- ./doc/src/sgml/plpgsql.sgml	2007-01-26 21:33:38.0 +0100
***
*** 2354,2360 
  internally to avoid memory problems.) A more interesting usage is to
  return a reference to a cursor that a function has created, allowing the
  caller to read the rows. This provides an efficient way to return
! large row sets from functions.
 /para
  
 sect2 id=plpgsql-cursor-declarations
--- 2354,2361 
  internally to avoid memory problems.) A more interesting usage is to
  return a reference to a cursor that a function has created, allowing the
  caller to read the rows. This provides an efficient way to return
! large row sets from functions. PL/pgSQL allows to use scrollable 
! cursors.
 /para
  
 sect2 id=plpgsql-cursor-declarations
***
*** 2368,2374 
   Another way is to use the cursor declaration syntax,
   which in general is:
  synopsis
! replaceablename/replaceable CURSOR optional ( replaceablearguments/replaceable ) /optional FOR replaceablequery/replaceable;
  /synopsis
   (literalFOR/ may be replaced by literalIS/ for
   productnameOracle/productname compatibility.)
--- 2369,2375 
   Another way is to use the cursor declaration syntax,
   which in general is:
  synopsis
! replaceablename/replaceable optional optional NO /optional SCROLL /optional CURSOR optional ( replaceablearguments/replaceable ) /optional FOR replaceablequery/replaceable;
  /synopsis
   (literalFOR/ may be replaced by literalIS/ for
   productnameOracle/productname compatibility.)
***
*** 2517,2523 
   titleliteralFETCH//title
  
  synopsis
! FETCH replaceablecursor/replaceable INTO replaceabletarget/replaceable;
  /synopsis
  
   para
--- 2518,2524 
   titleliteralFETCH//title
  
  synopsis
! FETCH optional replaceabledirection/replaceable FROM /optional replaceablecursor/replaceable INTO replaceabletarget/replaceable;
  /synopsis
  
   para
***
*** 2526,2539 
variable, or a comma-separated list of simple variables, just like
commandSELECT INTO/command.  As with commandSELECT
 INTO/command, the special variable literalFOUND/literal may
!   be checked to see whether a row was obtained or not.
   /para
- 
  para
   An example:
  programlisting
  FETCH curs1 INTO rowvar;
  FETCH curs2 INTO foo, bar, baz;
  /programlisting
 /para
   /sect3
--- 2527,2545 
variable, or a comma-separated list of simple variables, just like
commandSELECT INTO/command.  As with commandSELECT
 INTO/command, the special variable literalFOUND/literal may
!   be checked to see whether a row was obtained or not. More details
! 	  about replaceabledirection/replaceable you can find in
!   xref linkend=sql-fetch without literalBACKWARD/ and literalFORWARD/ keywords.
! 	  Statement commandFETCH/command in applicationPL/pgSQL/ returns only one 
! 	  or zero row every time.
   /para
  para
   An example:
  programlisting
  FETCH curs1 INTO rowvar;
  FETCH curs2 INTO foo, bar, baz;
+ FETCH LAST INTO x, y;
+ FETCH RELATIVE -2 INTO x;
  /programlisting
 /para
   /sect3
*** ./doc/src/sgml/spi.sgml.orig	2007-01-14 12:37:19.0 +0100
--- ./doc/src/sgml/spi.sgml	2007-01-26 11:46:18.0 +0100
***
*** 800,805 
--- 800,937 
  
  !-- *** --
  
+ refentry id=spi-spi-prepare-cursor
+  refmeta
+   refentrytitleSPI_prepare_cursor/refentrytitle
+  /refmeta
+ 
+  refnamediv
+   refnameSPI_prepare_cursor/refname
+   refpurposeprepare a plan for a cursor, without executing it yet/refpurpose
+  /refnamediv
+ 
+  indextermprimarySPI_prepare_cursor/primary/indexterm
+ 
+  refsynopsisdiv
+ synopsis
+ void * SPI_prepare_cursor(const char * parametercommand/parameter, int parameternargs/parameter, Oid * parameterargtypes/parameter, int parameteroptions/parameter)
+ /synopsis
+  /refsynopsisdiv
+ 
+  refsect1
+   titleDescription/title
+ 
+   para
+functionSPI_prepare_cursor/function creates and returns an execution
+plan for the specified select but doesn't execute the command.
+This function should only be called from a connected procedure. This
+function allows set cursor's options. 
+   /para
+ 
+   para
+When the same or a similar command is to be executed repeatedly, it
+may be advantageous to perform the planning only once.
+

Re: [PATCHES] patch adding new regexp functions

2007-03-28 Thread Jeremy Drake
 Jeremy Drake wrote:
  On Thu, 22 Mar 2007, Tom Lane wrote:
 
   I'd vote for making this new code look like the rest of it, to wit
   hardwire the values.
 
  Attached please find a patch which does this.

I just realized that the last patch removed all usage of fcinfo in the
setup_regexp_matches function, so this version of the patch also removes
it as a parameter to that function.

-- 
Think of it!  With VLSI we can pack 100 ENIACs in 1 sq. cm.!
Index: src/backend/utils/adt/regexp.c
===
RCS file: 
/home/jeremyd/local/postgres/cvsuproot/pgsql/src/backend/utils/adt/regexp.c,v
retrieving revision 1.70
diff -c -r1.70 regexp.c
*** src/backend/utils/adt/regexp.c  20 Mar 2007 05:44:59 -  1.70
--- src/backend/utils/adt/regexp.c  28 Mar 2007 18:57:28 -
***
*** 30,35 
--- 30,36 
  #include postgres.h
  
  #include access/heapam.h
+ #include catalog/pg_type.h
  #include funcapi.h
  #include regex/regex.h
  #include utils/builtins.h
***
*** 95,106 
size_toffset;
  
re_comp_flags flags;
- 
-   /* text type info */
-   Oid   param_type;
-   int16 typlen;
-   bool  typbyval;
-   char  typalign;
  } regexp_matches_ctx;
  
  typedef struct regexp_split_ctx
--- 96,101 
***
*** 119,126 
  static intnum_res = 0;/* # of cached re's */
  static cached_re_str re_array[MAX_CACHED_RES];/* cached re's */
  
! static regexp_matches_ctx *setup_regexp_matches(FunctionCallInfo fcinfo,
!   
text *orig_str, text *pattern,

text *flags);
  static ArrayType *perform_regexp_matches(regexp_matches_ctx *matchctx);
  
--- 114,120 
  static intnum_res = 0;/* # of cached re's */
  static cached_re_str re_array[MAX_CACHED_RES];/* cached re's */
  
! static regexp_matches_ctx *setup_regexp_matches(text *orig_str, text *pattern,

text *flags);
  static ArrayType *perform_regexp_matches(regexp_matches_ctx *matchctx);
  
***
*** 760,767 
oldcontext = 
MemoryContextSwitchTo(funcctx-multi_call_memory_ctx);
  
/* be sure to copy the input string into the multi-call ctx */
!   matchctx = setup_regexp_matches(fcinfo, 
PG_GETARG_TEXT_P_COPY(0),
!   
pattern, flags);
  
MemoryContextSwitchTo(oldcontext);
funcctx-user_fctx = (void *) matchctx;
--- 754,761 
oldcontext = 
MemoryContextSwitchTo(funcctx-multi_call_memory_ctx);
  
/* be sure to copy the input string into the multi-call ctx */
!   matchctx = setup_regexp_matches(PG_GETARG_TEXT_P_COPY(0), 
pattern,
!   
flags);
  
MemoryContextSwitchTo(oldcontext);
funcctx-user_fctx = (void *) matchctx;
***
*** 822,828 
  }
  
  static regexp_matches_ctx *
! setup_regexp_matches(FunctionCallInfo fcinfo, text *orig_str, text *pattern, 
text *flags)
  {
regexp_matches_ctx  *matchctx = palloc(sizeof(regexp_matches_ctx));
  
--- 816,822 
  }
  
  static regexp_matches_ctx *
! setup_regexp_matches(text *orig_str, text *pattern, text *flags)
  {
regexp_matches_ctx  *matchctx = palloc(sizeof(regexp_matches_ctx));
  
***
*** 835,845 
matchctx-pmatch = palloc(sizeof(regmatch_t) * 
(matchctx-cpattern-re_nsub + 1));
matchctx-offset = 0;
  
-   /* get text type oid, too lazy to do it some other way */
-   matchctx-param_type = get_fn_expr_argtype(fcinfo-flinfo, 0);
-   get_typlenbyvalalign(matchctx-param_type, matchctx-typlen,
-matchctx-typbyval, 
matchctx-typalign);
- 
matchctx-wide_str = palloc(sizeof(pg_wchar) * (matchctx-orig_len + 
1));
matchctx-wide_len = pg_mb2wchar_with_len(VARDATA(matchctx-orig_str),

  matchctx-wide_str, matchctx-orig_len);
--- 829,834 
***
*** 915,923 
dims[0] = 1;
}
  
return construct_md_array(elems, nulls, ndims, dims, lbs,
! matchctx-param_type, 
matchctx-typlen,
! matchctx-typbyval, 
matchctx-typalign);
  }
  
  Datum
--- 904,912 
dims[0] = 1;
}
  
+   /* XXX: this hardcodes 

Re: [PATCHES] [PATCH] add CLUSTER table ORDER BY index

2007-03-28 Thread Holger Schurig
 +1 ... AFAIR there was 0 discussion of the exact syntax before,
 so I don't feel wedded to ORDER BY.

A changed patch comes with the next e-mail.

I cannot create a patch for CLUSTER table USING col1,col2,col3,
because I'm not yet deep into PostgreSQL and don't have the time
for that. I just thought that the skill level needed for the TODO-
item was in my range :-)



Re: [PATCHES] [PATCH] add CLUSTER table USING index

2007-03-28 Thread Alvaro Herrera
FWIW you don't need to patch the TODO files.  They will be updated by
Bruce.  (And in any case we don't remove the entries, but rather mark
them with a - meaning done for the next release).

Also, sql_help.h is a generated file.  You need to change the appropriate
SGML source (doc/src/sgml/ref/cluster.sgml I think).

   /*
 !  * If we have CLUSTER sth ORDER BY, then add the index as well.
*/
 ! else if (pg_strcasecmp(prev3_wd, CLUSTER) == 0 
 !  pg_strcasecmp(prev_wd, USING) == 0 
 ! Xpg_strcasecmp(prev2_wd, ORDER) == 0)
   {
 ! completion_info_charp = prev3_wd;
 ! COMPLETE_WITH_QUERY(Query_for_index_of_table);
   }

Huh?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PATCHES] [PATCH] add CLUSTER table USING index

2007-03-28 Thread Holger Schurig
 Huh?

You're right. I should have done a quilt refresh -c before re-posting the 
patch.



Re: [PATCHES] scrollable cursor support without MOVE statement

2007-03-28 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Pavel Stehule wrote:
 
 This is the most recent email I have on this.  Was the scrollable patch
 applied?  If not, would you resubmit?
 
 
 I resubmit scrollable cursor patch
 
 Regards
 Pavel Stehule
 

[ Attachment, skipping... ]

 

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



[PATCHES] [PATCH] add CLUSTER table USING index (take 2)

2007-03-28 Thread Holger Schurig
Index: src/doc/src/sgml/ref/cluster.sgml
===
*** src.orig/doc/src/sgml/ref/cluster.sgml  2007-03-28 23:02:12.0 
+0200
--- src/doc/src/sgml/ref/cluster.sgml   2007-03-28 23:03:14.0 +0200
***
*** 20,27 
  
   refsynopsisdiv
  synopsis
! CLUSTER replaceable class=PARAMETERindexname/replaceable ON 
replaceable class=PARAMETERtablename/replaceable
! CLUSTER replaceable class=PARAMETERtablename/replaceable
  CLUSTER
  /synopsis
   /refsynopsisdiv
--- 20,26 
  
   refsynopsisdiv
  synopsis
! CLUSTER replaceable class=PARAMETERtablename/replaceable [ USING 
replaceable class=PARAMETERindexname/replaceable ]
  CLUSTER
  /synopsis
   /refsynopsisdiv
Index: src/src/backend/parser/gram.y
===
*** src.orig/src/backend/parser/gram.y  2007-03-28 22:58:48.0 +0200
--- src/src/backend/parser/gram.y   2007-03-28 22:59:15.0 +0200
***
*** 209,215 
  
  %type str   relation_name copy_file_name
database_name access_method_clause 
access_method attr_name
!   index_name name file_name
  
  %type list  func_name handler_name qual_Op qual_all_Op subquery_Op
opt_class opt_validator
--- 209,215 
  
  %type str   relation_name copy_file_name
database_name access_method_clause 
access_method attr_name
!   index_name name file_name opt_cluster_using
  
  %type list  func_name handler_name qual_Op qual_all_Op subquery_Op
opt_class opt_validator
***
*** 5327,5332 
--- 5327,5333 
   *
   *QUERY:
   *cluster index_name on qualified_name
+  *cluster qualified_name USING index_name
   *cluster qualified_name
   *cluster
   *
***
*** 5340,5350 
   n-indexname = $2;
   $$ = (Node*)n;
}
!   | CLUSTER qualified_name
{
   ClusterStmt *n = makeNode(ClusterStmt);
   n-relation = $2;
!  n-indexname = NULL;
   $$ = (Node*)n;
}
| CLUSTER
--- 5341,5351 
   n-indexname = $2;
   $$ = (Node*)n;
}
!   | CLUSTER qualified_name opt_cluster_using
{
   ClusterStmt *n = makeNode(ClusterStmt);
   n-relation = $2;
!  n-indexname = $3;
   $$ = (Node*)n;
}
| CLUSTER
***
*** 5356,5361 
--- 5357,5368 
}
;
  
+ opt_cluster_using:
+   USING index_name{ $$ = $2; }
+   | /*EMPTY*/ { $$ = NULL; }
+   ;
+ 
+ 
  /*
   *
   *QUERY:
Index: src/src/bin/psql/tab-complete.c
===
*** src.orig/src/bin/psql/tab-complete.c2007-03-28 22:58:48.0 
+0200
--- src/src/bin/psql/tab-complete.c 2007-03-28 22:59:15.0 +0200
***
*** 822,832 
  
COMPLETE_WITH_LIST(list_COLUMNALTER);
}
!   else if (pg_strcasecmp(prev3_wd, TABLE) == 0 
!pg_strcasecmp(prev_wd, CLUSTER) == 0)
COMPLETE_WITH_CONST(ON);
else if (pg_strcasecmp(prev4_wd, TABLE) == 0 
-pg_strcasecmp(prev2_wd, CLUSTER) == 0 
 pg_strcasecmp(prev_wd, ON) == 0)
{
completion_info_charp = prev3_wd;
--- 822,830 
  
COMPLETE_WITH_LIST(list_COLUMNALTER);
}
!   else if (pg_strcasecmp(prev3_wd, TABLE) == 0)
COMPLETE_WITH_CONST(ON);
else if (pg_strcasecmp(prev4_wd, TABLE) == 0 
 pg_strcasecmp(prev_wd, ON) == 0)
{
completion_info_charp = prev3_wd;
***
*** 929,952 
  
/*
 * If the previous word is CLUSTER and not without produce list of
!* indexes.
 */
else if (pg_strcasecmp(prev_wd, CLUSTER) == 0 
 pg_strcasecmp(prev2_wd, WITHOUT) != 0)
!   

Re: [PATCHES] patch adding new regexp functions

2007-03-28 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Jeremy Drake wrote:
  Jeremy Drake wrote:
   On Thu, 22 Mar 2007, Tom Lane wrote:
  
I'd vote for making this new code look like the rest of it, to wit
hardwire the values.
  
   Attached please find a patch which does this.
 
 I just realized that the last patch removed all usage of fcinfo in the
 setup_regexp_matches function, so this version of the patch also removes
 it as a parameter to that function.
 
 -- 
 Think of it!  With VLSI we can pack 100 ENIACs in 1 sq. cm.!

[ Attachment, skipping... ]

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] Replace badly licensed blf.c in pgcrypto

2007-03-28 Thread Neil Conway

Marko Kreen wrote:

Replace 4-clause licensed blf.[ch] with the blowfish implementation
from PuTTY, which is under a minimal BSD/MIT license.


Applied -- thanks for the patch.

-Neil




Re: [PATCHES] patch adding new regexp functions

2007-03-28 Thread Neil Conway

Jeremy Drake wrote:

I just realized that the last patch removed all usage of fcinfo in the
setup_regexp_matches function, so this version of the patch also removes
it as a parameter to that function.

Applied, thanks.


-Neil




Re: [PATCHES] Fast CLUSTER

2007-03-28 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 [ make CLUSTER skip WAL when possible ]

Applied with some editorialization.

regards, tom lane
