Re: [HACKERS] Copyright (C) 1996-2002

2003-12-02 Thread Jean-Michel POURE
On Tuesday 25 November 2003 at 07:32, Randolf Richardson wrote:
 I'm curious, has anyone consulted with a lawyer on this?

Yes, the lawyer concluded that the number 2003 had been both registered as a 
trademark and a patented invention. Therefore, it is very likely that 
Humanity will be able to jump directly to the next non-registered digit, 
which is the year 200440033, called 'year of innovation', which also happens 
to be the price asked by the lawyer for bringing us into the future.

Jean-Michel




Re: [HACKERS] rebuilding rpm for RH9 error

2003-12-02 Thread Gaetano Mendola
Joshua D. Drake wrote:

>> I saw that configure is done with:
>> --with-krb5=/usr.
>
> make sure that you have krb5-devel installed.

Unfortunately yes:

# rpm -qa | grep krb5
krb5-devel-1.2.7-14
krb5-server-1.2.7-14
krb5-server-1.2.7-10
pam_krb5-1.60-1
krb5-workstation-1.2.7-10
krb5-libs-1.2.7-10
krb5-devel-1.2.7-10
krb5-libs-1.2.7-14
krb5-workstation-1.2.7-14


Regards
Gaetano Mendola




[HACKERS] porting from 7.2 -> 7.4

2003-12-02 Thread bpalmer
I'm trying to port some replication code from 7.2 -> 7.4 and am running 
into a roadblock.

In the file:
  /src/backend/tcop/postgres.c

My diff was for:

void
pg_exec_query(char *query_string)
{

   pg_exec_query_string(query_string, whereToSendOutput, 
 QueryContext);


}


But the pg_exec_query_string function seems to be gone.  Can someone 
give me some insight into what it has been replaced by?

Thanks,
- Brandon



 c: 917-697-8665    h: 201-435-6226
 b. palmer,  [EMAIL PROTECTED]   pgp:crimelabs.net/bpalmer.pgp5



Re: [HACKERS] rebuilding rpm for RH9 error

2003-12-02 Thread Lamar Owen
On Monday 01 December 2003 08:53 pm, Gaetano Mendola wrote:
 Hi all,
 I'm still experiencing problems trying to
 rebuild the rpm from the file:
 postgresql-7.4-0.5PGDG.src.rpm

 I saw that configure is done with:
 --with-krb5=/usr.

You need to specify that you are building for Red Hat 9 on the command line.   
The command line argument would be:
rpmbuild --define 'build89 1' --rebuild

That command line passes --with-krb5=/usr/kerberos to configure (which 
unfortunately will have to change for next cycle, but that's a different 
problem).  Red Hat 9 and earlier put the kerberos stuff under /usr/kerberos; 
Fedora Core 1 puts it under /usr.  My source RPM is being developed currently 
on Fedora Core, which is what I run on my laptop.

 I also tried to install the already-built RPM but I get:
 file /usr/include/sqltypes.h from install of
 postgresql-devel-7.4-0.5PGDG conflicts with file from package
 unixODBC-devel-2.2.3-6

There is a namespace collision between the unixODBC includes and the 
PostgreSQL includes.  It is being looked at; one or both will have to move 
the include to somewhere else.  So, at the moment, you can't have 
unixODBC-devel and postgresql-devel installed at the same time.  I am 
investigating the best way of correcting this without breaking too many 
things.
-- 
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
(828)862-5554
www.pari.edu




Re: [HACKERS] [pgsql-hackers-win32] What packages from Mingw and MSys are needed?

2003-12-02 Thread Andrew Dunstan
Tony and Bryn Reina wrote:

> 1. Which parts of MSYS and Mingw are needed for building a Win32
> version of PostgreSQL?
>
> There are several packages listed on the Mingw website:
> MinGW-3.1.0-1.exe
> mingw-utils-0.2
> mingw-runtime-3.2
> msys-1.0.9.exe
> msysDTK-1.0.1.exe
> binutils
> gcc
> win32api-2.4
> mingw32-make
>
> 2. Do I have to build OpenSSL from Mingw to compile --with-openssl? Or
> does Mingw already have it as a package?
>
> 3. Does the current PostgreSQL 7.4 compile with Mingw?
>
> Thanks.
> -Tony Reina
 
see here for some info:

http://archives.postgresql.org/pgsql-hackers/2003-09/msg00938.php

I didn't try compiling with openssl.

I believe we are still waiting for signalling and process management 
code to complete the Win32 port.

cheers

andrew



[HACKERS] request for feedback - read-only GUC variables, pg_settings changes

2003-12-02 Thread Joe Conway
We (mostly Bruce, Tom, Peter, and I) have been having a discussion on 
the PATCHES list regarding some new functionality related to read-only 
GUC variables. The net result is pasted at the bottom of this post. Here 
is a link to the discussion:
http://archives.postgresql.org/pgsql-patches/2003-11/msg00363.php

In short, 5 new read-only GUC variables are created allowing the value 
of certain compile-time settings to be queried. Also the pg_settings 
system view has been expanded to include category, short_desc, and 
extra_desc (corresponding to group, short_desc, and long_desc in the 
generic guc structure). The 5 GUC variables are:

block_size - int
  Shows size of a disk block
integer_datetimes - bool
  Datetimes are integer based
max_function_args - int
  Shows the maximum number of function arguments
max_identifier_length - int
  Shows the maximum identifier length
max_index_keys - int
  Shows the maximum number of index keys
The main open question at this point is the name for the block_size 
variable. Peter favors block_size, Bruce favors page_size, Tom 
hasn't taken a position on that specific issue. Does anyone have an 
opinion on the variable name, or any general comments before I commit this?

Thanks,

Joe

regression=# select * from pg_settings where category like 'Compile%';
-[ RECORD 1 ]--
name   | block_size
setting| 8192
category   | Compiled-in Options
short_desc | Shows size of a disk block
extra_desc |
context| internal
vartype| integer
source | default
min_val| 8192
max_val| 8192
-[ RECORD 2 ]--
name   | integer_datetimes
setting| on
category   | Compiled-in Options
short_desc | Datetimes are integer based
extra_desc |
context| internal
vartype| bool
source | default
min_val|
max_val|
-[ RECORD 3 ]--
name   | max_function_args
setting| 32
category   | Compiled-in Options
short_desc | Shows the maximum number of function arguments
extra_desc |
context| internal
vartype| integer
source | default
min_val| 32
max_val| 32
-[ RECORD 4 ]--
name   | max_identifier_length
setting| 63
category   | Compiled-in Options
short_desc | Shows the maximum identifier length
extra_desc |
context| internal
vartype| integer
source | default
min_val| 63
max_val| 63
-[ RECORD 5 ]--
name   | max_index_keys
setting| 32
category   | Compiled-in Options
short_desc | Shows the maximum number of index keys
extra_desc |
context| internal
vartype| integer
source | default
min_val| 32
max_val| 32
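
The values can also be read one at a time with SHOW; a quick sketch from
the same build (exact output formatting approximate):

regression=# show max_index_keys;
 max_index_keys
----------------
 32
(1 row)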




Re: [HACKERS] rebuilding rpm for RH9 error

2003-12-02 Thread Gaetano Mendola
Lamar Owen wrote:

> On Monday 01 December 2003 08:53 pm, Gaetano Mendola wrote:
>
>> Hi all,
>> I'm still experiencing problems trying to
>> rebuild the rpm from the file:
>> postgresql-7.4-0.5PGDG.src.rpm
>>
>> I saw that configure is done with:
>> --with-krb5=/usr.
>
> You need to specify that you are building for Red Hat 9 on the command line.
> The command line argument would be:
> rpmbuild --define 'build89 1' --rebuild

I'll try.

>> I also tried to install the already-built RPM but I get:
>> file /usr/include/sqltypes.h from install of
>> postgresql-devel-7.4-0.5PGDG conflicts with file from package
>> unixODBC-devel-2.2.3-6
>
> There is a namespace collision between the unixODBC includes and the
> PostgreSQL includes.  It is being looked at; one or both will have to move
> the include to somewhere else.  So, at the moment, you can't have
> unixODBC-devel and postgresql-devel installed at the same time.  I am
> investigating the best way of correcting this without breaking too many
> things.


After uninstalling the package unixODBC-devel-2.2.3-6 all was fine.

Regards
Gaetano Mendola
PS: 7.4 will be remembered as the longest release to develop, and the
one with the longest wait for the RH RPMs.
Good job anyway.








Re: [HACKERS] request for feedback - read-only GUC variables, pg_settings changes

2003-12-02 Thread Marc G. Fournier
On Tue, 2 Dec 2003, Joe Conway wrote:

 We (mostly Bruce, Tom, Peter, and I) have been having a discussion on
 the PATCHES list regarding some new functionality related to read-only
 GUC variables. The net result is pasted at the bottom of this post. Here
 is a link to the discussion:
 http://archives.postgresql.org/pgsql-patches/2003-11/msg00363.php

 In short, 5 new read-only GUC variables are created allowing the value
 of certain compile-time settings to be queried. Also the pg_settings
 system view has been expanded to include category, short_desc, and
 extra_desc (corresponding to group, short_desc, and long_desc in the
 generic guc structure). The 5 GUC variables are:

 block_size - int
Shows size of a disk block
 integer_datetimes - bool
Datetimes are integer based
 max_function_args - int
Shows the maximum number of function arguments
 max_identifier_length - int
Shows the maximum identifier length
 max_index_keys - int
Shows the maximum number of index keys

 The main open question at this point is the name for the block_size
 variable. Peter favors block_size, Bruce favors page_size, Tom
 hasn't taken a position on that specific issue. Does anyone have an
 opinion on the variable name, or any general comments before I commit this?

PAGE_SIZE generally refers to memory allocations, no?

I'd go with block_size ...



[HACKERS] fork/exec problem: DynaHashCxt

2003-12-02 Thread Claudio Natoli

Hi all,

I'm working on getting BackendFork converted to a fork/exec model, and have
hit the following wall.

All the ShmemInitHash structures are allocated using DynaHashCxt. Clearly,
this context is going to have to be shared across backends in the fork/exec
case, but I can't see a non-trivial way of doing so.

Any ideas? (or have I mis-diagnosed the problem completely)

Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html



[HACKERS] Inside the Regex Engine

2003-12-02 Thread David Fetter
Kind people,

As a perl weenie, I'm used to being able to do things with regexes
like

$text =~ s/(foo|bar|baz)/NO UNIX WEENIES HERE/;
$got_it = $1;

While PL/Perl is great, it's not available everywhere, and I'd like to
be able to grab atoms from a regex match in, say, a SELECT.  Is there
some way to get access to them?

TIA for any pointers on this :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100    cell: +1 415 235 3778

Civil government, so far as it is instituted for the security of
property, is in reality instituted for the defense of the rich against
the poor, or of those who have some property against those who have
none at all.
Adam Smith



Re: [HACKERS] ALTER SEQUENCE enchancement

2003-12-02 Thread greg

 
 
 The above is an operation that would not help me a lot, but a way of
 performing currval() without knowing the sequence name would be good.
 
You could do this with a function. Here is a quick one in SQL:
 
CREATE OR REPLACE FUNCTION seqname(NAME,NAME) RETURNS TEXT AS '
SELECT TRIM(BOTH \'\'\'\' FROM SUBSTRING(d.adsrc, \'\'\'.*\'\'\'))
FROM pg_catalog.pg_attrdef d, pg_catalog.pg_attribute a, pg_class c
WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef
AND c.relname = $1
AND c.oid = a.attrelid
AND a.attname = $2;
' LANGUAGE SQL;
 
Usage: SELECT nextval(seqname('tablename','colname'));
 
You might also want to simply keep a table of the sequence names
if you plan on doing this a lot, or make sure you name them in
a consistent and unsurprising manner.
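
For instance, the implicit sequence behind a serial column already has a
predictable name (tablename_colname_seq, as long as it does not get
truncated to fit the identifier limit), so a minimal sketch with made-up
names would be:

CREATE TABLE mytable (id SERIAL PRIMARY KEY, val TEXT);
INSERT INTO mytable (val) VALUES ('x');
SELECT currval('mytable_id_seq');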
 
- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200312022159
 






Re: [HACKERS] Inside the Regex Engine

2003-12-02 Thread Alvaro Herrera
On Tue, Dec 02, 2003 at 07:52:57PM -0600, David Fetter wrote:

 As a perl weenie, I'm used to being able to do things with regexes
 like
 
 $text =~ s/(foo|bar|baz)/NO UNIX WEENIES HERE/;
 $got_it = $1;
 
 While PL/Perl is great, it's not available everywhere, and I'd like to
 be able to grab atoms from a regex match in, say, a SELECT.  Is there
 some way to get access to them?

Huh, the best I am able to do is

alvh=> select substring('bazfoo fubar', 'fu(foo|bar)');
 substring
---
 bar
(1 fila)

The choice of the name for the function seems weird to me.  Also note
that you are able to use only one set of parentheses (i.e. the first
group gets picked up; the rest are ignored).
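
For instance (same sort of made-up data; only the first parenthesized
group is returned):

alvh=> select substring('bazfoo fubar', '(baz).*(bar)');
 substring
-----------
 baz
(1 fila)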

If you need to be able to extract further things, there's a tip in the
documentation that reads:

If you have pattern matching needs that go beyond this, consider
writing a user-defined function in Perl or Tcl.

It does not appear to be that difficult to add the functionality needed
to extract arbitrary atoms, but there's some hacking involved.

-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
Cuando no hay humildad las personas se degradan (A. Christie)



Re: [HACKERS] Inside the Regex Engine

2003-12-02 Thread Tom Lane
[EMAIL PROTECTED] (David Fetter) writes:
 While PL/Perl is great, it's not available everywhere, and I'd like to
 be able to grab atoms from a regex match in, say, a SELECT.  Is there
 some way to get access to them?

There's a three-parameter variant of substring() that allows extraction
of a portion of a regex match --- unfortunately it uses SQL99's
brain-dead notion of regex, which will not satisfy any Perl weenie :-(
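
For reference, a bare-bones sketch of that form (the pattern and the '#'
escape character are made up for the example; the #" pairs mark the part
to return):

select substring('foobar' from '%#"o_b#"%' for '#');

which should give back 'oob'.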

I think it'd be worth our while to define some comparable functionality
that depends only on the POSIX regex engine ...

regards, tom lane



Re: [HACKERS] fork/exec problem: DynaHashCxt

2003-12-02 Thread Tom Lane
Claudio Natoli [EMAIL PROTECTED] writes:
 All the ShmemInitHash structures are allocated using DynaHashCxt.

I'm not sure if you're confusing backend-local hashes with shared
hashes, or hash control headers with the actual shared data.  But
the above is a false statement.  DynaHashCxt is not shared.

Shared hashes are a bit tricky because there is a backend-local
structure that has a pointer into the shared memory --- is that
what's confusing you?

Keep in mind that this code did work in a fork/exec context not
so many moons ago.  If you think you see a showstopper, it's a
relatively recent change and can be undone.

regards, tom lane



Re: [HACKERS] rebuilding rpm for RH9 error

2003-12-02 Thread Lamar Owen
On Tuesday 02 December 2003 06:29 pm, Gaetano Mendola wrote:
 Lamar Owen wrote:
  You need to specify that you are building for Red Hat 9 on the command

 I'll try.

Ok.

 PS: 7.4 will be remembered as the longest release to develop, and the
 one with the longest wait for the RH RPMs.

Not quite.  The reason I got involved in this was the lag in getting the 6.5 
RPMs way back when, as well as 6.4 and prior.  Before 6.5 it was possible to 
have six months between the release of a tarball and the release of an RPM, 
usually by Red Hat itself.

I think I was slower with 7.1.x, but that was due to a different reason.

 Good job anyway.

Well, thanks, anyway :-)  No, seriously, I must be doing ok if the longest 
time to RPM release for the final release was just a week or so.  That's not 
too bad, really.  But I do try to get closer to the release, and not farther.  
And I will try to get 7.4.1 and 7.3.5 RPMs out sooner.
-- 
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
(828)862-5554
www.pari.edu




[HACKERS] 7.3.5 bundled ...

2003-12-02 Thread Marc G. Fournier

Please take a quick peek at it ...

ftp://ftp.postgresql.org/pub/source/v7.3.5



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [HACKERS] fork/exec problem: DynaHashCxt

2003-12-02 Thread Claudio Natoli

 I'm not sure if you're confusing backend-local hashes with shared
 hashes, or hash control headers with the actual shared data.  But
 the above is a false statement.  DynaHashCxt is not shared.

No, wasn't confused over that. Was confused over something else though :-)

 
 Shared hashes are a bit tricky because there is a backend-local
 structure that has a pointer into the shared memory --- is that
 what's confusing you?

That's pretty much right on the mark, and the heart of the problem I
suspect.

So this means we'll have to pull relHash out of the shared FreeSpaceMap
structure and make it a variable in its own right? [Same goes for lockHash
and proclockHash in the LockMethod structure referenced by LockTable (lock
method table)]


 Keep in mind that this code did work in a fork/exec context not
 so many moons ago.  If you think you see a showstopper, it's a
 relatively recent change and can be undone.

Keeping this firmly in mind, trust me.

Thanks,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html



Re: [HACKERS] fork/exec problem: DynaHashCxt

2003-12-02 Thread Tom Lane
Claudio Natoli [EMAIL PROTECTED] writes:
 So this means we'll have to pull relHash out of the shared FreeSpaceMap
 structure and make it a variable in its own right?

Hm.  The freespace.c code is relatively new and might not be jumping
through all of the hoops it should be jumping through.  My recollection
of the older code is that the logic was like create or attach to shared
memory structure named 'foo', if not create then initialize the shared
structure.  I'll take the blame if freespace.c doesn't do this right...

 [Same goes for lockHash and proclockHash in the LockMethod structure
 referenced by LockTable (lock method table)]

The lock code *should* be doing this right, unless I've totally
forgotten the modification history ...

regards, tom lane



Re: [HACKERS] fork/exec problem: DynaHashCxt

2003-12-02 Thread Claudio Natoli

 Claudio Natoli [EMAIL PROTECTED] writes:
  So this means we'll have to pull relHash out of the shared FreeSpaceMap
  structure and make it a variable in its own right?

 Hm.  The freespace.c code is relatively new and might not be jumping
 through all of the hoops it should be jumping through.  My recollection
 of the older code is that the logic was like create or attach to shared
 memory structure named 'foo', if not create then initialize the shared
 structure.  I'll take the blame if freespace.c doesn't do this right...

The way I see it:
  * The postmaster calls InitFreeSpaceMap
  * When a fork/exec'd backend starts, it'll be able to access the
FreeSpaceMap struct via shared memory
  * But this very same struct contains a pointer (relHash) to a struct
which, in the fork/exec case, is valid only in the postmaster (i.e. the hashp
HTAB struct created in the postmaster's DynaHashCxt context and returned by
hash_create/ShmemInitHash)
  [of course, this HTAB struct's hctl value will point to shared memory, but
it is a moot point by this stage]


  [Same goes for lockHash and proclockHash in the LockMethod structure
  referenced by LockTable (lock method table)]
 
 The lock code *should* be doing this right, unless I've totally
 forgotten the modification history ...

AFAICS, the reasoning above applies to these hashes. Of course, I'm more
than ready to be corrected on this reasoning...

Cheers,
Claudio

--- 
Certain disclaimers and policies apply to all email sent from Memetrics.
For the full text of these disclaimers and policies see 
http://www.memetrics.com/emailpolicy.html



Re: [HACKERS] *sigh*

2003-12-02 Thread Christopher Browne
Martha Stewart called it a Good Thing when Randolf Richardson [EMAIL PROTECTED] 
wrote:
 The count(*) information can be versioned too, am I wrong?  I'm able
 to create a trigger that stores the count(*) information in a special
 table, so why not implement the same thing as something built in?
 
 Then every insert or delete would have to lock that count. Nobody else
 would be able to insert or delete any records until you either commit or
 roll back. 
 
 That would lead to much lower concurrency, much more contention for
 locks, and tons of deadlocks.

   What about queueing all these updates for a separate
 low-priority thread?  The thread would be the only one with access
 to update this field.

If updates are queued, then how do you get to use them if the
update thread isn't running because it's not high enough in
priority?

I am not being facetious.

The one way that could be expected to be successful would be to have a
trigger that, upon seeing an insert of 5 rows into table ABC, puts
something like the following into table count_detail:
  insert into count_detail (table, value) values ('ABC', 5);

You then replace
  select count(*) from abc;

with
  select sum(value) from count_detail where table = 'ABC';
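
A rough sketch of wiring that up (assuming plpgsql is installed; "table"
is a reserved word, so the sketch names the column tablename, and it adds
1/-1 per row, which sums to the same thing):

create table count_detail (tablename text, value bigint);

create function count_detail_trig() returns trigger as '
begin
    if TG_OP = ''INSERT'' then
        insert into count_detail (tablename, value) values (TG_RELNAME, 1);
    else
        insert into count_detail (tablename, value) values (TG_RELNAME, -1);
    end if;
    return NULL;
end;
' language plpgsql;

create trigger abc_count after insert or delete on abc
    for each row execute procedure count_detail_trig();

-- relation names fold to lower case, so the rewritten query becomes
select sum(value) from count_detail where tablename = 'abc';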

The low priority thread would be a process that does something akin
to vacuuming, where it would replace the contents of the table every
so often...

 for curr_table in (select table from count_detail) do
   new_total = select sum(value) from count_detail 
  where table = curr_table;
   delete from count_detail where table = curr_table;
   insert into count_detail (table, value) values (curr_table,
   new_total);
 done

The point of this being to try to keep the number of rows to 1 per
table.

Note that this gets _real_ expensive for tables that see lots of
single row inserts and deletes.  There isn't a cheaper way that will
actually account for the true numbers of records that have been
committed.  

For a small table, it will be cheaper to walk through and calculate
count(*) directly from the tuples themselves.

The situation where it may be worthwhile to do this is a table which
is rather large (thus count(*) is expensive) where there is some
special reason to truly care how many rows there are in the table.
For _most_ tables, it seems unlikely that this will be true.  For
_most_ tables, it is absolutely not worth the cost of tracking the
information.
-- 
(format nil [EMAIL PROTECTED] cbbrowne acm.org)
http://cbbrowne.com/info/spreadsheets.html
Predestination was doomed from the start. 



Re: [PATCHES] [HACKERS] Index creation takes for ever

2003-12-02 Thread Manfred Koizar
On Mon, 01 Dec 2003 13:32:10 -0500, Tom Lane [EMAIL PROTECTED] wrote:
> Manfred Koizar [EMAIL PROTECTED] writes:
>> comparetup_index() compares two IndexTuples.  The structure
>> IndexTupleData consists basically of not much more than an ItemPointer,
>> and the patch is not much more than adding a comparison of two
>> ItemPointers.  So how does the patch introduce a new low level
>> implementation dependency?
>
> Because it sorts on tuple position, which is certainly about as low
> level as you can get.

The patch affects only the sort during index creation.  Mapping key
values to tuple positions is the sole purpose of an index.  The notion
that an index should not care for tuple positions looks a bit strange to
me.

> More to the point, though, no evidence has been
> provided that this is a good idea.

The test script I posted with the patch shows that the patch produces
more efficient b-tree indices when there are lots of duplicates.

Servus
 Manfred
