Re: [HACKERS] psql \d+ and oid display

2014-03-30 Thread Bruce Momjian
On Sat, Mar 29, 2014 at 06:33:39PM -0400, Bruce Momjian wrote:
 On Sat, Mar 29, 2014 at 06:16:19PM -0400, Tom Lane wrote:
  Bruce Momjian br...@momjian.us writes:
   Are you saying most people like Has OIDs: yes, or the idea of just
   displaying _a_ line if there are OIDs?  Based on default_with_oids,
   perhaps we should display With OIDs.
  
   I agree it is not unanimous.  I am curious how large the majority has to
   be to change a psql display value.
  
  What I actually suggested was not *changing* the line when it's to be
  displayed, but suppressing it in the now-standard case where there's no
  OIDs.
  
  Personally I find the argument that backwards compatibility must be
  preserved to be pretty bogus; we have no hesitation in changing the
  output of \d anytime we add a new feature.  So I don't think there's
  a good compatibility reason why the line has to be spelled exactly
  Has OIDs: yes --- but there is a consistency reason, which is that
  everything else we print in this part of the \d output is of the form
  label: info.
 
 Ah, now I understand it --- you can argue that the new Replica
 Identity follows the same pattern, showing only for non-defaults (or at
 least it will once I commit the pending patch to do that).

OK, I have now applied the conditional display of Replica Identity
patch (which is how it was originally coded anyway).  The attached patch
matches Tom's suggestion of displaying the same OID text, just
conditionally.

Seeing that psql \d+ will already have a conditional display line in PG 9.4,
making the OID line conditional as well seems to make sense.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
new file mode 100644
index d1447fe..22b643f
*** a/src/bin/psql/describe.c
--- b/src/bin/psql/describe.c
*** describeOneTableDetails(const char *sche
*** 2365,2378 
  		}
  
  		/* OIDs, if verbose and not a materialized view */
! 		if (verbose && tableinfo.relkind != 'm')
! 		{
! 			const char *s = _("Has OIDs");
! 
! 			printfPQExpBuffer(&buf, "%s: %s", s,
! 							  (tableinfo.hasoids ? _("yes") : _("no")));
! 			printTableAddFooter(&cont, buf.data);
! 		}
  
  		/* Tablespace info */
  		add_tablespace_footer(cont, tableinfo.relkind, tableinfo.tablespace,
--- 2365,2372 
  		}
  
  		/* OIDs, if verbose and not a materialized view */
! 		if (verbose && tableinfo.relkind != 'm' && tableinfo.hasoids)
! 			printTableAddFooter(&cont, _("Has OIDs: yes"));
  
  		/* Tablespace info */
  		add_tablespace_footer(cont, tableinfo.relkind, tableinfo.tablespace,

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] MultiXactId error after upgrade to 9.3.4

2014-03-30 Thread Stephen Frost
All,

* Stephen Frost (sfr...@snowman.net) wrote:
   Looks like we might not be entirely out of the woods yet regarding
   MultiXactId's.  After doing an upgrade from 9.2.6 to 9.3.4, we saw the
   following:
 
   ERROR:  MultiXactId 6849409 has not been created yet -- apparent wraparound

While trying to get the production system back in order, I was able to
simply do:

select * from table for update;

Which happily updated the xmax for all of the rows- evidently without
any care that the MultiXactId in one of the tuples was considered
invalid (by at least some parts of the code).

I have the pre-upgrade database and can upgrade/rollback/etc that pretty
easily.  Note that the table contents weren't changed during the
upgrade, of course, and so the 9.2.6 instance has HEAP_XMAX_IS_MULTI set
while t_xmax is 6849409 for the tuple in question- even though
pg_controldata reports NextMultiXactId as 1601462 (and it seems very
unlikely that there's been a wraparound on that in this database..).

Perhaps something screwed up xmax/HEAP_XMAX_IS_MULTI under 9.2 and the
9.3 instance now detects that something is wrong?  Or is this a case
which was previously allowed and it's just in 9.3 that we don't like it?
Hard for me to see why that would be the case, but this really feels
like HEAP_XMAX_IS_MULTI was incorrectly set on the old cluster and the
xmax in the table was actually a regular xid..  That would have come
from 9.2 though.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] MultiXactId error after upgrade to 9.3.4

2014-03-30 Thread Stephen Frost
* Stephen Frost (sfr...@snowman.net) wrote:
 I have the pre-upgrade database and can upgrade/rollback/etc that pretty
 easily.  Note that the table contents weren't changed during the
 upgrade, of course, and so the 9.2.6 instance has HEAP_XMAX_IS_MULTI set
 while t_xmax is 6849409 for the tuple in question- even though
 pg_controldata reports NextMultiXactId as 1601462 (and it seems very
 unlikely that there's been a wraparound on that in this database..).

Further review leads me to notice that both HEAP_XMAX_IS_MULTI and
HEAP_XMAX_INVALID are set:

t_infomask  | 6528

6528 decimal -> 0x1980

0001 1001 1000 0000

Which gives us:

0000 0000 1000 0000 - HEAP_XMAX_LOCK_ONLY
0000 0001 0000 0000 - HEAP_XMIN_COMMITTED
0000 1000 0000 0000 - HEAP_XMAX_INVALID
0001 0000 0000 0000 - HEAP_XMAX_IS_MULTI

Which shows that both HEAP_XMAX_INVALID and HEAP_XMAX_IS_MULTI are set.
Of some interest is that HEAP_XMAX_LOCK_ONLY is also set..
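
(For reference, a minimal standalone C sketch that re-derives the decomposition
above.  The flag values are the ones I believe src/include/access/htup_details.h
defines; this is only an illustration, not code from the server.)

#include <stdio.h>
#include <stdint.h>

/* Flag values as (I believe) defined in src/include/access/htup_details.h */
#define HEAP_XMAX_LOCK_ONLY  0x0080
#define HEAP_XMIN_COMMITTED  0x0100
#define HEAP_XMAX_INVALID    0x0800
#define HEAP_XMAX_IS_MULTI   0x1000

int
main(void)
{
	uint16_t	infomask = 6528;	/* the t_infomask reported above (0x1980) */

	printf("HEAP_XMAX_LOCK_ONLY  %s\n", (infomask & HEAP_XMAX_LOCK_ONLY) ? "set" : "clear");
	printf("HEAP_XMIN_COMMITTED  %s\n", (infomask & HEAP_XMIN_COMMITTED) ? "set" : "clear");
	printf("HEAP_XMAX_INVALID    %s\n", (infomask & HEAP_XMAX_INVALID) ? "set" : "clear");
	printf("HEAP_XMAX_IS_MULTI   %s\n", (infomask & HEAP_XMAX_IS_MULTI) ? "set" : "clear");
	return 0;
}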

 Perhaps something screwed up xmax/HEAP_XMAX_IS_MULTI under 9.2 and the
 9.3 instance now detects that something is wrong?  Or is this a case
 which was previously allowed and it's just in 9.3 that we don't like it?

The 'improve concurrency of FK locking' patch included a change to
UpdateXmaxHintBits():

- * [...] Hence callers should look
- * only at XMAX_INVALID.

...

+ * Hence callers should look only at XMAX_INVALID.
+ *
+ * Note this is not allowed for tuples whose xmax is a multixact.

[...]

+   Assert(!(tuple->t_infomask & HEAP_XMAX_IS_MULTI));

What isn't clear to me is if this restriction was supposed to always be
there and something pre-9.3 screwed this up, or if this is a *new*
restriction on what's allowed, in which case it's an on-disk format
change that needs to be accounted for.

One other thing to mention is that this system was originally a 9.0
system and the last update to this tuple that we believe happened was
when it was on 9.0, prior to the 9.2 upgrade (which happened about a
year ago), so it's possible the issue is from the 9.0 era.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Securing make check (CVE-2014-0067)

2014-03-30 Thread Christoph Berg
Re: Noah Misch 2014-03-30 20140330014531.ge170...@tornado.leadboat.com
 On Sat, Mar 29, 2014 at 10:04:55AM +0100, Christoph Berg wrote:
  Fwiw, to relocate the pg_regress socket dir, there is already the
  possibility to run make check EXTRA_REGRESS_OPTS=--host=/tmp. (With
  the pending fix I sent yesterday to extend this to contrib/test_decoding.)
 
 That doesn't work for make check, because the postmaster ends up with
 listen_addresses=/tmp.

Oh, right. There's this other patch which apparently works so well
that I already forgot it's there:

Enable pg_regress --host=/path/to/socket:
https://alioth.debian.org/scm/loggerhead/pkg-postgresql/postgresql-9.4/trunk/view/head:/debian/patches/60-pg_regress_socketdir.patch

This, along with 61-extra_regress_opts and 64-pg_upgrade-sockdir (at
the same location, both also recently posted here) should be safe for
general use, i.e. inclusion in git. (I didn't check how much this
overlaps with what Noah tried, I'm just mentioning the patches here
because they work for Debian.)

There are two other patches: 62-pg_upgrade-test-in-tmp hardcodes /tmp
for the pg_upgrade test which should obviously be done smarter, and
63-pg_upgrade-test-bindir which forwards PSQLDIR through
contrib/pg_upgrade/test.sh. The latter is probably also safe for
general use, but I'd be more confident if someone rechecked that.

  We've been putting a small patch into pg_upgrade in Debian to work
  around too long socket paths generated by pg_upgrade during running
  the testsuite (and effectively on end user systems, but I don't think
  anyone is using such long paths there).
  
  A similar code bit could be put into pg_regress itself.
 
 Thanks for reminding me about Debian's troubles here.  Once the dust settles
 on pg_regress, it will probably make sense to do likewise to pg_upgrade.

Nod, it'd be nice if we could get rid of some patches in Debian.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] GSoC 2014 proposal

2014-03-30 Thread Иван Парфилов
Hello, hackers! This is my GSoC proposal.

*Short description:*

Cluster analysis or clustering is the task of grouping a set of objects in
such a way that objects in the same group (called a cluster) are more
similar (in some sense or another) to each other than to those in other
groups (clusters). It is a main task of exploratory data mining, and a
common technique for statistical data analysis, used in many fields,
including machine learning, pattern recognition, image analysis,
information retrieval, and bioinformatics. The purpose of this project is
to add support for the BIRCH (balanced iterative reducing and clustering
using hierarchies) algorithm [1] for data type cube.

*Benefits to the PostgreSQL Community*

Support for the BIRCH algorithm for data type cube would be useful for many
PostgreSQL applications (for example, to solve data clustering problems for
high-dimensional datasets and for large datasets).

*Quantifiable results*

Adding support for the BIRCH algorithm for data type cube.

*Project Details*
BIRCH (balanced iterative reducing and clustering using hierarchies) is an
unsupervised data mining algorithm used to perform hierarchical clustering
over particularly large data-sets.

The BIRCH algorithm (Balanced Iterative Reducing and Clustering
Hierarchies) of Zhang [1] was developed to handle massive datasets that are
too large to be contained in the main memory (RAM). To minimize I/O costs,
every datum is read once and only once. BIRCH transforms the data set into
compact, locally similar subclusters, each with summary statistics attached
(called clustering features). Then, instead of using the full data set,
these summary statistics can be used. This approach is most advantageous in
two situations: when the data cannot be loaded into memory due to its size;
and/or when some form of combinatorial optimization is required and the
size of the solution space makes finding global maximums/minimums difficult.

Key properties of the BIRCH algorithm:

a single scan of the dataset is enough;

I/O cost is minimized by organizing the data in an in-memory, height-balanced
tree;

each clustering decision is made without scanning all of the points or
clusters.

The implementation of this algorithm would be for data type cube and based
on GiST.

The key concept of the BIRCH algorithm is the clustering feature. Given a set
of N d-dimensional data points, the clustering feature CF of the set is
defined as the triple CF = (N, LS, SS), where LS is the linear sum and SS is
the square sum of the data points. Clustering features are organized in a CF
tree, which is a height-balanced tree with two parameters: branching factor B
and threshold T.
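
(To make the CF triple concrete, here is a minimal standalone C sketch; it is
an illustration only, not the proposal's cube/GiST code. It assumes 2-D points
and treats SS as the scalar sum of squared norms.)

#include <math.h>
#include <stdio.h>

#define DIM 2					/* dimensionality, fixed here for brevity */

typedef struct CF
{
	long	n;					/* number of points summarized */
	double	ls[DIM];			/* LS: linear sum of the points */
	double	ss;					/* SS: sum of squared norms of the points */
} CF;

/* Absorb a single point into a clustering feature. */
static void
cf_add_point(CF *cf, const double *p)
{
	int		i;

	cf->n++;
	for (i = 0; i < DIM; i++)
	{
		cf->ls[i] += p[i];
		cf->ss += p[i] * p[i];
	}
}

/* CF additivity: merging two subclusters just adds their features. */
static CF
cf_merge(const CF *a, const CF *b)
{
	CF		r = *a;
	int		i;

	r.n += b->n;
	r.ss += b->ss;
	for (i = 0; i < DIM; i++)
		r.ls[i] += b->ls[i];
	return r;
}

/* Subcluster radius, the quantity compared against threshold T on insert. */
static double
cf_radius(const CF *cf)
{
	double	centroid_sq = 0.0;
	double	var;
	int		i;

	for (i = 0; i < DIM; i++)
	{
		double	c = cf->ls[i] / cf->n;

		centroid_sq += c * c;
	}
	var = cf->ss / cf->n - centroid_sq;
	return sqrt(var > 0.0 ? var : 0.0);
}

int
main(void)
{
	double	pts[4][DIM] = {{1.2, 0.4}, {0.5, -0.2}, {0.6, 1.0}, {1.0, 0.6}};
	CF		a = {0};
	CF		b = {0};
	CF		all;
	int		i;

	for (i = 0; i < 2; i++)
		cf_add_point(&a, pts[i]);
	for (i = 2; i < 4; i++)
		cf_add_point(&b, pts[i]);
	all = cf_merge(&a, &b);
	printf("N=%ld radius=%f\n", all.n, cf_radius(&all));
	return 0;
}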

Because the structure of the CF tree is similar to a B+-tree, we can use GiST
for the implementation [2].
GiST is a balanced tree structure like a B-tree, containing (key, pointer)
pairs. A GiST key is a member of a user-defined class, and represents some
property that is true of all data items reachable from the pointer associated
with the key. GiST makes it possible to create custom data types with indexed
access methods and an extensible set of queries.

There are seven methods that an index operator class for GiST must provide,
and an eighth that is optional:

-consistent

-union

-compress

-decompress

-penalty

-picksplit

-equal

-distance (optional).

We need to implement these to create the GiST-based CF tree used by the BIRCH
algorithm; a rough sketch of one such support function follows.
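
(A heavily simplified sketch of what the "penalty" support function might look
like for a CF key. Only the GISTENTRY calling convention is taken from the
GiST documentation; the fixed-length CF layout, 2-D assumption, and the
radius-growth metric are illustrative assumptions, not the proposal's actual
design.)

#include "postgres.h"

#include <math.h>

#include "access/gist.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* Hypothetical fixed-length clustering-feature key (2-D for brevity). */
typedef struct CF
{
	int32	n;
	float8	ls[2];
	float8	ss;
} CF;

static double
cf_radius(const CF *cf)
{
	double	centroid_sq = 0.0;
	double	var;
	int		i;

	for (i = 0; i < 2; i++)
	{
		double	c = cf->ls[i] / cf->n;

		centroid_sq += c * c;
	}
	var = cf->ss / cf->n - centroid_sq;
	return sqrt(Max(var, 0.0));
}

PG_FUNCTION_INFO_V1(gist_cf_penalty);

/* GiST "penalty" method: cost of inserting newentry into origentry's subtree. */
Datum
gist_cf_penalty(PG_FUNCTION_ARGS)
{
	GISTENTRY  *origentry = (GISTENTRY *) PG_GETARG_POINTER(0);
	GISTENTRY  *newentry = (GISTENTRY *) PG_GETARG_POINTER(1);
	float	   *penalty = (float *) PG_GETARG_POINTER(2);
	CF		   *orig = (CF *) DatumGetPointer(origentry->key);
	CF		   *add = (CF *) DatumGetPointer(newentry->key);
	CF			merged = *orig;
	int			i;

	merged.n += add->n;
	merged.ss += add->ss;
	for (i = 0; i < 2; i++)
		merged.ls[i] += add->ls[i];

	/* Prefer the subtree whose summary radius grows the least. */
	*penalty = (float) (cf_radius(&merged) - cf_radius(orig));

	PG_RETURN_POINTER(penalty);
}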


*Example of usage (approximate):*

create table cube_test (v cube);

insert into cube_test values (cube(array[1.2, 0.4])), (cube(array[0.5, -0.2])),
  (cube(array[0.6, 1.0])), (cube(array[1.0, 0.6]));

create index gist_cf on cube_test using gist(v);

--Prototype(approximate)

--birch(maxNodeEntries, distThreshold, distFunction)

SELECT birch(4.1, 0.2, 1) FROM cube_test;

 cluster | val1 | val2
---------+------+------
       1 |  1.2 |  0.4
       0 |  0.5 | -0.2
       1 |  0.6 |  1.0
       1 |  1.0 |  0.6

Accordingly, in this GSoC project BIRCH algorithm for data type cube would
be implemented.


*Inch-stones*

 1) Solve architecture questions with help of community.

 2) First, approximate implementation (implement distance methods, implement
GiST interface methods, implement BIRCH algorithm for data type cube).

3) Approximate implementation evaluation.

4) Final refactoring, documentation, testing.


*Project Schedule*

 until May 19

 Solve architecture questions with help of community.

 20 May - 27 June

 First, approximate implementation.

 28 June - 11 August

 Approximate implementation evaluation. Fixing bugs and performance testing.

 August 11 - August 18:

 Final refactoring, write tests, improve documentation.

*Completeness Criteria*

 Support of BIRCH algorithm for data type cube is implemented and working.

*Links*

1) http://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf
2) http://www.postgresql.org/docs/9.1/static/gist-implementation.html

 

With best regards, Ivan Parfilov.


[HACKERS] GSoC project suggestion: PIVOT ?

2014-03-30 Thread Craig Ringer
Hi all

The thought just occurred to me that a PIVOT feature might be a
respectable GSoC project.

tablefunc/crosstab works, but it's very clumsy to use, and difficult to
consume the data from. Going by Stack Overflow activity, pivot/crosstab
is second only to upsert when it comes to things that users find hard to
do in Pg.

I haven't investigated what'd be involved in implementing any integrated
form of pivoting, and I'm aware that there are some serious challenges
when it comes to generating a TupleDesc for pivot output. I thought I'd
raise it in case anyone has looked into this and has any comments on
whether this'd be a viable GSoC for an interested student.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] issue log message to suggest VACUUM FULL if a table is nearly empty

2014-03-30 Thread Amit Kapila
On Wed, Mar 26, 2014 at 11:32 AM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Mar 9, 2014 at 5:28 PM, Wang, Jing ji...@fast.au.fujitsu.com wrote:
 Enclosed is the patch to implement the requirement that issue log message to
 suggest VACUUM FULL if a table is nearly empty.

 The requirement comes from the Postgresql TODO list.

 If the relpages of the table > RELPAGES_VALUES_THRESHOLD (default 1000) then
 the table is considered to be large enough.

 If the free_space/total_space > FREESPACE_PERCENTAGE_THRESHOLD (default 0.5)
 then the table is considered to have large numbers of unused rows.

 I'm not sure that we want people to automatically VF a table just
 because it's 2x bloated.  Doesn't it depend on the table size?  And in
 sort of a funny way, too, like, if the table is small, 2x bloat is
 not wasting much disk space, but getting rid of it is probably easy,
 so maybe you should - but if the table is a terabyte, even 50% bloat
 might be pretty intolerable, but whether it makes sense to try to get
 rid of it depends on your access pattern.  I'm not really too sure
 whether it makes sense to try to make an automated recommendation
 here, or maybe only in egregious cases.

I think the main difficulty here is to decide when it is appropriate to
display such a message. As you said, whether 50% bloat is tolerable depends
on the access pattern, so one way could be to raise the bloat limit and table
size threshold to higher values (bloat > 80%, table_size >= 500M) where it
would make sense to recommend VF in all cases; another way could be to use an
autovacuum threshold parameter like autovacuum_vacuum_scale_factor to
calculate the threshold for issuing this message. I think a parameter like
the scale factor makes sense because, to an extent, it indicates how much
dead-space percentage is tolerable for the user.
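
(An illustrative sketch only, not from the posted patch: one way the check
discussed above could be phrased. The names RELPAGES_VALUES_THRESHOLD and
FREESPACE_PERCENTAGE_THRESHOLD come from Jing Wang's description; tying the
tolerable fraction to autovacuum_vacuum_scale_factor, and the factor of 4,
are assumptions made purely for illustration.)

#include <stdbool.h>

#define RELPAGES_VALUES_THRESHOLD   1000    /* ~8MB of 8kB heap pages */

static bool
should_suggest_vacuum_full(long relpages, double free_space,
                           double total_space, double vac_scale_factor)
{
    double  tolerable_fraction;

    /* Small tables are not worth a log message. */
    if (relpages <= RELPAGES_VALUES_THRESHOLD || total_space <= 0)
        return false;

    /*
     * Treat the autovacuum scale factor as a hint for how much dead space
     * the user tolerates; only suggest VACUUM FULL well beyond that.
     */
    tolerable_fraction = vac_scale_factor * 4.0;

    return (free_space / total_space) > tolerable_fraction;
}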

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GSoC project suggestion: PIVOT ?

2014-03-30 Thread Fabrízio de Royes Mello
On Sun, Mar 30, 2014 at 11:30 PM, Craig Ringer cr...@2ndquadrant.com
wrote:

 Hi all

 The thought just occurred to me that a PIVOT feature might be a
 respectable GSoC project.

 tablefunc/crosstab works, but it's very clumsy to use, and difficult to
 consume the data from. Going by Stack Overflow activity, pivot/crosstab
 is second only to upsert when it comes to things that users find hard to
 do in Pg.

 I haven't investigated what'd be involved in implementing any integrated
 form of pivoting, and I'm aware that there are some serious challenges
 when it comes to generating a TupleDesc for pivot output. I thought I'd
 raise it in case anyone has looked into this and has any comments on
 whether this'd be a viable GSoC for an interested student.


It's a nice idea, but the deadline to students send a proposal was 21th
April.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] GSoC project suggestion: PIVOT ?

2014-03-30 Thread Michael Paquier
On Mon, Mar 31, 2014 at 1:36 PM, Fabrízio de Royes Mello
fabriziome...@gmail.com wrote:
 It's a nice idea, but the deadline to students send a proposal was 21th
 April.
21st of March. All the details are here:
http://www.postgresql.org/developer/summerofcode/
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GSoC project suggestion: PIVOT ?

2014-03-30 Thread Craig Ringer
On 03/31/2014 12:49 PM, Michael Paquier wrote:
 On Mon, Mar 31, 2014 at 1:36 PM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
 It's a nice idea, but the deadline to students send a proposal was 21th
 April.
 21st of March. All the details are here:
 http://www.postgresql.org/developer/summerofcode/

Ah, thanks.

There's always next year. Still curious about whether anyone's
investigated it / tried it out.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] GSoC project suggestion: PIVOT ?

2014-03-30 Thread Fabrízio de Royes Mello
On Monday, 31 March 2014, Michael Paquier 
michael.paqu...@gmail.com wrote:

 On Mon, Mar 31, 2014 at 1:36 PM, Fabrízio de Royes Mello
 fabriziome...@gmail.com wrote:
  It's a nice idea, but the deadline to students send a proposal was 21th
  April.
 21st of March. All the details are here:
 http://www.postgresql.org/developer/summerofcode/


Sorry, my mistake. I had April in my mind.

Fabrízio.


-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog sobre TI: http://fabriziomello.blogspot.com
 Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello


Re: [HACKERS] B-Tree support function number 3 (strxfrm() optimization)

2014-03-30 Thread Peter Geoghegan
On Wed, Mar 26, 2014 at 8:08 PM, Peter Geoghegan p...@heroku.com wrote:
 The API I envisage is a new support function 3 that operator class
 authors may optionally provide.

I've built a prototype patch, attached, that extends SortSupport and
tuplesort to support poor man's normalized keys. All the regression
tests pass, so while it's just a proof of concept, it is reasonably
well put together for one. The primary shortcoming of the prototype
(the main reason why I'm calling it a prototype rather than just a
patch) is that it isn't sufficiently generalized (i.e. it only works
for the cases currently covered by SortSupport - not B-Tree index
builds, or B-Tree scanKeys). There is no B-Tree support function
number 3 in the patch. I didn't spend too long on this.

I'm pretty happy with the results for in-memory sorting of text (my
development system uses 'en_US.UTF8', so please assume that any costs
involved are for runs that use that collation). With the dellstore2
sample database [1] restored to my local development instance, the
following example demonstrates just how much the technique can help
performance.

With master:

pg@hamster:~/sort-tests$ cat sort.sql
select * from (select * from customers order by firstname offset 10) d;
pg@hamster:~/sort-tests$ pgbench -f sort.sql -n -T 100
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
duration: 100 s
number of transactions actually processed: 819
latency average: 122.100 ms
tps = 8.186197 (including connections establishing)
tps = 8.186522 (excluding connections establishing)

With patch applied (requires initdb for new text SortSupport pg_proc entry):

pg@hamster:~/sort-tests$ cat sort.sql
select * from (select * from customers order by firstname offset 10) d;
pg@hamster:~/sort-tests$ pgbench -f sort.sql -n -T 100
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
duration: 100 s
number of transactions actually processed: 2525
latency average: 39.604 ms
tps = 25.241723 (including connections establishing)
tps = 25.242447 (excluding connections establishing)

It looks like this technique is very valuable indeed, at least in the
average or best case. We're not just benefiting from following the
advice of the standard, and using strxfrm() for sorting to amortize
the cost of the strxfrm() transformation that strcoll() must do
anyway. It stands to reason that there is also a lot of benefit from
sorting tightly-packed keys. Quicksort is cache oblivious, and having
it sort tightly-packed binary data, as opposed to going through all of
that dereferencing and deserialization indirection is probably also very
helpful. A tool like Cachegrind might offer some additional insights,
but I haven't gone to the trouble of trying that out.
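
(As a standalone illustration of the amortization point, not code from the
patch, and with the locale name only as an example: the two orderings below
must agree in sign, but the second form pays the transformation cost once per
string instead of inside every comparison, and the transformed keys can be
packed tightly.)

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Transform a string once; later comparisons become plain strcmp()/memcmp(). */
static char *
xfrm_once(const char *s)
{
	size_t		need = strxfrm(NULL, s, 0) + 1;
	char	   *buf = malloc(need);

	if (buf == NULL)
		exit(1);
	strxfrm(buf, s, need);
	return buf;
}

int
main(void)
{
	const char *a = "Hannah";
	const char *b = "hanna";
	char	   *xa;
	char	   *xb;

	setlocale(LC_COLLATE, "en_US.UTF-8");	/* example locale */
	xa = xfrm_once(a);
	xb = xfrm_once(b);

	printf("strcoll:                 %d\n", strcoll(a, b));
	printf("strcmp(strxfrm results): %d\n", strcmp(xa, xb));

	free(xa);
	free(xb);
	return 0;
}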

(By the way, my earlier recollection about how memory-frugal
MinimalTuple/memtuple building is within tuplesort was incorrect, so
there are no savings in memory to be had here).

As I mentioned, something like a SortSupport for numeric, with poor
man's normalized keys might also be compelling. I suggest we focus on
how this technique can be further generalized, though. This prototype
patch is derivative of Robert's abandoned SortSupport for text patch.
If he wanted to take this off my hands, I'd have no objections - I
don't think I'm going to have time to take this as far as I'd like.

[1] http://pgfoundry.org/forum/forum.php?forum_id=603
-- 
Peter Geoghegan
*** a/src/backend/utils/adt/varlena.c
--- b/src/backend/utils/adt/varlena.c
***
*** 17,22 
--- 17,23 
  #include <ctype.h>
  #include <limits.h>
  
+ #include "access/nbtree.h"
  #include "access/tuptoaster.h"
  #include "catalog/pg_collation.h"
  #include "catalog/pg_type.h"
***
*** 29,34 
--- 30,36 
  #include "utils/bytea.h"
  #include "utils/lsyscache.h"
  #include "utils/pg_locale.h"
+ #include "utils/sortsupport.h"
  
  
  /* GUC variable */
*** typedef struct
*** 50,61 
--- 52,85 
  	int			skiptable[256]; /* skip distance for given mismatched char */
  } TextPositionState;
  
+ typedef struct
+ {
+ 	char	   *buf1;			/* Also used as strxfrm() scratch-space */
+ 	char	   *buf2;			/* Unused by leading key/poor man case */
+ 	int			buflen1;
+ 	int			buflen2;
+ #ifdef HAVE_LOCALE_T
+ 	pg_locale_t locale;
+ #endif
+ } TextSortSupport;
+ 
+ /*
+  * This should be large enough that most strings will fit, but small enough
+  * that we feel comfortable putting it on the stack
+  */
+ #define TEXTBUFLEN		1024
+ 
  #define DatumGetUnknownP(X)			((unknown *) PG_DETOAST_DATUM(X))
  #define DatumGetUnknownPCopy(X)		((unknown *) PG_DETOAST_DATUM_COPY(X))
  #define PG_GETARG_UNKNOWN_P(n)		DatumGetUnknownP(PG_GETARG_DATUM(n))
  #define PG_GETARG_UNKNOWN_P_COPY(n) DatumGetUnknownPCopy(PG_GETARG_DATUM(n))
  #define PG_RETURN_UNKNOWN_P(x)		PG_RETURN_POINTER(x)
  
+ static void btpoorman_worker(SortSupport ssup, Oid collid);
+ static