Re: [HACKERS] CVS problems

2008-03-19 Thread Manolo

Alvaro Herrera wrote:

Manolo SupaMA wrote:
  

Hi.

Yesterday I downloaded source code from CVS following

developer.postgresql.org/docs/postgres/cvs.html

First of all, it had problems downloading files; the INSTALL text file was
not downloaded, for example.
Today that web page isn't even available...



That URL certainly goes to the CVS instructions.

  

That's probably causing the problems I'm getting during
make
make check



Did you run configure?  Did you use --enable-depend?  What errors are
you getting?

  

I ran:
./configure --prefix=/usr/local/pgsql/wip/ CFLAGS=-O0 --enable-debug 
--enable-cassert --enable-depend

make
make check

That's the way I usually build the PostgreSQL source code before installing.
Anyway, the problem was solved by downloading the source code from CVS again
from scratch... maybe I had downloaded a corrupted copy.


Thanks for your time.
Regards, Manolo.


Re: [HACKERS] fast count(*) through statistics collector

2008-03-19 Thread Tom Lane
Artem Yazkov [EMAIL PROTECTED] writes:
 Reading this list, I see a lot of discussion of the fast count(*) problem,
 but no acceptable solution has been formulated. Well, I make bold to
 propose my own view on the problem.

A number of the things you suggest would be good for improving the
performance of the stats subsystem.  But I think you have failed to
grasp two pretty fundamental issues: (1) the correct answer to count(*)
varies depending on the observer's snapshot, and (2) the stats subsystem
is built to provide approximate answers, not exact ones.

I encourage you to work on fixing the stats subsystem's performance
issues ... but if you think that's going to provide a substitute for
count(*), you're mistaken.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] regress test problems

2008-03-19 Thread Pavel Stehule
Hello

I tested head again from scratch with negative results:

http://www.pgsql.cz/data/regression.diffs
http://www.pgsql.cz/data/regression.out

Regards
Pavel Stehule


[EMAIL PROTECTED] pgsql]$ uname -a
Linux nemesis.nat.buk.cvut.cz 2.6.24.3-34.fc8 #1 SMP Wed Mar 12
18:17:20 EDT 2008 i686 i686 i386 GNU/Linux

[EMAIL PROTECTED] pgsql]$ cat /etc/redhat-release
Fedora release 8 (Werewolf)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] regress test problems

2008-03-19 Thread Heikki Linnakangas

Pavel Stehule wrote:

I tested head again from scratch with negative results:


There's a bunch of errors like this in there:


+ ERROR:  could not open file
"/home/pavel/src/pgsql/src/test/regress/results/onek.data" for writing:
Permission denied


Seems like a permission problem... Not sure if it explains all the 
failures, but at least many of them.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] regress test problems

2008-03-19 Thread Zdeněk Kotala


What locale do you use? It seems that the problem is in collation. Try it with the C locale.

Zdenek

Pavel Stehule wrote:

Hello

I tested head again from scratch with negative results:

http://www.pgsql.cz/data/regression.diffs
http://www.pgsql.cz/data/regression.out

Regards
Pavel Stehule


[EMAIL PROTECTED] pgsql]$ uname -a
Linux nemesis.nat.buk.cvut.cz 2.6.24.3-34.fc8 #1 SMP Wed Mar 12
18:17:20 EDT 2008 i686 i686 i386 GNU/Linux

[EMAIL PROTECTED] pgsql]$ cat /etc/redhat-release
Fedora release 8 (Werewolf)




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] Fix for large file support (nonsegment mode support)

2008-03-19 Thread Peter Eisentraut
Zdenek Kotala wrote:
 But as was mentioned in this thread, maybe something like CREATE TABLESPACE
 name LOCATION '/my/location' SEGMENTS 10GB would be a good solution. If
 SEGMENTS is not mentioned, then a default value is used.

I think you would need a tool to resegmentize a table or tablespace offline, 
usable for example when recovering a backup.

Also, tablespace configuration information is of course also stored in a 
table.  pg_tablespace probably won't become large, but it would probably 
still need to be special-cased, along with other system catalogs perhaps.

And then, how to coordinate offline resegmenting and online tablespace 
operations in a crash-safe way?

Another factor I just thought of is that tar, commonly used as part of a 
backup procedure, can on some systems only handle files up to 8 GB in size.  
There are supposed to be newer formats that can avoid that restriction, but 
it's not clear how widely available these are and what the incantation is to 
get at them.  Of course we don't use tar directly, but if we ever make large 
segments the default, we ought to provide some clear advice for the user on 
how to make their backups.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] fast count(*) through statistics collector

2008-03-19 Thread Artem Yazkov
Hi hackers,

I'm a novice in the PostgreSQL codebase (and in English too :-)), but I'd be
glad to make a modest contribution to this great project.

Reading this list, I see a lot of discussion of the fast count(*) problem,
but no acceptable solution has been formulated. Well, I make bold to propose
my own view on the problem. It can be described as: "improve the existing
infrastructure of the statistics collector and use it for caching the number
of rows in each table."

I plan to do the following steps to implement this principle:

1. Eliminate the 500 ms lag between transaction commit and the transfer of
the statistics portion from backend to collector. For this purpose the
pgstat_report_tabstat() function must be called in force mode only. We pay
for this with increased backend-to-collector traffic. As partial
compensation, we could avoid sending TabStatusArray items that have not
changed since the last shipment. This measure will reduce the size of the
messages.
I see one more pitfall here: a new transaction can start after the changes
made by an earlier transaction become visible to other backends, but before
the statistics collector has managed to receive and process the data
(despite the forced transfer). To avoid this, one could transfer the
information before the transaction's changes become visible; the collector,
in its turn, would apply this info only after that point.
It is also possible that using shared memory instead of pipes would help
increase performance.

2. Eliminate the 500 ms lag between receiving a statistics portion and
writing the pgstat.stat file. Realize the next TODO item for this purpose:
"Allow statistics collector information to be pulled from the collector
process directly, rather than requiring the collector to write a filesystem
file twice a second."
As an additional effect, we would be able to reduce the burden on the
I/O channels.

3. The n_live_tuples field of the PgStat_StatTabEntry structure currently
holds the number of inserted minus deleted tuples for the successful
transactions known to the collector. But we need a field that would contain
the number of inserted minus deleted tuples for ALL successful transactions
in the history of the table, or else be undefined (e.g. -1). If
n_live_tuples is not suited for this role, an additional field can be
created. In any case, I will call this field the live_tuples counter below.

4. Values in the live_tuples counters must be considered suspect if there
was any interruption of statistics collection. Therefore, if track_counts
was set to false cluster-wide, or the collector process crashes, then
live_tuples becomes undefined for all tables in the cluster. If
pg_stat_reset() is called, then live_tuples becomes undefined for all tables
in the database. If pg_stat_clear_snapshot() is called, or track_counts is
set to false during a user session, then the live_tuples counters should
become undefined for all tables touched during that transaction/session. If
compiling such a list of tables is not possible, then for all tables in the
database.

5. If a live_tuples counter contains an undefined value, but the statistics
collector is working normally, the counter must be restored by the first
seqscan.

I hope that these steps will give us MVCC-compliant counters, with only a
small increase in overhead cost.

The next step is relatively simple:

6. In the optimizer/plan/planagg.c file, add a function similar to
optimize_minmax_aggregates() that returns NULL for undefined live_tuples
counters (in which case count(*) is determined the regular way, through a
seqscan) or else a plan for a computation such as:

PgStat_StatTabEntry.live_tuples        +
PgStat_TableCounts.t_new_live_tuples   +
PgStat_TableXactStatus.tuples_inserted -
PgStat_TableXactStatus.tuples_deleted
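
In C, the computation might be sketched like this (a sketch only: live_tuples
is the field I propose above, not an existing 8.3 field, and the surrounding
pgstat plumbing is assumed):

#include "postgres.h"
#include "pgstat.h"

/*
 * Returns the cached row count, or -1 when the counter is undefined,
 * telling the caller to fall back to a regular seqscan.
 */
static int64
cached_row_count(const PgStat_StatTabEntry *stat,
                 const PgStat_TableCounts *pending,
                 const PgStat_TableXactStatus *xact)
{
    if (stat->live_tuples < 0)
        return -1;              /* undefined: plan a seqscan instead */

    return stat->live_tuples
         + pending->t_new_live_tuples
         + xact->tuples_inserted
         - xact->tuples_deleted;
}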

Restrictions:
1. An uninterrupted supply of statistics to the collector is necessary for
efficient use of this algorithm.
2. It works only for the simplest queries, like:
 select count(*) from regular_table


Any comments are welcome.

-- 
   regards,  Artem  Yazkov  mailto:[EMAIL PROTECTED]


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] Fix for large file support (nonsegment mode support)

2008-03-19 Thread Martijn van Oosterhout
On Wed, Mar 19, 2008 at 09:38:12AM +0100, Peter Eisentraut wrote:
 Another factor I just thought of is that tar, commonly used as part of a 
 backup procedure, can on some systems only handle files up to 8 GB in size.  
 There are supposed to be newer formats that can avoid that restriction, but 
 it's not clear how widely available these are and what the incantation is to 
 get at them.  Of course we don't use tar directly, but if we ever make large 
 segments the default, we ought to provide some clear advice for the user on 
 how to make their backups.

By my reading, GNU tar handles larger files, and no-one else (not even a
POSIX-standard tar) can...

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Please line up in a tree and maintain the heap invariant while 
 boarding. Thank you for flying nlogn airlines.




Re: [HACKERS] fast count(*) through statistics collector

2008-03-19 Thread Simon Riggs
On Tue, 2008-03-18 at 21:26 +0700, Artem Yazkov wrote:

 Restrictions:
 1. An uninterrupted supply of statistics to the collector is necessary for
 efficient use of this algorithm.

In my understanding, we do not guarantee this. That leaves the rest of
your ideas drifting, I'm sorry to say. Your ideas and analysis did seem
good though, so please don't be put off.

 2. It works only for the simplest queries, like:
  select count(*) from regular_table

Personally, I don't see the need to make just that single statement go
faster. It only runs slowly on large tables and they seldom need exactly
accurate counts frequently enough for us to care. Or put it another way,
I never saw a real application where this was a significant performance
issue, on any RDBMS.

I believe this is a marketing issue for us, only. Other RDBMS supporters
know count(*) runs slower on PostgreSQL and use that as evidence of poor
performance.

We do need a general solution to improving the performance of
aggregates, though that is already roughly on the TODO in the shape of
materialized view support.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 

  PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Multiple SRF right after SELECT

2008-03-19 Thread Nikolay Samokhvalov
Hi,

I wonder, if the following is correct and provides expected result:

test=# select generate_series(1, 2), generate_series(1, 4);
 generate_series | generate_series
-----------------+-----------------
               1 |               1
               2 |               2
               1 |               3
               2 |               4
(4 rows)


Actually I have two questions on this:
 1. Is it correct at all to use an SRF in the select list, without an
explicit FROM? Why then do we not allow subselects that return multiple
rows? I'd rather expect these two things to work in a similar manner.
 2. Why does the query above produce 4 rows, not 2*4=8? Actually, that's
interesting -- I can use this query to find the l.c.m. But it's definitely
not what I'd have expected before trying it...
-- 
Best regards,
Nikolay

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] Fix for large file support (nonsegment mode support)

2008-03-19 Thread Kenneth Marshall
On Wed, Mar 19, 2008 at 10:51:12AM +0100, Martijn van Oosterhout wrote:
 On Wed, Mar 19, 2008 at 09:38:12AM +0100, Peter Eisentraut wrote:
  Another factor I just thought of is that tar, commonly used as part of a
  backup procedure, can on some systems only handle files up to 8 GB in size.
  There are supposed to be newer formats that can avoid that restriction, but
  it's not clear how widely available these are and what the incantation is
  to get at them.  Of course we don't use tar directly, but if we ever make
  large segments the default, we ought to provide some clear advice for the
  user on how to make their backups.
 
 By my reading, GNU tar handles larger files, and no-one else (not even a
 POSIX-standard tar) can...
 
 Have a nice day,
 -- 
 Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
  Please line up in a tree and maintain the heap invariant while 
  boarding. Thank you for flying nlogn airlines.

The star program written by Joerg Schilling is a very well-written,
POSIX-compatible tar program that can easily handle files larger than
8GB. It is another backup option.

Cheers,
Ken

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Multiple SRF right after SELECT

2008-03-19 Thread Albert Cervera i Areny
On Wednesday 19 March 2008, Nikolay Samokhvalov wrote:
  2. Why does the query above produce 4 rows, not 2*4=8? Actually, that's
 interesting -- I can use this query to find the l.c.m. But it's definitely
 not what I'd have expected before trying it...

2*4 = 8: 

select * from generate_series(1, 2) a, generate_series(1, 4) b;

Can't tell you about the expected behaviour in the query you provided though.

-- 
Albert Cervera i Areny
http://www.NaN-tic.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCHES] Fix for large file support (nonsegment mode support)

2008-03-19 Thread Zdeněk Kotala

Peter Eisentraut wrote:

Zdenek Kotala wrote:

But as was mentioned in this thread, maybe something like CREATE TABLESPACE
name LOCATION '/my/location' SEGMENTS 10GB would be a good solution. If
SEGMENTS is not mentioned, then a default value is used.


I think you would need a tool to resegmentize a table or tablespace offline, 
usable for example when recovering a backup.


Do you mean something like the strip(1) command? I don't see any use case for
terabytes of data. You usually have a problem just finding a place to put the
backup.


Also, tablespace configuration information is of course also stored in a 
table.  pg_tablespace probably won't become large, but it would probably 
still need to be special-cased, along with other system catalogs perhaps.


It is true, and unfortunately a singularity. The same goes for the database
list, which is in a table as well, but is also stored as a text file for
startup purposes. I'm more inclined to use a non-table configuration file for
tablespaces, because I don't see any advantage to having it under MVCC
control, and it would also allow defining storage for pg_global and
pg_default.


And then, how to coordinate offline resegmenting and online tablespace
operations in a crash-safe way?


Another factor I just thought of is that tar, commonly used as part of a 
backup procedure, can on some systems only handle files up to 8 GB in size.  
There are supposed to be newer formats that can avoid that restriction, but 
it's not clear how widely available these are and what the incantation is to 
get at them.  Of course we don't use tar directly, but if we ever make large 
segments the default, we ought to provide some clear advice for the user on 
how to make their backups.


I think tar is OK, at least on Solaris; see man largefile.

The default segment size should still be 1GB. If a DBA decides to increase it
to a higher value, then it is his responsibility to find a way to process
such big files.


Zdenek


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Multiple SRF right after SELECT

2008-03-19 Thread David BOURIAUD
On Wednesday 19 March 2008, Albert Cervera i Areny wrote:
Hi !

 On Wednesday 19 March 2008, Nikolay Samokhvalov wrote:
   2. Why does the query above produce 4 rows, not 2*4=8? Actually, that's
  interesting -- I can use this query to find the l.c.m. But it's definitely
  not what I'd have expected before trying it...

 2*4 = 8:

 select * from generate_series(1, 2) a, generate_series(1, 4) b;

If you launch the above query, you just get what you would get if you did a
select from two tables without joining them at all: the Cartesian product of
the two sets.


 Can't tell you about the expected behaviour in the query you provided
 though.

I've made a few tests with the original query, and indeed the behaviour is
strange. Consider the following:

select generate_series(1, 3), generate_series(1, 4);
 generate_series | generate_series
-----------------+-----------------
               1 |               1
               2 |               2
               3 |               3
               1 |               4
               2 |               1
               3 |               2
               1 |               3
               2 |               4
               3 |               1
               1 |               2
               2 |               3
               3 |               4
which is not fully readable, but if you sort things you get exactly the same
as what you mentioned before:

select generate_series(1, 3), generate_series(1, 4) order by 1,2;
 generate_series | generate_series
-----------------+-----------------
               1 |               1
               1 |               2
               1 |               3
               1 |               4
               2 |               1
               2 |               2
               2 |               3
               2 |               4
               3 |               1
               3 |               2
               3 |               3
               3 |               4

So far it is clear, but if you make things so that the number of rows
returned by one call to generate_series is a multiple of the other, the
result is truncated:

select generate_series(1, 3), generate_series(1, 6) order by 1,2;
 generate_series | generate_series
-----------------+-----------------
               1 |               1
               1 |               4
               2 |               2
               2 |               5
               3 |               3
               3 |               6

provides the same strange result as initially discovered, and
select generate_series(1, 6), generate_series(1, 3) order by 2,1;
 generate_series | generate_series
-----------------+-----------------
               1 |               1
               4 |               1
               2 |               2
               5 |               2
               3 |               3
               6 |               3

provides the same result, mirrored. So it could be a bug somewhere.
Hoping this is of some help...
Regards.


 --
 Albert Cervera i Areny
 http://www.NaN-tic.com






Re: [HACKERS] Multiple SRF right after SELECT

2008-03-19 Thread Volkan YAZICI
On Wed, 19 Mar 2008, Nikolay Samokhvalov [EMAIL PROTECTED] writes:
 I wonder, if the following is correct and provides expected result:

 test=# select generate_series(1, 2), generate_series(1, 4);
  generate_series | generate_series
 -----------------+-----------------
                1 |               1
                2 |               2
                1 |               3
                2 |               4
 (4 rows)


  1. Is it correct at all to use an SRF in the select list, without an
 explicit FROM? Why then do we not allow subselects that return multiple
 rows? I'd rather expect these two things to work in a similar manner.
  2. Why does the query above produce 4 rows, not 2*4=8? Actually, that's
 interesting -- I can use this query to find the l.c.m. But it's definitely
 not what I'd have expected before trying it...

From PL/scheme sources:

/*
 * There're 2 ways to return from an SRF:
 *
 * 1. Value-per-call Mode
 *    You return each tuple one by one via the SRF_RETURN_NEXT() macro.
 *    But the PG_RETURN_DATUM() call in the macro makes it quite
 *    impracticable. OTOH, this method gives the opportunity to call
 *    SRFs in a fashion like "SELECT mysrf();".
 *
 * 2. Materialize Mode
 *    In this mode, you collect all tuples in a single set and return
 *    that set. Compared to the previous method, it's not possible to
 *    use an SRF of materialize mode like "SELECT my_materialized_srf();",
 *    instead you need to access it as a simple table: "SELECT * FROM
 *    my_materialized_srf();".
 *
 * ...
 */

So I conclude that generate_series() is written as an SRF in value-per-call
mode. (You may also want to check the Returning Sets[1] chapter of the
PostgreSQL manual.)
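
For illustration, a bare-bones value-per-call SRF looks roughly like this (a
sketch following the funcapi.h pattern from the manual; my_series is a made-up
example, not a real function):

#include "postgres.h"
#include "fmgr.h"
#include "funcapi.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_series);

/* returns the integers 1..n, one row per call, like generate_series(1, n) */
Datum
my_series(PG_FUNCTION_ARGS)
{
    FuncCallContext *funcctx;

    if (SRF_IS_FIRSTCALL())
    {
        funcctx = SRF_FIRSTCALL_INIT();
        funcctx->max_calls = PG_GETARG_INT32(0);    /* rows to produce */
    }

    funcctx = SRF_PERCALL_SETUP();

    if (funcctx->call_cntr < funcctx->max_calls)
        SRF_RETURN_NEXT(funcctx, Int32GetDatum((int32) funcctx->call_cntr + 1));
    else
        SRF_RETURN_DONE(funcctx);
}

That per-call shape is what makes "SELECT my_series(3);" possible at all: each
invocation hands back one datum until SRF_RETURN_DONE is reached.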

[1] 
http://www.postgresql.org/docs/current/static/xfunc-c.html#XFUNC-C-RETURN-SET


Regards.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] stored procedure stats in collector

2008-03-19 Thread Alvaro Herrera
Hi,

 Good. I'll bring the patch up to date with HEAD.

Did you post an updated patch to HEAD?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Multiple SRF right after SELECT

2008-03-19 Thread Tom Lane
Nikolay Samokhvalov [EMAIL PROTECTED] writes:
  1. Is it correct at all to use an SRF in the select list, without an explicit FROM?

You can read about the current behavior in ExecTargetList, but basically
the idea is to cycle all the SRFs until they all say "done" at the same
time.  So the number of result rows is the least common multiple of the
sizes of the SRF outputs.
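
To spell out the cycling rule, here is a toy standalone C model of it (an
illustration only, not the actual executor code):

#include <stdio.h>

/*
 * Each SRF restarts when it runs out, and the result set ends only when
 * every SRF reports "done" on the same cycle, so the row count comes out
 * as lcm(n1, n2).
 */
int
main(void)
{
    int     n1 = 3, n2 = 4;     /* sizes of the two SRF outputs */
    int     i1 = 0, i2 = 0;     /* current position within each SRF */
    int     rows = 0;

    do
    {
        printf("%d | %d\n", i1 + 1, i2 + 1);
        rows++;
        i1 = (i1 + 1) % n1;     /* restart SRF 1 when exhausted */
        i2 = (i2 + 1) % n2;     /* restart SRF 2 when exhausted */
    } while (i1 != 0 || i2 != 0);   /* both say "done" together: stop */

    printf("(%d rows)\n", rows);    /* prints 12 = lcm(3, 4) */
    return 0;
}

This reproduces the 12-row output shown upthread for sizes 3 and 4.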

This behavior is pretty random, I agree, but we inherited it from
Berkeley ... and even if we didn't care about breaking existing
applications that might rely on it, it's not exactly clear what it
should be instead.

The fact that there isn't any obviously right behavior is why I'd
prefer to deprecate SRFs in targetlists ...

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Final Thoughts for 8.3 on LWLocking and Scalability

2008-03-19 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 I've completed a review of all of the LWlocking in the backends. This is
 documented in the enclosed file. I would propose that we use this as
 comments in lwlock.h or in the README, if people agree.

I don't think that putting this list in as documentation is a smart
idea --- it would inevitably become out-of-date, and misleading
documentation is worse than none.  Parts of it are out of date already
(which kinda proves my point considering that very little new development
has gone into the tree since September).  Since anyone who's concerned
about a particular lock can grep for uses of it pretty easily, I think
we should just figure on them doing that.

 2. CountActiveBackends() searches the whole of the proc array, even
 though it could stop when it gets to commit_siblings. Stopping once the
 heuristic has been determined seems like the best thing to do. A small
 patch to implement this is attached.

At the moment CountActiveBackends doesn't take the lock anymore, so
I'm thinking that changing this is not worthwhile.

 [ sinval lock management needs redesign ]

Yup it does.

 4. WALWriteLock is acquired in Shared mode by bgwriter when it runs
 GetLastSegSwitchTime(). All other callers are Exclusive lockers, so the
 Shared request will queue like everybody else. WALWriteLock queue length
 can be long, so the bgwriter can get stuck for much longer than
 bgwriter_delay when it makes this call; this happens only when
 archive_timeout > 0 so probably has never shown up in any performance
 testing. XLogWrite takes info_lck also, so we can move the
 lastSegSwitchTime behind that lock instead. That way bgwriter need never
 wait on I/O, just spin for access to info_lck. Minor change.

This seems like a possibly reasonable thing to do; did you ever write
a patch for it?

 5. ReadNewTransactionId() is only called now by GetNextXidAndEpoch(),
 but I can't find a caller of that anywhere in core or contrib. Can those
 now be removed?

No.  It's needed by Slony.

 6. David Strong talked about doing some testing to see if
 NUM_BUFFER_PARTITIONS should be increased above 16. We don't have any
 further information on that. Should we increase the value to 32 or 64?

Not without some testing.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] Text <-> C string

2008-03-19 Thread Tom Lane
Brendan Jurd [EMAIL PROTECTED] writes:
 As discussed on -hackers, I'm trying to get rid of some redundant code
 by creating a widely useful set of functions to convert between text
 and C string in the backend.

 The new extern functions, declared in include/utils/builtins.h and
 defined in backend/utils/adt/varlena.c, are:

 char * text_cstring(const text *t)
 char * text_cstring_limit(const text *t, int len)
 text * cstring_text(const char *s)
 text * cstring_text_limit(const char *s, int len)

I started to look at applying this patch and immediately decided that
I didn't like these names --- it's exceedingly un-obvious which direction
of conversion any one of the functions performs.  Seems like every time
you wanted to call one, you'd be going back to look at the source code
to remember which to use.

What do people think of text_to_cstring?  Or should we go with
TextPGetCString for consistency with the Datum-whacking macros?  (I seem
to recall having argued against the latter idea, but am reconsidering.)
Or something else?

BTW, I suspect that the _limit functions are mostly useless ---
a quick look through the patch did not suggest that any of the places
using them really needed a limit.  The point of, for instance,
TZ_STRLEN_MAX is to be able to use fixed-size local buffers, and
if you're going to pay for a palloc anyway then you might as well
forget it.  (There might be some value in a strlcpy equivalent that
copies from a text datum into a limited-size caller-supplied buffer,
but that's not what we've got here.)
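
To make that concrete, the two unlimited conversions plus the strlcpy-style
variant might look like this (a sketch against 8.3 varlena macros, byte
oriented and ignoring multibyte truncation concerns; names follow the
discussion, not any committed API):

#include "postgres.h"

text *
cstring_to_text(const char *s)
{
    size_t      len = strlen(s);
    text       *result = (text *) palloc(len + VARHDRSZ);

    SET_VARSIZE(result, len + VARHDRSZ);
    memcpy(VARDATA(result), s, len);
    return result;
}

char *
text_to_cstring(const text *t)
{
    size_t      len = VARSIZE_ANY_EXHDR(t);
    char       *result = (char *) palloc(len + 1);

    memcpy(result, VARDATA_ANY(t), len);
    result[len] = '\0';
    return result;
}

/* strlcpy-style: fill a fixed caller-supplied buffer, no palloc */
size_t
text_to_cstring_buffer(const text *src, char *dst, size_t dst_len)
{
    size_t      src_len = VARSIZE_ANY_EXHDR(src);

    if (dst_len > 0)
    {
        size_t      copy_len = Min(src_len, dst_len - 1);

        memcpy(dst, VARDATA_ANY(src), copy_len);
        dst[copy_len] = '\0';
    }
    return src_len;     /* like strlcpy: report the full source length */
}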

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Request for feature - ECPGget_PGconn

2008-03-19 Thread Michael Meskes
On Mon, Mar 17, 2008 at 11:26:59AM +, Mike Aubury wrote:
 Add a function to return the current PGConn used within ecpg..

I still consider such a feature a little bit hackish, but I also
think Mike has some valid points. And given that adding this patch does
not create side effects, I will commit it if no one objects.
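
For context, the point is to let an ecpg program drop down to raw libpq on an
established connection; a sketch, assuming the patch exposes something like
PGconn *ECPGget_PGconn(const char *connection_name), and with a hypothetical
testdb database:

#include <stdio.h>
#include <libpq-fe.h>

/* assumed prototype from the proposed patch */
extern PGconn *ECPGget_PGconn(const char *connection_name);

int
main(void)
{
    PGconn     *conn;

    EXEC SQL CONNECT TO testdb AS myconn;

    /* use plain libpq for something ecpg has no syntax for */
    conn = ECPGget_PGconn("myconn");
    if (conn != NULL)
        printf("server version: %d\n", PQserverVersion(conn));

    EXEC SQL DISCONNECT myconn;
    return 0;
}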

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] Text <-> C string

2008-03-19 Thread Sam Mason
On Wed, Mar 19, 2008 at 12:51:35PM -0400, Tom Lane wrote:
 Brendan Jurd [EMAIL PROTECTED] writes:
  char * text_cstring(const text *t)
 
 What do people think of text_to_cstring?

I tend to put things the other way around in my code, i.e.:

  char * cstring_of_text(const text *t)

mainly because things read more easily---type definitions of new
variables are next to the first part of the word.

  char * str = cstring_of_text(src_text);
vs.
  char * str = text_to_cstring(src_text);

I think I got my original inspiration for doing it this way around from
the Caml language.


  Sam

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Text <-> C string

2008-03-19 Thread Volkan YAZICI
On Wed, 19 Mar 2008, Sam Mason [EMAIL PROTECTED] writes:
 ...
   char * str = cstring_of_text(src_text);
 ...

 I think I got my original inspiration for doing it this way around from
 the Caml language.

Also, it's used in Common Lisp as class accessors:

  char *s = cstring_of(text);
  text *t = text_of(cstring);

But I'd vote for the TextPGetCString style Tom suggested, for eye-habit
compatibility with the rest of the code.


Regards.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Enable probes to work with Mac OS X Leopard and other OSes that

2008-03-19 Thread Andrew Dunstan



Robert Lor wrote:

Peter Eisentraut wrote:
Well, yes.  I meant to say, a build system that can supply the 
functionality of Gen_fmgrtab can surely implement this new thing.  I 
see there is Perl being used, so it should be simple.
  
I was thinking of using a Perl script to generate the dummy header 
file but decided against it to avoid disrupting the build on other 
platforms. If sed doesn't work on Windows for some reason, we can use 
a Perl script instead.





What we should do in fact is to use psed on Windows to execute the 
script. This is a sed emulator that is part of the standard perl 
install, and will save us the bother of having to maintain two 
generation scripts, or switching from sed to perl. We just need to find 
where to plug that into the Windows build process.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Final Thoughts for 8.3 on LWLocking and Scalability

2008-03-19 Thread Simon Riggs
On Wed, 2008-03-19 at 12:24 -0400, Tom Lane wrote:

  [ sinval lock management needs redesign ]
 
 Yup it does.

I wrote a redesigned, simplified version of my earlier patch. Enclosed
here for discussion only, not expecting this to be the final version.
Comments at top of patch explain it.

The basic idea is to identify a single backend to delete the sinval
message queue, without the need for redesigning the postmaster to handle
single-backend invalidation messages.

  4. WALWriteLock is acquired in Shared mode by bgwriter when it runs
  GetLastSegSwitchTime(). All other callers are Exclusive lockers, so the
  Shared request will queue like everybody else. WALWriteLock queue length
  can be long, so the bgwriter can get stuck for much longer than
  bgwriter_delay when it makes this call; this happens only when
  archive_timeout > 0 so probably has never shown up in any performance
  testing. XLogWrite takes info_lck also, so we can move the
  lastSegSwitchTime behind that lock instead. That way bgwriter need never
  wait on I/O, just spin for access to info_lck. Minor change.
 
 This seems like a possibly reasonable thing to do; did you ever write
 a patch for it?

No, but happy to do so.
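
The change would be roughly this (a sketch only, assuming lastSegSwitchTime is
moved into the info_lck-protected part of XLogCtl; not a tested patch):

#include "postgres.h"
#include "storage/spin.h"

/* xlog.c context: XLogCtl is the existing shared control structure */
time_t
GetLastSegSwitchTime(void)
{
    time_t      result;

    /* spin briefly on info_lck instead of queueing behind WALWriteLock */
    SpinLockAcquire(&XLogCtl->info_lck);
    result = XLogCtl->lastSegSwitchTime;
    SpinLockRelease(&XLogCtl->info_lck);
    return result;
}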

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com 

  PostgreSQL UK 2008 Conference: http://www.postgresql.org.uk
Index: src/backend/storage/ipc/sinval.c
===
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/backend/storage/ipc/sinval.c,v
retrieving revision 1.85
diff -c -r1.85 sinval.c
*** src/backend/storage/ipc/sinval.c	17 Mar 2008 11:50:26 -	1.85
--- src/backend/storage/ipc/sinval.c	19 Mar 2008 17:51:19 -
***
*** 69,74 
--- 69,86 
   * Hence, we must be holding no SI resources when we call them.  The only
   * bad side-effect is that SIDelExpiredDataEntries might be called extra
   * times on the way out of a nested call.
+  *
+  * We minimise the number of times SIDelExpiredDataEntries is called by having
+  * just one backend set as the appointed deleter. Initially this is not set
+  * until the queue reaches 50% full. Once set the appointed backend will
+  * perform the delete and then pass the role onto another backend. If the
+  * appointed backend should go away then new inserted messages will reappoint
+  * another backend when the queue gets long enough. This works well if there
+  * is just one backend with the longest queue length. If more than one backend
+  * shares the burden of having the longest queue length then we may experience
+  * some inefficiency. This is unlikely with long queue lengths, but could be
+  * fairly common with short queues. So we unset the appointed_deleter if the
+  * queue length drops below 25% of the maximum queue length. 
   */
  void
  ReceiveSharedInvalidMessages(
***
*** 77,83 
  {
  	SharedInvalidationMessage data;
  	int			getResult;
! 	bool		gotMessage = false;
  
  	for (;;)
  	{
--- 89,95 
  {
  	SharedInvalidationMessage data;
  	int			getResult;
! 	bool			appointed_deleter = false;
  
  	for (;;)
  	{
***
*** 87,93 
  		 */
  		catchupInterruptOccurred = 0;
  
! 		getResult = SIGetDataEntry(MyBackendId, &data);
  
  		if (getResult == 0)
  			break;/* nothing more to do */
--- 99,105 
  		 */
  		catchupInterruptOccurred = 0;
  
! 		getResult = SIGetDataEntry(MyBackendId, &data, &appointed_deleter);
  
  		if (getResult == 0)
  			break;/* nothing more to do */
***
*** 102,112 
  			/* got a normal data message */
  			invalFunction(data);
  		}
- 		gotMessage = true;
  	}
  
! 	/* If we got any messages, try to release dead messages */
! 	if (gotMessage)
  		SIDelExpiredDataEntries(false);
  }
  
--- 114,123 
  			/* got a normal data message */
  			invalFunction(data);
  		}
  	}
  
! 	/* If we are The One, try to release dead messages */
! 	if (appointed_deleter)
  		SIDelExpiredDataEntries(false);
  }
  
Index: src/backend/storage/ipc/sinvaladt.c
===
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/backend/storage/ipc/sinvaladt.c,v
retrieving revision 1.68
diff -c -r1.68 sinvaladt.c
*** src/backend/storage/ipc/sinvaladt.c	17 Mar 2008 11:50:27 -	1.68
--- src/backend/storage/ipc/sinvaladt.c	19 Mar 2008 18:11:35 -
***
*** 83,88 
--- 83,92 
  	int			lastBackend;	/* index of last active procState entry, +1 */
  	int			maxBackends;	/* size of procState array */
  	int			freeBackends;	/* number of empty procState slots */
+ 	int			appointed_deleter;  /* backend has been appointed as
+ 			 * the one to perform deletion of the
+ 			 * message queue
+ 			 */
  
  	/*
  	 * Next LocalTransactionId to use for each idle backend slot.  We keep
***
*** 159,164 
--- 163,170 
  	shmInvalBuffer->maxBackends = MaxBackends;
  	shmInvalBuffer->freeBackends = MaxBackends;
  
+ 	

Re: [HACKERS] Re: [COMMITTERS] pgsql: Don't need -Wno-error anymore, because flex is no longer

2008-03-19 Thread Kurt Roeckx
On Tue, Mar 18, 2008 at 07:23:40PM +0100, Peter Eisentraut wrote:
 On Tuesday 18 March 2008, Bruce Momjian wrote:
  Peter Eisentraut wrote:
   Log Message:
   ---
   Don't need -Wno-error anymore, because flex is no longer producing
   warnings.
 
  I see this patch only affects ecpg?  Is this related to Tom's report
  from trying the new flex?
 
 Yes.  There used to be a few -Wno-error uses throughout the source code, near 
 the flex calls, to work around the well-known flex warnings issue in case 
 someone wanted to use -Werror to check his builds.  The fact that there was 
 only one such use left in the ecpg subtree shows that this wasn't really 
 maintained and could never have worked reliably.

There currently are a few warnings (with gcc 4.3.0):
bootscanner.c:1461: warning: 'input' defined but not used
scan.c:6670: warning: 'input' defined but not used
guc-file.c:1233: warning: 'input' defined but not used
tuplesort.c:2475: warning: inlining failed in call to 'myFunctionCall2': 
--param large-stack-frame-growth limit reached
tuplesort.c:2526: warning: called from here
tuplesort.c:2475: warning: inlining failed in call to 'myFunctionCall2': 
--param large-stack-frame-growth limit reached
tuplesort.c:2526: warning: called from here
[...]
tqual.c: In function 'HeapTupleSatisfiesVacuum':
tqual.c:115: warning: inlining failed in call to 'SetHintBits': call is
unlikely and code size would grow
tqual.c:1084: warning: called from here
tqual.c:115: warning: inlining failed in call to 'SetHintBits': call is
unlikely and code size would grow
tqual.c:1088: warning: called from here
[...]
pgc.c:3507: warning: 'input' defined but not used
psqlscan.c:3853: warning: 'input' defined but not used
pl_scan.c:2458: warning: 'input' defined but not used


Kurt


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Enable probes to work with Mac OS X Leopard and other OSes that

2008-03-19 Thread Magnus Hagander

Andrew Dunstan wrote:



Robert Lor wrote:

Peter Eisentraut wrote:
Well, yes.  I meant to say, a build system that can supply the 
functionality of Gen_fmgrtab can surely implement this new thing.  I 
see there is Perl being used, so it should be simple.
  
I was thinking of using a Perl script to generate the dummy header 
file but decided against it to avoid disrupting the build on other 
platforms. If sed doesn't work on Windows for some reason, we can use 
a Perl script instead.





What we should do in fact is to use psed on Windows to execute the 
script. This is a sed emulator that is part of the standard perl 
install, and will save us the bother of having to maintain two 
generation scripts, or switching from sed to perl. We just need to find 
where to plug that into the Windows build process.


Hadn't heard about that one. Sounds like a good way to do it, if you can 
find the correct place for it :-)


//Magnus

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] tsearch2 in postgresql 8.3.1 - invalid byte sequence for encoding UTF8: 0xc3

2008-03-19 Thread Tom Lane
Richard Huxton [EMAIL PROTECTED] writes:
 Missed the mailing list on the last reply
 patrick wrote:
 those queries are not working; same message:
 ERROR: invalid byte sequence for encoding UTF8: 0xc3

 what I found is: in postgresql.conf, if I change
 default_text_search_config from pg_catalog.french to
 pg_catalog.english, then the query works fine.

I am just about convinced the problem is with french.stop.

There is more to that error message than meets the eye: 0xc3 is a valid
first byte for a two-byte UTF8 character, so the only way that the
message would look just like that is if 0xc3 is the last byte in the
presented string.  Looking at french.stop, the only plausible place for
this to happen is the line

à

(that's \303\240 or 0xc3 0xa0).  I am thinking that something decided
the \240 was junk and removed it.

I wonder whether the dictionaries ought not be reading their data files
in binary mode.  They appear to all be using AllocateFile(filename, "r")
which means that we're at the mercy of whatever text-mode conversion
Windows feels like doing.
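
To spell out the byte-level reasoning (an illustrative check, not PostgreSQL's
actual validator):

#include <stdbool.h>
#include <stddef.h>

/*
 * 0xC3 is a lead byte announcing a two-byte sequence (110xxxxx), so it
 * must be followed by a continuation byte (10xxxxxx) such as 0xA0.  If a
 * text-mode filter eats the 0xA0, the string ends in a bare 0xC3 and the
 * encoding check rightly complains.
 */
static bool
utf8_two_byte_ok(const unsigned char *s, size_t remaining)
{
    return remaining >= 2 &&
        (s[0] & 0xE0) == 0xC0 &&    /* lead byte 110xxxxx */
        (s[1] & 0xC0) == 0x80;      /* continuation 10xxxxxx */
}

/*
 * The suggested dictionary-file fix would be along the lines of
 *     AllocateFile(filename, PG_BINARY_R)
 * instead of AllocateFile(filename, "r").
 */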

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Text <-> C string

2008-03-19 Thread Tom Lane
Volkan YAZICI [EMAIL PROTECTED] writes:
 But I'd vote for the TextPGetCString style Tom suggested, for eye-habit
 compatibility with the rest of the code.

If there are not additional votes, I'll go with TextPGetCString
and CStringGetTextP.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Enable probes to work with Mac OS X Leopard and other OSes that

2008-03-19 Thread Andrew Dunstan



Magnus Hagander wrote:

Andrew Dunstan wrote:



Robert Lor wrote:

Peter Eisentraut wrote:
Well, yes.  I meant to say, a build system that can supply the 
functionality of Gen_fmgrtab can surely implement this new thing.  
I see there is Perl being used, so it should be simple.
  
I was thinking of using a Perl script to generate the dummy header 
file but decided against it to avoid disrupting the build on other 
platforms. If sed doesn't work on Windows for some reason, we can 
use a Perl script instead.





What we should do in fact is to use psed on Windows to execute the 
script. This is a sed emulator that is part of the standard perl 
install, and will save us the bother of having to maintain two 
generation scripts, or switching from sed to perl. We just need to 
find where to plug that into the Windows build process.


Hadn't heard about that one. Sounds like a good way to do it, if you 
can find the correct place for it :-)



I'm thinking of the patch below.  I'll try it out tomorrow.

cheers

andrew

Index: Solution.pm
===
RCS file: /cvsroot/pgsql/src/tools/msvc/Solution.pm,v
retrieving revision 1.36
diff -c -r1.36 Solution.pm
*** Solution.pm 28 Feb 2008 12:17:59 -  1.36
--- Solution.pm 20 Mar 2008 00:02:05 -
***************
*** 205,210 ****
--- 205,216 ----
  		close(T);
  	}
  
+ 	if (IsNewer('src\include\utils\probes.h','src\backend\utils\probes.d'))
+ 	{
+ 		print "Generating probes.h...\n";
+ 		system('psed -f src\backend\utils\Gen_dummy_probes.sed src\backend\utils\probes.d > src\include\utils\probes.h');
+ 	}
+ 
  	if (IsNewer('src\interfaces\libpq\libpq.rc','src\interfaces\libpq\libpq.rc.in'))
  	{
  		print "Generating libpq.rc...\n";




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [PATCHES] [HACKERS] Text <-> C string

2008-03-19 Thread Brendan Jurd
On 20/03/2008, Tom Lane [EMAIL PROTECTED] wrote:

 I started to look at applying this patch and immediately decided that
  I didn't like these names --- it's exceedingly un-obvious which direction
  of conversion any one of the functions performs.  Seems like every time
  you wanted to call one, you'd be going back to look at the source code
  to remember which to use.


That's a fair criticism.  I wanted to make the function names as
compact as possible, but your comment about the directionality of the
functions rings true.

  What do people think of text_to_cstring?  Or should we go with
  TextPGetCString for consistency with the Datum-whacking macros?  (I seem
  to recall having argued against the latter idea, but am reconsidering.)
  Or something else?


Your original argument against FooGetBar was that it would be *too*
consistent with the Datum macros, leading people to think that these
functions actually were macros.

As long as we don't want people getting confused about the
function/macro distinction, that argument still makes sense to me, and
I'd be more inclined towards the foo_to_bar() convention.

  BTW, I suspect that the _limit functions are mostly useless ---
  a quick look through the patch did not suggest that any of the places
  using them really needed a limit.  The point of, for instance,
  TZ_STRLEN_MAX is to be able to use fixed-size local buffers, and
  if you're going to pay for a palloc anyway then you might as well
  forget it.

What about callers like dotrim() in oracle_compat.c, which only want
to copy characters from the source string up to a particular length?
Doesn't that indicate a legitimate requirement for a
cstring_to_text_limit() (the call site was palloc'ing the text value
anyway)?

On the other hand, we do have those call sites (TZ_STRLEN_MAX is a
good example) where the caller just wanted to use a local buffer.  In
which case your strlcpy-equivalent function would probably be the
right thing to offer.

 (There might be some value in a strlcpy equivalent that
  copies from a text datum into a limited-size caller-supplied buffer,
  but that's not what we've got here.)


Perhaps we keep cstring_to_text_limit(), but make
text_to_cstring_limit() behave like strlcpy() instead?

One of the questions in the original patch submission was whether it
would be worth changing all those DirectFunctionCall(textin) and
(textout) calls to use the new functions.  Is it worthwhile avoiding
the fmgr overhead?

Thanks for taking the time to review my patch and provide this
feedback.  I'm happy to send in an updated version once we settle on
the naming convention for the functions.

Last time I looked, the codebase had shifted quite a bit since I
originally wrote the patch.  So it probably needs some work to apply
cleanly on the latest sources anyway.

Regards,
BJ

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] count(*) performance improvement ideas

2008-03-19 Thread Stephen Denne
Mark Mielke wrote:
 This returns to the question of whether count of the whole table is useful, 
 or whether 
 count of a GROUP BY or WHERE is useful.
 If GROUP BY or WHERE is useful, then trigger on UPDATE becomes necessary.

True... for the example I gave, I should have had an update trigger on my table.
I neglected it, as I based the example on a more complex multi-table example
from my application, where I know that the value I'm grouping by doesn't change.

 What is the direction here? Is it count of the whole table only? (-- not 
 interesting to me)
 Or count of more practical real life examples, which I completely agree with 
 Greg, 
 that this gets into the materialized view realm, and becomes very interesting.
 In my current db project, I never count all of the rows in a table. However, 
 I do use count(*) with GROUP BY and WHERE.

I'm trying to figure out how close I can come to a useful efficient 
materialized view with current production builds of postgresql,
and identifying areas where changes to postgresql could make it easier.

Currently I can see three areas of concern:

1) Turning it on
My query to initially populate the materialized view table takes 6 hours on my
(limited hardware) development system, which highlights the problem of when to
turn on the triggers.
An outage is one way to guarantee that there are neither missing details nor
double-counted details.
Would turning on the triggers and then running my initial population query in
the same transaction work?

2) Table bloat
I'm trying to group together a large number of +1 or -1 deltas into a single 
delta per transaction.
This creates as many dead rows as there are updates.
This is also a problem with Pavan's suggestion of maintaining a counter table.
They can all be HOT updates in 8.3, but they still all create new row versions.

Tom says "Collapsing a transaction's changes
into a single row would require keeping some intra-transaction state,
which is do-able at the C-code level but not from plpgsql."

So can I hope that this problem is solvable as a contrib module that will work
with at least 8.3?
I'd still want to write the plpgsql trigger function myself, as I know the
aggregation rules, and call contributed functions to integrate with the
collapsing of the transaction's changes into a single row. (I expect I'd need a
small number of rows per transaction, as I need to create delta rows for each
value of my grouping field involved in the transaction.)

3) How to incorporate the deltas.
With my technique, if the transaction rolls back, the delta record becomes a
dead row; if it commits, the delta becomes visible to transactions started
after that time.
I need to have a regular process run to sum and remove the deltas, rewriting
the summary rows.
I'd like to be able to write an after-commit trigger that fires after changes
in dependent tables that I identify, which can add the transaction's deltas to
the summary table. I would want it to effectively be a new, small transaction.

I think these three areas need to be addressed before even considering
writing code to automatically convert any given view into a materialized view.

2) and 3) could perhaps be implemented with a per-transaction map from
my_custom_key to a prepared statement and a list of parameter values.
Provide access to the values, making them updateable within the transaction,
and have the statement automatically executed on commit.

Pavan also refers to deferred triggers, which has got me thinking about another
possible solution:

Instead of inserting a delta row that will be updated a lot of times, create
an on-commit-drop temp table named after the txid and the grouping value (which
is an integer in my case; perhaps hash it if you're grouping by something that
doesn't easily convert to part of a table name), and
create an after-insert, initially deferred constraint trigger to call a function
which will move the 'at commit' values of the rows in the temp table to the
summary table.
The temp table will only have one row inserted, updated many times; then on
commit the trigger is fired once, and the temp table is dropped.

Does anyone think this will or won't work for some reason?

Regards,
Stephen Denne.




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

[HACKERS] diabolic state

2008-03-19 Thread Omar Bettin
I know you are in a C state but,
A string plus a NULL string is A string.
mandi TOM


Re: [PATCHES] [HACKERS] Text <-> C string

2008-03-19 Thread Tom Lane
Brendan Jurd [EMAIL PROTECTED] writes:
 One of the questions in the original patch submission was whether it
 would be worth changing all those DirectFunctionCall(textin) and
 (textout) calls to use the new functions.  Is it worthwhile avoiding
 the fmgr overhead?

I think that's worth doing just on notational clarity grounds.
The small cycle savings doesn't excite me, but understanding
DirectFunctionCall1(textin, CStringGetDatum(foo)) just involves
more different bits of trivia than cstring_to_text(foo).

 Last time I looked, the codebase had shifted quite a bit since I
 originally wrote the patch.  So it probably needs some work to apply
 cleanly on the latest sources anyway.

Yeah, with wide-impact patches like this you are always going to have
that problem.  One point though is that we don't have to improve every
call site at the same time.  I'd be inclined to put in the new functions
and hit some representative sample of utils/adt/ files in the first
commit, and then incrementally fix other stuff.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers