[HACKERS] Earlier suggestion to get gcov to work by adding $(CFLAGS) to module link line

2007-07-17 Thread Gregory Stark

Was there any consensus on this change? It or something like it is necessary
to get gcov to work for contrib modules. I think adding all of $(CFLAGS) is
the correct thing to do on Linux: if we're going to use $(CC) to link,
then we don't know which of $(CFLAGS) might be necessary at link time as
well. I think -pg will suffer from the same problem, for example.
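To make the failure mode concrete, here is a minimal sketch (illustrative only, not taken from the tree) of why coverage flags have to survive to the link step when gcc does the linking:

```make
# Illustrative sketch: objects compiled with gcov instrumentation
# contain references to __gcov_* symbols, so the same flags must
# appear on the link line, otherwise linking the .so fails with
# undefined references.
CFLAGS += -fprofile-arcs -ftest-coverage

%.so: %.o
	$(CC) $(CFLAGS) -shared -o $@ $<   # without $(CFLAGS): link error
```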



Index: Makefile.linux
===================================================================
RCS file: /home/stark/src/REPOSITORY/pgsql/src/makefiles/Makefile.linux,v
retrieving revision 1.22
diff -u -r1.22 Makefile.linux
--- Makefile.linux  9 Dec 2005 21:19:36 -   1.22
+++ Makefile.linux  17 Jul 2007 05:52:43 -
@@ -11,6 +11,6 @@
 endif
 
 %.so: %.o
-   $(CC) -shared -o $@ $<
+   $(CC) $(CFLAGS) -shared -o $@ $<
 
 sqlmansect = 7


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] SSPI authentication

2007-07-17 Thread Magnus Hagander
Stephen Frost wrote:
 * Magnus Hagander ([EMAIL PROTECTED]) wrote:
 The way this is handled in a number of other applications (putty being
 the one that comes to mind easily) is that two DLLs are built- one for
 SSPI and one for GSSAPI and you can easily switch between them on the
 client.  That'd work fine for us.
 Well, that you can do - you just need one libpq with sspi and one with
 gssapi.
 
 If both are made available then I think that'd work fine for us.  I'm
 concerned that the windows builds wouldn't include a version of libpq w/
 GSSAPI... 

The default build wouldn't. The binary build wouldn't. If by GSSAPI you
mean MIT linking: SSPI does GSSAPI *authentication* just fine.

One reason is that bringing in and configuring the MIT libraries is a
significant overhead.

Nothing would prevent you from building your own DLL with Kerberos linking.


 If I was confident that we could easily build it ourselves
 then I wouldn't care as much but, since I've never had to build libpq on
 Windows before, I'm not sure what effort is involved or what tools are
 required.  I'm also not thrilled by the prospect. :)

It's not hard, at least if you use MSVC to build it. It's harder with
MinGW, but far from impossible.


 I don't like the idea of having to rebuild things under Windows,
 honestly..  Not that I like to build anything these days...  If it's not
 enabled by default in some way I expect that it'd get 'forgotten'.
 Ok, so looking at it from the other direction, say we wanted to support
 both. Then we need to invent a new way for the client to tell libpq
 which one to use. I think that's sensible if it's a common thing, but I
 still see it as a *very* narrow use-case that needs both in the same DLL.
 Or do you have a better idea on how to solve that?
 
 Supporting both is actually exactly what Mozilla does...  Check out the
 'network.auth.use-sspi' flag in about:config.  It's also what KfW does with
 'Include Windows LSA cache' and 'Import windows credentials', which has
 the interesting option of 'only when principals match'.  I'm not sure if
 there's a sane way to test at run-time if KfW exists but its existence
 could be used as a factor. 

Not easily - it can certainly be done, but it requires a significant
change in both krb5 and gssapi auth codepaths.

Also, the fact that kfw exists in no way means that it's configured and
set up, so in itself that's not a good enough way to decide. I don't see
any way of doing that automatically.


 I have to admit that this does kind of make
 me wish a bit for a 'libpq config file' even though I'm generally against
 such things.  Having the same easy switch as we do w/ Mozilla would be
 really nice.

So what we'd need in that case is a new libpq connectionstring
parameter. Which can be done, but it'd require that all frontends that
use libpq add support for it - such as pgadmin. I'm not sure if the ODBC
driver will support arbitrary arguments, otherwise that one needs it too.

As I'm sure you can tell, I'm far from convinced this is a good idea ;-)
Anybody else want to comment on this?

//Magnus



Re: [HACKERS] Altering a plan

2007-07-17 Thread Warren Turkal
On Monday 16 July 2007 22:32:07 Shruthi A wrote:
   Please reply soon, this is an emergency..

This may be obvious, but if you need a quick reply you may want to look
into commercial support.  Check out [1].

[1]http://www.postgresql.org/support/professional_support

wt
-- 
Warren Turkal (w00t)

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] SSPI authentication

2007-07-17 Thread Stephen Frost
* Magnus Hagander ([EMAIL PROTECTED]) wrote:
 Stephen Frost wrote:
  If both are made available then I think that'd work fine for us.  I'm
  concerned that the windows builds wouldn't include a version of libpq w/
  GSSAPI... 
 
 The default build wouldn't. The binary build wouldn't. If by GSSAPI you
 mean MIT linking: SSPI does GSSAPI *authentication* just fine.

I don't think SSPI supports having separate credential caches, a
different default realm, etc...
(and I'm not sure how you'd set them up even if it did).

 One reason is that bringing in and configuring the MIT libraries is a
 significant overhead.

Erm, isn't this what's done now?  Are we actually overloaded in some way
on the buildds?  Would this actually be a measurable reduction in the
overhead of the buildds?  I find this argument less than convincing as a
reason for dropping existing functionality...

 Nothing would prevent you from building your own DLL with Kerberos linking.

Except when it breaks because it's not being tested in the build
system... :/  I expect there are other such things in the same situation
but I'm rather unhappy that it's something which is actually going to
impact people (at the least me) as opposed to GNU readline on some
esoteric architecture.

  If I was confident that we could easily build it ourselves
  then I wouldn't care as much but, since I've never had to build libpq on
  Windows before, I'm not sure what effort is involved or what tools are
  required.  I'm also not thrilled by the prospect. :)
 
 It's not hard, at least if you use MSVC to build it. It's harder with
 MinGW, but far from impossible.

MSVC would be a rather unhappy requirement.  Do we have buildds running
with MinGW?  Setting up buildds is documented, etc, no?  I don't know
if I could dedicate a machine to it but at least if I can build my own
buildd setup using the scripts and whatnot it might not be too bad..

  I have to admit that this does kind of make
  me wish a bit for a 'libpq config file' even though I'm generally against
  such things.  Having the same easy switch as we do w/ Mozilla would be
  really nice.
 
 So what we'd need in that case is a new libpq connectionstring
 parameter. Which can be done, but it'd require that all frontends that
 use libpq add support for it - such as pgadmin. I'm not sure if the ODBC
 driver will support arbitrary arguments, otherwise that one needs it too.

If the ODBC driver doesn't support changes to the connectionstring (and
I think it does, actually), that'd probably be a sensible thing to add
anyway.  Having to have what's essentially a library-config option
handled by all the clients does kind of suck though.

 As I'm sure you can tell, I'm far from convinced this is a good idea ;-)

It's also not exactly unheard of.  I'm pretty sure what Mozilla does is
basically just dlopen() the appropriate library.  I'm not sure if it
even has an internal set of DLLs which link to the SSPI/GSSAPI DLLs
explicitly.  If it does we might be able to swipe it.  Sorry for my lack
of familiarity, but does SSPI provide a GSSAPI identical to the MIT one?
For some reason I was thinking it did (hence why the DLL magic just
works, though there could be more going on in those DLLs), in which case
I'm not even sure you'd need the MIT stuff available to compile with
support for it?

 Anybody else want to comment on this?

I've always been rather unhappy at the apparent lack of user participation
on this list. :/  I don't mean to imply that I speak for the silent
majority, just that it's frustrating when trying to gauge the impact
of changes.

Thanks,

Stephen


signature.asc
Description: Digital signature


[HACKERS] plpgsql TABLE patch

2007-07-17 Thread Neil Conway
To review, Pavel Stehule submitted a proposal and patch to add support
for table functions a few months back:

http://archives.postgresql.org/pgsql-hackers/2007-02/msg00318.php
http://archives.postgresql.org/pgsql-patches/2007-05/msg00054.php

Pavel proposed two basically independent features:

(1) RETURN TABLE syntax sugar for PL/PgSQL

This allows you to return the result of evaluating a SELECT query as the
result of a SETOF pl/pgsql function. I don't like the RETURN TABLE
syntax, because TABLE (...) is defined as part of SQL (6.39 in SQL:2003,
as one of the variants of multiset value constructor). If we're going
to implement TABLE (...), the right place to do that is in the Postgres
backend proper (presumably as part of a larger effort to implement
multisets). Therefore I'd like to rename the PL/PgSQL syntax sugar to
RETURN QUERY (I'm open to other suggestions for the name).

Another question is whether it is sensible to allow RETURN QUERY and
RETURN NEXT to be combined in a single function. That is, whether RETURN
QUERY should be more like RETURN (and return from the function
immediately), or more like RETURN NEXT (just append a result set to the
SRF's tuplestore and continue evaluating the function). I think making
it behave more like RETURN NEXT would be more flexible, but perhaps it
would be confusing for users to see a RETURN QUERY statement that does
not in fact return control to the caller of the function... (Is RETURN
NEXT QUERY too ugly a name?)
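For what it's worth, a sketch of how the proposed RETURN QUERY might look in practice (table and function names are mine, and the comments follow the RETURN NEXT-like reading discussed above):

```sql
CREATE FUNCTION recent_orders(since date) RETURNS SETOF orders AS $$
BEGIN
    -- Appends the query's whole result set to the SRF's tuplestore.
    RETURN QUERY SELECT * FROM orders WHERE ordered_on >= since;
    -- Under the RETURN NEXT-like semantics, control continues here,
    -- so more rows could still be appended before the final return.
END;
$$ LANGUAGE plpgsql;
```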

(2) RETURNS TABLE (...) syntax sugar for CREATE FUNCTION

This lets you write CREATE FUNCTION ... RETURNS TABLE (x int, y int)
as essentially syntax sugar for OUT parameters. The syntax is specified
by SQL:2003, so I think this feature is worth implementing.

When Pavel proposed this, the sticking point was whether RETURNS TABLE
(...) is truly just syntax sugar for OUT parameters, or whether it
should behave differently with regard to variables with the same name in
the function body:[1]

CREATE OR REPLACE FUNCTION foo(arg int) RETURNS TABLE (cust_id int) AS
$$
BEGIN
  RETURN QUERY (SELECT cust_id FROM tab WHERE some = arg);
END; $$ LANGUAGE plpgsql;

would cause a name collision if RETURNS TABLE were treated as syntax
sugar for OUT parameters. Pavel's patch fixes this by introducing a new
proargmode for RETURNS TABLE parameters. Tom objected to this on the
grounds that it could break user code that examines pg_proc.proargmode,
but I'm inclined to think that it is worth the trouble to avoid what
could be a common source of confusion.

Comments welcome; I'll submit revised patches for these features
shortly.

-Neil

[1] example stolen shamelessly from a prior mail from Pavel



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] minor compiler warning on OpenBSD

2007-07-17 Thread Michael Meskes
On Mon, Jul 16, 2007 at 12:02:18PM -0400, Tom Lane wrote:
 No, ecpg is the only one producing warnings for me.  What flex version
 do you use?

2.5.33

 What I get with flex 2.5.4 is
 
 pgc.c: In function `base_yylex':
 pgc.c:1564: warning: label `find_rule' defined but not used
 preproc.y: At top level:
 pgc.c:3818: warning: `yy_flex_realloc' defined but not used

These don't appear with my flex version.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!



Re: [HACKERS] minor compiler warning on OpenBSD

2007-07-17 Thread Michael Meskes
On Mon, Jul 16, 2007 at 06:09:47PM +0200, Stefan Kaltenbrunner wrote:
 I think Michael is refering to:

 In file included from bootparse.y:380:
 bootscanner.c:1855: warning: no previous prototype for 
 ‘boot_yyget_lineno’
 ...

Right, I was talking about these messages.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] SSPI authentication

2007-07-17 Thread Dave Page
Magnus Hagander wrote:
 So what we'd need in that case is a new libpq connectionstring
 parameter. Which can be done, but it'd require that all frontends that
 use libpq add support for it - such as pgadmin. I'm not sure if the ODBC
 driver will support arbitrary arguments, otherwise that one needs it too.
 
 As I'm sure you can tell, I'm far from convinced this is a good idea ;-)
 Anybody else want to comment on this?

The ODBC driver would need modification (as would pgAdmin, of course).
What's more of a concern is that we already have ODBC connection strings
that can be too long; adding yet another option will make that worse, of
course.

Regards, Dave

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Altering a plan

2007-07-17 Thread Heikki Linnakangas
Shruthi A wrote:
  I want to take a plan generated by the postgres optimizer and insert a
  constant in place of another constant in the plan. There is a function
  OidOutputFunctionCall( ) to get the constant. Similarly, is there any
  function to set the value of the constant?   Also what does
  OidInputFunctionCall( ) do?

Why?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Updated tsearch documentation

2007-07-17 Thread Oleg Bartunov

On Tue, 17 Jul 2007, Bruce Momjian wrote:


I think the tsearch documentation is nearing completion:

http://momjian.us/expire/fulltext/HTML/textsearch.html

but I am not happy with how tsearch is enabled in a user table:

http://momjian.us/expire/fulltext/HTML/textsearch-app-tutorial.html

Aside from the fact that it needs more examples, it only illustrates an
example where someone creates a table, populates it, then adds a
tsvector column, populates that, then creates an index.

That seems quite inflexible.  Is there a way to avoid having a separate
tsvector column?  What happens if the table is dynamic?  How is that
column updated based on table changes?  Triggers?  Where are the
examples?  Can you create an index like this:


I agree that there could be more examples, but text search doesn't
require anything special!
An *example* of a trigger function is documented at
http://momjian.us/expire/fulltext/HTML/textsearch-opfunc.html





CREATE INDEX textsearch_id ON pgweb USING gin(to_tsvector(column));

That avoids having to have a separate column because you can just say:

WHERE to_query('XXX') @@ to_tsvector(column)


yes, it's possible, but without ranking, since currently it's impossible
to store any additional information in the index (it's a pg limitation). btw,
this should work for a GiST index also.

That kind of search is useful if there is another natural ordering of search
results, for example, by timestamp.




How do we make sure that the to_query is using the same text search
configuration as the 'column' or index?  Perhaps we should suggest:


please keep in mind that it's not mandatory to use the same configuration
at search time as was used at index creation.



 CREATE INDEX textsearch_idx ON pgweb USING gin(to_tsvector('english',column));

so that at least the configuration is documented in the index.


yes, it's better to always explicitly specify the configuration name and not
rely on the default configuration.
Unfortunately, the configuration name isn't saved in the index.
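A hedged sketch of that suggestion end to end ('body' stands in for the column used in the examples above); the point is simply that the configuration is spelled out in both places, so index and query are guaranteed to agree:

```sql
CREATE INDEX textsearch_idx ON pgweb
    USING gin (to_tsvector('english', body));

SELECT title
  FROM pgweb
 WHERE to_tsvector('english', body) @@ to_tsquery('english', 'index & scan');
```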


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



Re: [HACKERS] Altering a plan

2007-07-17 Thread Heikki Linnakangas
Please keep the list cc'd.

Shruthi A wrote:
 On 7/17/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
 Shruthi A wrote:
   I want to take a plan generated by the postgres optimizer and insert
 a
   constant in place of another constant in the plan. There is a
 function
   OidOutputFunctionCall( ) to get the constant. Similarly, is there
 any
   function to set the value of the constant?   Also what does
   OidInputFunctionCall( ) do?

 Why?

 Actually i'm trying to write a function where the plan which is optimal for
 one query is enforced for another query (and the 2 queries differ only in a
 constant value of a predicate).

How about using a parameter instead of a constant?

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Updated tsearch documentation

2007-07-17 Thread Oleg Bartunov

On Tue, 17 Jul 2007, Oleg Bartunov wrote:


On Tue, 17 Jul 2007, Bruce Momjian wrote:


I think the tsearch documentation is nearing completion:

http://momjian.us/expire/fulltext/HTML/textsearch.html

but I am not happy with how tsearch is enabled in a user table:

http://momjian.us/expire/fulltext/HTML/textsearch-app-tutorial.html

Aside from the fact that it needs more examples, it only illustrates an
example where someone creates a table, populates it, then adds a
tsvector column, populates that, then creates an index.

That seems quite inflexible.  Is there a way to avoid having a separate
tsvector column?  What happens if the table is dynamic?  How is that
column updated based on table changes?  Triggers?  Where are the
examples?  Can you create an index like this:


I agree that there could be more examples, but text search doesn't
require anything special!
An *example* of a trigger function is documented at
http://momjian.us/expire/fulltext/HTML/textsearch-opfunc.html




Bruce,

below is an example of a trigger for insert/update of the example table

create function pgweb_update() returns trigger as
$$
BEGIN
   NEW.textsearch_index :=
      setweight(to_tsvector(coalesce(NEW.title, '')), 'A') ||
      setweight(to_tsvector(coalesce(NEW.body, '')), 'D');
   RETURN NEW;
END;
$$
language plpgsql;


CREATE TRIGGER fts_update BEFORE INSERT OR UPDATE ON pgweb
FOR EACH ROW EXECUTE PROCEDURE pgweb_update();






CREATE INDEX textsearch_id ON pgweb USING gin(to_tsvector(column));

That avoids having to have a separate column because you can just say:

WHERE to_query('XXX') @@ to_tsvector(column)


yes, it's possible, but without ranking, since currently it's impossible to
store any additional information in the index (it's a pg limitation). btw,
this should work for a GiST index also.

That kind of search is useful if there is another natural ordering of search
results, for example, by timestamp.




How do we make sure that the to_query is using the same text search
configuration as the 'column' or index?  Perhaps we should suggest:


please keep in mind that it's not mandatory to use the same configuration
at search time as was used at index creation.



one example is when the text search index was created without taking into
account stop words. Then you could search for the famous 'to be or not to be'
with the same configuration, or ignore stop words with another.




 CREATE INDEX textsearch_idx ON pgweb USING gin(to_tsvector('english',column));


so that at least the configuration is documented in the index.


yes, it's better to always explicitly specify the configuration name and not
rely on the default configuration. Unfortunately, the configuration name isn't
saved in the index.




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Straightforward changes for increased SMP scalability

2007-07-17 Thread Zeugswetter Andreas ADI SD

 The NUM_BUFFER_PARTITIONS patch is fairly simple. We've 
 noticed gains with NUM_BUFFER_PARTITIONS set between 256 and 
 2048, but little to no gain after 2048, although this might 
 depend on the benchmark and platform being used. We've 

Might this also be a padding issue? 2048 partitions seems mighty
high; other DBs seem to cope well with a max of 64 partitions.

Andreas



Re: [HACKERS] plpgsql FOR loop doesn't guard against strange step values

2007-07-17 Thread Tom Lane
Jaime Casanova [EMAIL PROTECTED] writes:
 http://archives.postgresql.org/pgsql-committers/2007-07/msg00142.php
 at least the part that prevents overflow and probably the one that
 reject zero in BY are clearly bugs and should be backpatched to 8.2,
 aren't they?

Well, it's a behavioral change, so given the lack of complaints from the
field I'm inclined not to back-patch.

regards, tom lane



Re: [HACKERS] Earlier suggestion to get gcov to work by adding $(CFLAGS) to module link line

2007-07-17 Thread Tom Lane
Gregory Stark [EMAIL PROTECTED] writes:
 Was there any consensus on this change?

The implicit .so rule sucks on nearly every port, not only Linux.
We should be getting rid of the things in favor of using the much more
complete rules in Makefile.shlib.

regards, tom lane



[HACKERS] Backend memory growing too much

2007-07-17 Thread ohp
Hi everyone,

I've been using sqlgrey for some time now and I'm very surprised by the
memory taken by the backends to which sqlgrey is connected.

Look at processes 4111 and 28108 in the ps listing below. They take roughly
twice the space the other backends take.

Could there be a memory leak?

I don't know much about DBI/DBD but I know sqlgrey uses a lot of
prepare/prepare_cached statements.

What could cause this?

shared_buffer is 400MB here.

TIA

-- 
Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)
15, Chemin des Monges+33-5-61-50-97-01 (Fax)
31190 AUTERIVE   +33-6-07-63-80-64 (GSM)
FRANCE  Email: [EMAIL PROTECTED]
--
Make your life a dream, make your dream a reality. (St Exupery)

 2 S postgres  1610  1604   TS  80 20  0 e6c19540 107480 e6a773cc   jun 29 ?
   568:26 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres  2441  1604   TS  80 20  0 e2f17fe0 107714 e33685b0   jun 29 ?
0:13 /databases/pgsql-v8.2/bin/postgres 
42 S postgres  1604 1   TS  80 20  0 e6c3dfc0 107418 e69886ec   jun 29 ?
8:18 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres  1611  1604   TS  80 20  0 e6c88500   1737 e6a7723c   jun 29 ?
   136:30 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres  2124  1604   TS  80 20  0 e2f17540 108188 e690a230   jun 29 ?
0:01 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres  4111  1604   TS  80 20  0 ef1db500 256516 f73eb330   jul 15 ?
2:28 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 28673  1604   TS  80 20  0 d04aa540 107713 e7efc250 15:10:52 ?
0:00 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 28499  1604   TS  80 20  0 d9c7d500 107715 e32e0af0 15:09:31 ?
0:00 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 28108  1604   TS  80 20  0 d5171a60 223750 e7efb2c0 23:58:49 ?
2:18 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 28256  1604   TS  80 20  0 d1f5daa0 107713 de7f1480 15:06:26 ?
0:00 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 26463  1604   TS  80 20  0 df639fc0 107715 de52f1d0 14:40:09 ?
0:01 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 26464  1604   TS  80 20  0 d9c7b520 107711 e9acc9f0 14:40:09 ?
0:00 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 27972  1604   TS  80 20  0 e6bcdfe0 107711 e6a75dd0 15:01:10 ?
0:00 /databases/pgsql-v8.2/bin/postgres 
 2 S postgres 28720  1604   TS  80 20  0 dea14540 107713 e34c9c80 15:11:22 ?
0:00 /databases/pgsql-v8.2/bin/postgres 



Re: [HACKERS] Invalid to_date patterns (was: [PATCHES] [GENERAL] ISO week dates)

2007-07-17 Thread Brendan Jurd

On 4/3/07, Bruce Momjian [EMAIL PROTECTED] wrote:


Because this patch was not completed, I have added it to the TODO list:

* Fix to_date()-related functions to consistently issue errors

  http://archives.postgresql.org/pgsql-hackers/2007-02/msg00915.php


I'm now taking another run at this issue.  Here's what I've got in mind.

There are three distinct conventions for specifying a date that we
consider in Postgres.  These are
* Julian day,
* ISO week date, and
* Standard Gregorian.

Within an ISO week date, you can identify a date using either
* year, week and day-of-week, or
* year and day-of-year.

Likewise within a Gregorian date, you can identify a date using
* year, month and day-of-month,
* year, month, week-of-month and day-of-week (extremely weird, but there it is)
* year, week, and day-of-week, or
* year and day-of-year.

Chad Wagner mentioned that Oracle will allow a combination of Julian
and Gregorian formats so long as both formats yield the same date.  If
we're going to stick with the theme of imitating Oracle, I propose the
following:

* No mixing of Gregorian and ISO fields permitted.  If the format
string contains both Gregorian and ISO normative fields in any
sequence or combination, we throw an ERRCODE_INVALID_DATETIME_FORMAT
and reject the query.
* Either Gregorian or ISO format strings may include a Julian date
field, as long as the results are in agreement.  If the results
disagree, we reject the query.
* Purely non-normative fields (like Q) are completely and silently
disregarded.
* A Gregorian or ISO format may be over-constraining as long as all
values are in agreement.  If there are any conflicts we reject the
query.

So, for example, we would reject something like 'YYYY-IDDD' out of
hand because it combines the ISO and Gregorian conventions, making it
impossible to ascertain what the user really wants to do.

We would allow 'YYYY-MM-DD J' as long as the result for the YYYY-MM-DD
part matches the result for the J part.

We would also allow something like 'YYYY-MM-DD D' as long as the results
of YYYY-MM-DD and D matched.  So to_date('2007-07-18 4', 'YYYY-MM-DD
D') would successfully return the date 18 July 2007, but if you tried
to_date('2007-07-18 5', 'YYYY-MM-DD D') you would get an error.
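Under the rules above, the behaviour would look roughly like this (a sketch of the intended behaviour, not of any existing implementation):

```sql
-- Over-constrained but consistent: 18 July 2007 is a Wednesday (D = 4),
-- so this would succeed and return 2007-07-18.
SELECT to_date('2007-07-18 4', 'YYYY-MM-DD D');

-- Over-constrained and inconsistent: D = 5 contradicts the date,
-- so this would raise ERRCODE_INVALID_DATETIME_FORMAT.
SELECT to_date('2007-07-18 5', 'YYYY-MM-DD D');

-- Mixing Gregorian and ISO fields would be rejected out of hand.
SELECT to_date('2007-200', 'YYYY-IDDD');
```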

If there are no objections I'd be happy to cook a patch up.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Backend memory growing too much

2007-07-17 Thread Tom Lane
[EMAIL PROTECTED] writes:
 I don't know much about DBI/DBD but I know sqlgrey uses a lot of
 prepare/prepare_cached statements.

Well, those aren't exactly free.

Possibly you could learn something about it by attaching to one of
these backends with gdb and executing

call MemoryContextStats(TopMemoryContext)

This will dump a memory map to stderr.
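For the archives, the session looks roughly like this (the PID is taken from the ps listing in the original mail; exact gdb prompts may differ):

```
$ gdb -p 4111                  # attach to the suspect backend
(gdb) call MemoryContextStats(TopMemoryContext)
(gdb) detach
(gdb) quit
```

Note that the per-context totals go to the backend's stderr (i.e. the server log), not to the gdb terminal.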

regards, tom lane



Re: [HACKERS] Invalid to_date patterns (was: [PATCHES] [GENERAL] ISO week dates)

2007-07-17 Thread Tom Lane
Brendan Jurd [EMAIL PROTECTED] writes:
 * Fix to_date()-related functions to consistently issue errors
 http://archives.postgresql.org/pgsql-hackers/2007-02/msg00915.php

 I'm now taking another run at this issue.  Here's what I've got in mind.

This is all good but I think that self-inconsistent format strings are
not really the main source of to_date problems.  Most of the complaints
I've seen arise from to_date plowing ahead to deliver a ridiculous
answer when the input data string doesn't match the format.  I'd like to
see the code try a little harder to validate the input data.

regards, tom lane



Re: [HACKERS] write_pipe_chunks patch messes up early error message output

2007-07-17 Thread Andrew Dunstan



Tom Lane wrote:
 Andrew Dunstan [EMAIL PROTECTED] writes:
  Tom Lane wrote:
   I think offhand that the correct semantics of the flag are we have
   redirected our original stderr into a pipe for syslogger,

  We could expose syslogger's redirection_done flag, which I think has the
  semantics you want.

 Yeah, that would work.  You'd have to get rid of the current ad-hoc
 method by which it is propagated to the syslogger child process
 (EXEC_BACKEND case), because now it will have to be propagated to all
 children; so postmaster.c should handle it in BackendParameters.

The problem with this as it stands is that the syslogger itself is
forked before the redirection is done. I guess we need to make sure the
syslogger itself never calls write_pipe_chunks() - which makes sense
anyway - it should probably call write_syslogger_file() directly, I think.


cheers

andrew



Re: [HACKERS] compiler warnings on the buildfarm

2007-07-17 Thread Zdenek Kotala

Stefan Kaltenbrunner wrote:

Zdenek Kotala wrote:

Stefan Kaltenbrunner wrote:

Zdenek Kotala wrote:

Stefan Kaltenbrunner wrote:

Zdenek Kotala wrote:

For sun studio -erroff=E_STATEMENT_NOT_REACHED is useful there. If you
want to determine warning tags for each warning add -errtags.

Is that supported on all versions of sun studio(Sun WorkShop 6, Sun
Studio 8,11) we have on the farm ?

Yes. Also on SS12.

hmm - sure about that ? I was about to submit a patch to disable some
compiler warnings but then I noted the following discussion thread:

http://forum.java.sun.com/thread.jspa?threadID=5163903&messageID=9637391

which seems to indicate that at least the compiler installed on kudu:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kudu&dt=2007-07-15%2003:30:01


does NOT support turning off specific warnings.


I tested it on cc version 5.3 and it works. See


ah cool - thanks for testing!

so on my box we would need to add
-erroff=E_EMPTY_TRANSLATION_UNIT,E_STATEMENT_NOT_REACHED,E_END_OF_LOOP_CODE_NOT_REACHED,E_FUNC_HAS_NO_RETURN_STMT,E_LOOP_NOT_ENTERED_AT_TOP

to CFLAGS to get down to the following 2 warnings:

pgstat.c, line 652: warning: const object should have initializer:
all_zeroes (E_CONST_OBJ_SHOULD_HAVE_INITIZR)
pgstat.c, line 2118: warning: const object should have initializer:
all_zeroes (E_CONST_OBJ_SHOULD_HAVE_INITIZR)

the open question is whether that is what we want or if we would simply be
adding (unnecessary) complexity (or confusion).

comments ?


E_STATEMENT_NOT_REACHED, E_END_OF_LOOP_CODE_NOT_REACHED and E_EMPTY_TRANSLATION_UNIT 
are probably ok to ignore.
E_FUNC_HAS_NO_RETURN_STMT is there because main() is left via exit() instead of 
return. In any other case it should be treated as a regular warning.



I think a good solution is to compare the warning log of the previous build with 
the latest one and make a diff. If a new warning appears it is probably a real warning.


Zdenek



Re: [HACKERS] Backend memory growing too much

2007-07-17 Thread Andrew Dunstan



[EMAIL PROTECTED] wrote:


I don't know much about DBI/DBD but I know sqlgrey uses a lot of
prepare/prepare_cached statements.

  


You can inhibit DBD::Pg from using server-side prepares if you need to, 
by executing:


 $dbh->{pg_server_prepare} = 0;

(as documented in the excellent DBD::Pg docs).

cheers

andrew



Re: [HACKERS] write_pipe_chunks patch messes up early error message output

2007-07-17 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Yeah, that would work.  You'd have to get rid of the current ad-hoc
 method by which it is propagated to the syslogger child process
 (EXEC_BACKEND case), because now it will have to be propagated to all
 children; so postmaster.c should handle it in BackendParameters.

 The problem with this as it stands is that the syslogger itself is 
 forked before the redirection is done.

Which is entirely correct.  Re-read what I said about first launch vs
relaunch of the syslogger.  Its stderr will be connected differently in
the two cases, and should be handled differently --- we want the first
launch to try to report problems on its own stderr, but there's no point
after a relaunch.  That's why we pass down redirection_done to it.

regards, tom lane



Re: [HACKERS] compiler warnings on the buildfarm

2007-07-17 Thread Tom Lane
Zdenek Kotala [EMAIL PROTECTED] writes:
 E_FUNC_HAS_NO_RETURN_STMT is there because main() is left via exit() instead of 
 return. In any other case it should be treated as a regular warning.

That should be gone now; I changed the two places that triggered it.
I'd suggest not disabling that warning.

regards, tom lane



Re: [HACKERS] write_pipe_chunks patch messes up early error message output

2007-07-17 Thread Andrew Dunstan



Tom Lane wrote:

Andrew Dunstan [EMAIL PROTECTED] writes:
  

Tom Lane wrote:


Yeah, that would work.  You'd have to get rid of the current ad-hoc
method by which it is propagated to the syslogger child process
(EXEC_BACKEND case), because now it will have to be propagated to all
children; so postmaster.c should handle it in BackendParameters.
  


  
The problem with this as it stands is that the syslogger itself is 
forked before the redirection is done.



Which is entirely correct.  Re-read what I said about first launch vs
relaunch of the syslogger.  Its stderr will be connected differently in
the two cases, and should be handled differently --- we want the first
launch to try to report problems on its own stderr, but there's no point
after a relaunch.  That's why we pass down redirection_done to it.


  


What I was trying to nut out was how to handle logging from the first 
launched syslogger after redirection is done.


Or, looking at it another way, why would we ever want the syslogger to 
use the chunking protocol at all?


cheers

andrew



Re: [HACKERS] compiler warnings on the buildfarm

2007-07-17 Thread Zdenek Kotala

Tom Lane napsal(a):

Zdenek Kotala [EMAIL PROTECTED] writes:
E_FUNC_HAS_NO_RETURN_STMT is there because main() is left via exit() instead of 
return. In any other case it should be treated as a regular warning.


That should be gone now; I changed the two places that triggered it.
I'd suggest not disabling that warning.


Yes I agree. Did you also clean up on old branches? If not I think we can live 
with this warning on old branches.



Zdenek



Re: [HACKERS] compiler warnings on the buildfarm

2007-07-17 Thread Tom Lane
Zdenek Kotala [EMAIL PROTECTED] writes:
 Tom Lane napsal(a):
 That should be gone now; I changed the two places that triggered it.
 I'd suggest not disabling that warning.

 Yes I agree. Did you also clean up on old branches?

No, I'm not interested in doing that kind of fiddling on old branches.
I think we only care about warnings in HEAD.  (Unless an actual bug is
exposed, of course, in which case we'd back-patch the fix as appropriate.)

regards, tom lane



Re: [HACKERS] write_pipe_chunks patch messes up early error message output

2007-07-17 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Or, looking at it another way, why would we ever want the syslogger to 
 use the chunking protocol at all?

Ah, I misunderstood you.  Yeah, I think you are right: if we are
special-casing the syslogger process anyway, then it need only have
these two behaviors:

not redirection_done: write to own stderr (not chunked) and directly to
file

redirection_done: write directly to file

One thing to watch out for is infinite recursion if the write-to-file
gets an error.  I don't remember if we have a defense against that
in there now, but we probably should.
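[Editor's sketch: the two behaviors plus the recursion defense, in C. Purely illustrative — `write_syslogger_file` is a real routine mentioned in this thread, but the variable names, the guard, and the logging wrapper here are simplified stand-ins, not the actual patch.]

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool redirection_done = false;   /* propagated from the postmaster */
static bool in_error_recursion = false; /* defense against recursive failure */

/* Hypothetical stand-in for the syslogger's write-to-logfile routine. */
static int
write_syslogger_file(const char *msg)
{
    /* pretend the write succeeded; the real code writes to the log file */
    return (int) strlen(msg);
}

static void
syslogger_log(const char *msg)
{
    if (in_error_recursion)
        return;                 /* a failed write must not re-enter logging */

    if (!redirection_done)
        fputs(msg, stderr);     /* first launch: own stderr is still useful */

    in_error_recursion = true;
    if (write_syslogger_file(msg) < 0 && !redirection_done)
        fputs("could not write to log file\n", stderr);
    in_error_recursion = false;
}
```

Note that neither case goes through the chunking protocol, matching the observation above that the syslogger never needs it.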

regards, tom lane



Re: [HACKERS] SSPI authentication

2007-07-17 Thread Magnus Hagander
Stephen Frost wrote:
 * Magnus Hagander ([EMAIL PROTECTED]) wrote:
 Stephen Frost wrote:
 If both are made available then I think that'd work fine for us.  I'm
 concerned that the windows builds wouldn't include a version of libpq w/
 GSSAPI... 
 The default build wouldn't. The binary build wouldn't. If you by GSSAPI
 mean MIT linking - SSPI does GSSAPI *authentication* just fine.
 
 I don't think SSPI supports having separate credential caches, a
 different default realm, etc...
 (and I'm not sure how you'd set them up even if it did).

No, it does not. But that in itself has nothing to do with GSSAPI - your
requirement is something different. That's not saying we should ignore
your requirement :-)


 One reason is that bringing in and configuring the MIT libraries is a
 significant overhead.
 
 Erm, isn't this what's done now?

Except you don't need to configure it unless you use it, but yes, it's
an overhead we have now. That I would very much like to get rid of.

  Are we actually overloaded in some way
 on the buildds?  Would this actually be a measurable reduction in the
 overhead of the buildds?  I find this argument less than convincing
 reasoning for dropping existing functionality...

Yes. It's a pain to get the kerberos stuff set up :-(


 Nothing would prevent you from building your own DLL with Kerberos linking.
 
 Except when it breaks because it's not being tested in the build
 system... :/  I expect there are other such things in the same situation
 but I'm rather unhappy that it's something which is actually going to
 impact people (at the least me) as opposed to GNU readline on some
 esoteric architecture.

Say what?
You'd still get them tested on any BF member that configures it. Just
like now - you won't get it tested unless the BF member is specifically
configured to get it. The only difference is that the SSPI code *would*
get tested even if you don't specifically configure it.

We can easily make sure that at we have BF coverage of the feature. It's
also the same codepath that would be used on Unix, so you'd get that
coverage as well - not complete, but a good part of the way.


 If I was confident that we could easily build it ourselves
 then I wouldn't care as much but, since I've never had to build libpq on
 Windows before, I'm not sure what effort is involved or what tools are
 required.  I'm also not thrilled by the prospect. :)
 It's not hard, at least if you use MSVC to build it. It's harder with
 MingW, but far from impossible.
 
 MSVC would be a rather unhappy requirement. 

Why? But again, you can do mingw if you want to.

 Do we have buildds running
 with MingW?  Setting up buildds is documented, etc, no?  I don't know
 if I could dedicate a machine to it but at least if I can build my own
 buildd setup using the scripts and whatnot it might not be too bad..

I have no idea even what buildds is, so I can't comment on if it works
with mingw :-)


 I have to admit that this does kind of make
 me wish a bit for a 'libpq config file' even though I'm generally against
 such things.  Having the same easy switch as we do w/ Mozilla would be
 really nice.
 So what we'd need in that case is a new libpq connectionstring
 parameter. Which can be done, but it'd require that all frontends that
 use libpq add support for it - such as pgadmin. I'm not sure if the ODBC
 driver will support arbitrary arguments, otherwise that one needs it too.
 
 If the ODBC driver doesn't support changes to the connectionstring (and
 I think it does, actually), that'd probably be a sensible thing to add
 anyway.  Having to have what's essentially a library-config option
 handled by all the clients does kind of suck though.

The only other option I can think of is an environment variable, which
seems very un-windowsy.  But we could have something like: set the
environment variable PG_GSSAPI=mit and then attempt to dynamically load
the kerberos libraries. Specifically for win32. It's a bit of a kludge,
and it'll certainly add more code, but it's not *hard* to do.
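[Editor's sketch: one way the PG_GSSAPI=mit dispatch could look on the client side. The variable name comes from the mail above; everything else (function names, stub backends) is hypothetical, and a real build would LoadLibrary()/dlopen() the MIT DLLs rather than call stubs.]

```c
#include <stdlib.h>
#include <string.h>

/* Stubs standing in for the two possible implementations. */
static int sspi_authenticate(void)   { return 1; }
static int gssapi_authenticate(void) { return 2; }

/*
 * Pick the GSSAPI backend at runtime.  The real version would load the
 * MIT Kerberos libraries dynamically only when requested; here the
 * choice is reduced to a function-pointer dispatch on the environment.
 */
static int (*select_gss_backend(void))(void)
{
    const char *choice = getenv("PG_GSSAPI");   /* name from the proposal */

    if (choice != NULL && strcmp(choice, "mit") == 0)
        return gssapi_authenticate;
    return sspi_authenticate;   /* default on Windows */
}
```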

It will keep the overhead for the builder of the distribution (hello,
Dave) since it will still require the headers to be present on the build
machine, but not for end-users that don't want it (assuming we stop
shipping the MIT DLLs in the package, which I'd like to do).

Also, remember that there is no actual testing of it  on the BF. We
don't test the functionality today on the BF, but at least we detect
link-time errors ;-)

 As I'm sure you can tell, I'm far from convinced this is a good idea ;-)
 
 It's also not exactly unheard of.  I'm pretty sure what mozilla does is
 basically just dlopen() the appropriate library.  I'm not sure if it's
 even got an internal set of dlls which link to the sspi/gssapi dlls
 explicitly.  If it does we might be able to swipe it.  Sorry for my lack
 of familiarity, but does SSPI provide a GSSAPI identical to the MIT one?

GSSAPI protocol, yes.
GSSAPI API, no.


 For some reason I was thinking it did (hence why 

Re: [HACKERS] SSPI authentication

2007-07-17 Thread Magnus Hagander
Dave Page wrote:
 Magnus Hagander wrote:
 So what we'd need in that case is a new libpq connectionstring
 parameter. Which can be done, but it'd require that all frontends that
 use libpq add support for it - such as pgadmin. I'm not sure if the ODBC
 driver will support arbitrary arguments, otherwise that one needs it too.

 As I'm sure you can tell, I'm far from convinced this is a good idea ;-)
 Anybody else want to comment on this?
 
 The ODBC driver would need modification (as would pgAdmin of course).
 Whats more of a concern is that we already have ODBC connection strings
 that can be too long - adding yet another option will make that worse of
 course.

Interesting, didn't know that. That makes that option even less interesting.

Can you comment on if the current ODBC driver will pick up GSSAPI
authentication from libpq or if it needs new code to deal with it? I
never quite figured out how they integrate with libpq for the
authentication part since it moved away from using libpq for everything
again.

//Magnus



Re: [HACKERS] SSPI authentication

2007-07-17 Thread Paul Silveira

This is great.  I've worked on 2 projects in the last year that desperately
needed this.  It will certainly make the security model more seamless...

-Paul




Magnus Hagander-2 wrote:
 
 A quick status update on the SSPI authentication part of the GSSAPI
 project.
 
 I have libpq SSPI working now, with a few hardcoded things still in
 there to be fixed. But it means that I can connect to a linux server
 using kerberos/GSSAPI *without* the need to set up MIT Kerberos
 libraries and settings on the client. This is great :-) The code is
 fairly trivial.
 
 I've set it up as a different way of doing GSSAPI authentication. This
 means that you can't have both SSPI and MIT KRB GSSAPI in the same
 installation. I don't see a problem with this - 99.9% of windows users
 will just want the SSPI version anyway. But I figured I'd throw it out
 here to see if there are any objections to this?
 
 I'd like to make this enabled by default on Win32, since all supported
 windows platforms have support for it. Then we can add a configure
 option to turn it *off* if we want to. Comments? Do we even need such an
 option?
 
 Right now, the SSPI path is hardcoded to just support Kerberos. Once we
 have both client and server with SSPI support I see no reason to keep
 this restriction. Anybody against that? (Not saying that'll happen for
 8.3, because it certainly needs a bunch of extra testing, but eventually)
 
 
 //Magnus
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend
 
 

-- 
View this message in context: 
http://www.nabble.com/SSPI-authentication-tf4090227.html#a11654750
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




Re: [PATCHES] [HACKERS] msvc, build and install with cygwin in the PATH

2007-07-17 Thread Magnus Hagander
I used to have a different patch from Andrew that did part of this, and
more, and conflicted rather badly with it. However, I never got around
to applying that one, and I can't seem to find it anymore.

Andrew - do you recall if you had all this in yours, and is it still
something you want in, or should we just go with this one?

//Magnus

Bruce Momjian wrote:
 Magnus, what is your reaction to this patch?
 
 ---
 
 Hannes Eder wrote:
 Magnus Hagander wrote:
  Hannes Eder wrote:
   Is it worth doing this the Perl-way and using File::Find? If so, I 
 can
   work on a patch for that.
  
   It's certainly cleaner that way, but I don't find it a major issue. 
 But I'd
   rather see that fix than the other one.

 Here we go. See attached patch. Your comments are welcome.

 Hannes.

 
 *** ..\pgsql-cvshead\src\tools\msvc\Install.pm   Mo Mai 14 16:36:10 2007
 --- src\tools\msvc\Install.pmMi Jun  6 20:39:47 2007
 ***
 *** 10,15 
 --- 10,18 
   use Carp;
   use File::Basename;
   use File::Copy;
 + use File::Find;
 + use File::Glob;
 + use File::Spec;
   
   use Exporter;
   our (@ISA,@EXPORT_OK);
 ***
 *** 99,104 
 --- 102,142 
   print \n;
   }
   
 + sub FindFiles
 + {
 + my $spec = shift;
 + my $nonrecursive = shift;
 + my $pat = basename($spec);
 + my $dir = dirname($spec);
 + 
 + if ($dir eq '') { $dir = '.'; }
 + 
  + -d $dir || croak "Could not list directory $dir: $!\n";
 + 
 + if ($nonrecursive)
 + {
 + return glob($spec);
 + }
 + 
 + # borrowed from File::DosGlob
 + # escape regex metachars but not glob chars
 + $pat =~ s:([].+^\-\${}[|]):\\$1:g;
 + # and convert DOS-style wildcards to regex
 + $pat =~ s/\*/.*/g;
 + $pat =~ s/\?/.?/g;
 + 
 + $pat = '^' . $pat . '\z';
 + 
 + my @res;
 + find(
 + {
  + wanted => sub { /$pat/s && push(@res, 
  File::Spec->canonpath($File::Find::name)); }
 + },
 + $dir
 + );
 + return @res;
 + }
 + 
   sub CopySetOfFiles
   {
   my $what = shift;
 ***
 *** 106,126 
   my $target = shift;
   my $silent = shift;
   my $norecurse = shift;
 - my $D;
   
 - my $subdirs = $norecurse?'':'/s';
    print "Copying $what" unless ($silent);
  ! open($D, "dir /b $subdirs $spec |") || croak "Could not list $spec\n";
  ! while (<$D>)
   {
 - chomp;
   next if /regress/; # Skip temporary install in regression subdir
 ! my $tgt = $target . basename($_);
   print .;
 ! my $src = $norecurse?(dirname($spec) . '/' . $_):$_;
  ! copy($src, $tgt) || croak "Could not copy $src: $!\n";
   }
 ! close($D);
   print \n;
   }
   
 --- 144,161 
   my $target = shift;
   my $silent = shift;
   my $norecurse = shift;
   
    print "Copying $what" unless ($silent);
 ! 
 ! foreach (FindFiles($spec, $norecurse))
   {
   next if /regress/; # Skip temporary install in regression subdir
 ! my $src = $_;
 ! my $tgt = $target . basename($src);
   print .;
  ! copy($src, $tgt) || croak "Could not copy $src to $tgt: $!\n";
   }
 ! 
   print \n;
   }
   
 ***
 *** 371,395 
   {
   my $target = shift;
   my $nlspath = shift;
 - my $D;
   
    print "Installing NLS files...";
    EnsureDirectories($target, "share/locale");
  ! open($D, "dir /b /s nls.mk|") || croak "Could not list nls.mk\n";
  ! while (<$D>)
   {
 - chomp;
   s/nls.mk/po/;
   my $dir = $_;
   next unless ($dir =~ /([^\\]+)\\po$/);
   my $prgm = $1;
   $prgm = 'postgres' if ($prgm eq 'backend');
 - my $E;
  - open($E, "dir /b $dir\\*.po|") || croak "Could not list contents of $_\n";
   
  ! while (<$E>)
   {
 - chomp;
   my $lang;
   next unless /^(.*)\.po/;
   $lang = $1;
 --- 406,425 
   {
   my $target = shift;
   my $nlspath = shift;
   
    print "Installing NLS files...";
    EnsureDirectories($target, "share/locale");
  ! 
  ! foreach (FindFiles("nls.mk"))
   {
   s/nls.mk/po/;
   my $dir = $_;
   next unless ($dir =~ /([^\\]+)\\po$/);
   my $prgm = $1;
   $prgm = 'postgres' if ($prgm eq 'backend');
   
  ! foreach (FindFiles("$dir\\*.po", 1))
   {
   my $lang;
   next unless /^(.*)\.po/;
   $lang = $1;
 ***
 *** 401,409 
   croak("Could not run msgfmt on $dir\\$_");
   print .;
   }
 - close($E);
   }
 ! close($D);
   print \n;
   }
   
 --- 431,438 
   croak("Could not run msgfmt on $dir\\$_");
   print .;
   }
   }
 ! 
   print \n;
   }
   
 
 

Re: [HACKERS] Updated tsearch documentation

2007-07-17 Thread Bruce Momjian
Oleg Bartunov wrote:
 On Tue, 17 Jul 2007, Bruce Momjian wrote:
 
  I think the tsearch documentation is nearing completion:
 
  http://momjian.us/expire/fulltext/HTML/textsearch.html
 
  but I am not happy with how tsearch is enabled in a user table:
 
  http://momjian.us/expire/fulltext/HTML/textsearch-app-tutorial.html
 
  Aside from the fact that it needs more examples, it only illustrates an
  example where someone creates a table, populates it, then adds a
  tsvector column, populates that, then creates an index.
 
  That seems quite inflexible.  Is there a way to avoid having a separate
  tsvector column?  What happens if the table is dynamic?  How is that
  column updated based on table changes?  Triggers?  Where are the
  examples?  Can you create an index like this:
 
 I agree that there could be more examples, but text search doesn't
 require anything special!
 *Example* of trigger function is documented on 
 http://momjian.us/expire/fulltext/HTML/textsearch-opfunc.html

Yes, I see that in tsearch() here:

http://momjian.us/expire/fulltext/HTML/textsearch-opfunc.html#TEXTSEARC$

I assume my_filter_name is optional right?  I have updated the prototype
to be:

tsearch([vector_column_name], [my_filter_name], text_column_name [, ... 
])

Is this accurate?  What does this text below it mean?

There can be many functions and text columns specified in a tsearch()
trigger. The following rule is used: a function is applied to all
subsequent TEXT columns until the next matching column occurs. 

Why are we allowing my_filter_name here?  Isn't that something for a
custom trigger.  Is calling it tsearch() a good idea?  Why not
tsvector_trigger().

  CREATE INDEX textsearch_id ON pgweb USING gin(to_tsvector(column));
 
  That avoids having to have a separate column because you can just say:
 
  WHERE to_query('XXX') @@ to_tsvector(column)
 
 yes, it's possible, but without ranking, since currently it's impossible 
 to store any information in the index (it's a pg limitation). btw, this should
 work for a GiST index also.

What if they use @@@.  Wouldn't that work because it is going to check
the heap?

 That kind of search is useful if there is  another natural ordering of search 
 results, for example, by timestamp.
 
 
  How do we make sure that the to_query is using the same text search
  configuration as the 'column' or index?  Perhaps we should suggest:
 
 please, keep in mind, it's not mandatory to use the same configuration
 at search time, that was used at index creation.

Well, sort of.  If you have stop words in the tquery configuration, you
aren't going to hit any matches in the tsvector, right?  Same for
synonymns, I suppose.  I can see that stemming would work if there was a
mismatch between tsquery and tsvector.

   CREATE INDEX textsearch_idx ON pgweb USING 
  gin(to_tsvector('english',column));
 
  so that at least the configuration is documented in the index.
 
 yes, it's better to always explicitly specify the configuration name and not 
 rely on the default configuration. 
 Unfortunately, the configuration name isn't saved in the index.

I was more concerned that there is nothing documenting the configuration
used by the index or the tsvector table column trigger.  By doing:

CREATE INDEX textsearch_idx ON pgweb USING 
gin(to_tsvector('english',column));

you guarantee that the index uses 'english' for all its entries.  If you
omit the 'english' or use a different configuration, it will heap scan
the table, which at least gives the right answer.

Also, how do you guarantee that tsearch() triggers always uses the same
configuration?  The existing tsearch() API seems to make that
impossible.  I am wondering if we need to add the configuration name as
a mandatory parameter to tsearch().

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Updated tsearch documentation

2007-07-17 Thread Michael Glaesemann


On Jul 17, 2007, at 16:24 , Bruce Momjian wrote:

I assume my_filter_name is optional right?  I have updated the  
prototype

to be:

	tsearch([vector_column_name], [my_filter_name], text_column_name  
[, ... ])


Just a style point, but would [filter_name] be better than  
[my_filter_name]? You're not qualifying the others with my_ ... or is  
there something you want to tell us, Bruce? :)


Michael Glaesemann
grzm seespotcode net





Re: [HACKERS] Updated tsearch documentation

2007-07-17 Thread Bruce Momjian
Michael Glaesemann wrote:
 
 On Jul 17, 2007, at 16:24 , Bruce Momjian wrote:
 
  I assume my_filter_name is optional right?  I have updated the  
  prototype
  to be:
 
  tsearch([vector_column_name], [my_filter_name], text_column_name  
  [, ... ])
 
 Just a style point, but would [filter_name] be better than  
 [my_filter_name]? You're not qualifying the others with my_ ... or is  
 there something you want to tell us, Bruce? :)

Agreed.  Done.

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Reducing NUMERIC size for 8.3

2007-07-17 Thread Bruce Momjian

This has been saved for the 8.4 release:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

---

Simon Riggs wrote:
 We've changed the on-disk database format in 8.3, so we have an
 opportunity to change other things also. There is a patch thats been on
 the patch queue for some time called numeric508, submitted Dec 2005;
 I've updated this patch now for 8.3 to remove bit rot (an hour's work).
 This is posted to pgsql-patches now and it works.
 
 The benefit of the patch is that it reduces each NUMERIC value by 2
 bytes, so will speed up things considerably. This is now especially
 important if we are looking to speed up numeric division by a
 factor of 4 (recent WIP patch).
 
 The objections to applying this patch originally were:
 1. it changes on-disk format (we've done this, so argument is void)
 2. it would restrict number of digits to 508 and there are allegedly
 some people that want to store > 508 digits.
 
 The current patch passes all regression tests, but currently fails
 numeric_big.sql since this explicitly checks for support of
 numeric(1000,800).
 
 We could:
 a) accept the patch as-is and restrict NUMERIC to 508 digits
 b) refine the patch somewhat to allow 1000 digits
 
 (b) is possible in a couple of ways, both fairly quick:
 - extend the patch so that one of the spare bits from the second digit
 is used to represent dscale 508-1000. 
 - extend the patch so that if weight > 127 or dscale > 127 we would use
 the first byte in the digits as an extra indicator byte holding the high
 bits of both fields.
 Neither change makes any difference to numbers below
 1,000,000,000,000,000(127 zeroes in total)...000 which probably
 covers the vast majority of people's usage.
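[Editor's sketch: the kind of header packing being traded off above, as a toy 16-bit layout in C. The field widths (2 sign bits, 7-bit biased weight, 7-bit dscale) are illustrative guesses, not the actual numeric508 format — the point is only that squeezing the fields into one uint16 saves two bytes per value at the price of capped ranges.]

```c
#include <stdint.h>

#define NUMERIC_SIGN_MASK  0xC000   /* top 2 bits: sign / NaN */
#define NUMERIC_DSCALE_MAX 0x7F     /* low 7 bits: dscale 0..127 */

/* Pack sign, weight (-64..63, biased) and dscale (0..127) into 16 bits. */
static uint16_t
pack_header(uint16_t sign, int weight, int dscale)
{
    return (uint16_t) (sign |
                       ((uint16_t) (weight + 64) << 7) |
                       (uint16_t) dscale);
}

static int
unpack_dscale(uint16_t hdr)
{
    return hdr & NUMERIC_DSCALE_MAX;
}

static int
unpack_weight(uint16_t hdr)
{
    return ((hdr >> 7) & 0x7F) - 64;    /* undo the bias */
}
```

A spare bit borrowed from elsewhere, or an optional extension byte when a field overflows, are exactly the two ways option (b) above widens these ranges.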
 
 Objections: True, we are past feature freeze, but this patch has been
 on the queue for 14 months prior to freeze and has been waiting on disk
 format changes to make patch application acceptable. We definitely want
 to reduce the size of Numeric by 2 bytes at some point. The question in
 my mind is: When is the best time to make this change? If we put this
 off until 8.4, then it will get rejected again because we won't want to
 change the disk format again. So the best time to do this is now,
 otherwise we'll put it off forever.
 
 Can I get somebody other than Tom to agree to review the patch? Clearly
 waiting for Tom to review this is just going to delay release, which I
 don't want to do.
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 
 
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] SetBufferCommitInfoNeedsSave and race conditions

2007-07-17 Thread Bruce Momjian

Where are we on this?

---

Gregory Stark wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
 
  I'd guess that storing 8 per page would be optimal, so each stored xid would
  track 4,000 transactions - probably around 1 sec worth of transactions when
  the feature is used.
 
 This is something we can experiment with but I suspect that while 8 might be
 sufficient for many use cases there would be others where more would be
 better. The cost to having more lsns stored in the clog would be pretty small.
 
 On TPCC which has longer transactions on moderate hardware we only see order
 of 1,000 txn/min. So a setting like 128 which allows a granularity of 256
 transactions would be about 15s which is not so much longer than the xmin
 horizon of the 90th percentile response time of 2*5s.
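[Editor's sketch: the granularity arithmetic in this exchange checks out, assuming the standard 8K clog page with 2 status bits per transaction (32768 transactions per page).]

```c
#define CLOG_PAGE_SIZE 8192
#define XACTS_PER_BYTE 4    /* 2 status bits per transaction */
#define XACTS_PER_PAGE (CLOG_PAGE_SIZE * XACTS_PER_BYTE)    /* 32768 */

/* Transactions covered by each stored LSN, given N LSNs per clog page. */
static int
xacts_per_lsn_group(int lsns_per_page)
{
    return XACTS_PER_PAGE / lsns_per_page;
}
```

With 8 LSNs per page each one covers 4096 transactions (the "~4,000" above); with 128 the granularity drops to 256 transactions.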
 
 -- 
   Gregory Stark
   EnterpriseDB  http://www.enterprisedb.com
 
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [GENERAL] AutoVacuum Behaviour Question

2007-07-17 Thread Bruce Momjian

Is this item closed?

---

Alvaro Herrera wrote:
 Tom Lane wrote:
  Alvaro Herrera [EMAIL PROTECTED] writes:
   Tom Lane wrote:
   Yeah, we had better investigate some way to clean them up.  It was never
   obvious before that it mattered to get rid of orphan temp tables, but I
   guess it does.
  
   Would it be enough to delete the tuple from pg_class?
  
  No, you need a full DROP.  I don't see that that's harder than removing
  only the pg_class tuple --- the problem in either case is to be sure
  it's OK.  In particular, how to avoid a race condition against an
  incoming backend that adopts that BackendId?  Worst-case, you could be
  deleting a temp table he just made.
 
 Oh, I was just thinking in way for Bruce to get out of his current
 situation.
 
 -- 
 Alvaro Herrerahttp://www.CommandPrompt.com/
 The PostgreSQL Company - Command Prompt, Inc.
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] lazy vacuum sleeps with exclusive lock on table

2007-07-17 Thread Bruce Momjian

This has been saved for the 8.4 release:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

---

Simon Riggs wrote:
 On Fri, 2007-06-29 at 09:29 +0900, ITAGAKI Takahiro wrote:
  Alvaro Herrera [EMAIL PROTECTED] wrote:
  
   What I'm requesting here is that the sleep in count_nondeletable_pages()
   be removed and that change backpatched to 8.2 and 8.1.
  
  Agreed. We'd better shorten the exclusive locking as far as possible.
 
 That is just one possibility, but I'd like to consider other
 possibilities before we go for that, especially backpatched.
 
 ISTM holding the lock across many I/Os is the thing that is causing long
 lock times. Removing the sleep may not substantially reduce the time on
 a busy system. Alvaro's example also shows that the number of blocks
 removed could be a substantial number - reminding us that the time the
 lock is held would still be O(N), whereas we would like it to be O(1).
 This is important since we don't even attempt truncation until the
 number of blocks is large enough to be worth bothering with.
 
 Would it be better to keep the sleep in there, but release and
 re-acquire the lock either side of the sleep? That would allow other
 transactions to progress without long lock waits.
 
 Currently, releasing the lock is a problem because the new FSM entries
 are added after truncation, so any updates and inserts would probably
 try to extend the relation, thus preventing further truncation. If we
 did things differently, we would have no reason to fail when we attempt
 to re-acquire the lock:
 - analyze where the truncation point would be on the vacuum pass
 - add FSM entries for all blocks below the truncation point. If that is
 below a minimum of 5% of the entries/16 blocks then we can move the
 truncation point higher so that the FSM entry is large enough to allow
 us time to truncate.
 - truncate the file, one bite at a time as we sleep (or max 16 blocks at
 a time if no sleep requested), possibly scanning forwards not back
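
 The bite-at-a-time loop above can be sketched like this (Python, with a
 threading.Lock standing in for the relation's exclusive lock; the function
 name and API are hypothetical, not backend code):

```python
import threading
import time

def truncate_in_bites(nblocks, truncation_point, lock, bite=16, nap=0.0):
    """Shrink a relation from nblocks down to truncation_point, at most
    `bite` blocks per step, releasing the lock across each sleep so
    other backends can proceed; give up early if the lock is busy."""
    while nblocks > truncation_point:
        if not lock.acquire(blocking=False):
            return nblocks            # someone else got in; retry on next VACUUM
        nblocks = max(truncation_point, nblocks - bite)
        lock.release()                # let waiters through before the next bite
        time.sleep(nap)
    return nblocks
```

 The point of the sketch is only the lock discipline: the exclusive lock is
 held per bite, not for the whole O(N) truncation.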
 
 I would still like to see VACUUM spin a few times trying to acquire the
 lock before it gives up attempting to truncate. Re-running the whole
 VACUUM just to get another split-second chance to truncate is not very
 useful behaviour either.
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Restartable signals 'n all that

2007-07-17 Thread Bruce Momjian

This has been saved for the 8.4 release:

http://momjian.postgresql.org/cgi-bin/pgpatches_hold

---

Tom Lane wrote:
 While poking at the vacuum-launcher issue currently under discussion,
 I got annoyed again at the behavior we've known for a long while that
 on some platforms pg_usleep() won't be interrupted by signals.  (In
 particular I see this on HPUX, though not on Linux or Darwin.  Anyone
 know if it happens on any BSDen?)  I noticed that with the launcher set
 to sleep at most one second between checks for signals, it seemed to
 *always* take the full second before shutting down, which seemed awfully
 unlucky.
 
 Some more testing and man-page-reading revealed the full truth of what's
 going on.  The Single Unix Spec's select(2) page says under ERRORS
 
 [EINTR]
 The select() function was interrupted before any of the selected events
 occurred and before the timeout interval expired. If SA_RESTART has been
 set for the interrupting signal, it is implementation-dependent whether
 select() restarts or returns with [EINTR].
 
 Since pqsignal() sets SA_RESTART for all trapped signals except SIGALRM,
 that means we are exposing ourselves to the implementation dependency.
 What I furthermore realized while tracing is that restart means
 start counting down the full timeout interval over again.  Thus, if
 we have told select() to time out after 1 second, and SIGINT arrives
 after 0.9 second, we will wait a full second more before responding.
 
 Bad as that is, it gets worse rapidly: each new signal arrival restarts
 the cycle.  So a continuous flow of signals at a spacing of less than
 1 second would prevent the delay from *ever* terminating.
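
 The arithmetic can be modelled with a toy simulation (Python; a model of the
 two restart behaviours, not PostgreSQL code):

```python
import math

def completion_time(timeout, signal_period, restart_full):
    """Model select(timeout) under signals arriving every signal_period
    seconds.  restart_full=True models an SA_RESTART kernel that restarts
    the *full* countdown on each signal; False models a kernel that
    resumes with only the remaining time, so expiry is unaffected."""
    deadline = timeout
    sig = signal_period
    while sig < deadline:
        if sig > 1000 * timeout:      # countdown keeps moving: never terminates
            return math.inf
        deadline = sig + timeout if restart_full else deadline
        sig += signal_period
    return deadline
```

 With a 1-second timeout and a signal every 0.9 seconds, the
 restart-with-remaining-time kernel still completes at 1 second, while the
 restart-full-countdown kernel never completes at all.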
 
 This may be why some kernels reduce the timeout value before returning,
 so that a restart behavior in userland behaves sanely.  But that's
 not what's happening for me :-(.
 
 To me, this calls into question whether we should try to avoid using
 SA_RESTART at all.  The reason for doing it of course is to avoid
 unexpected syscall EINTR failures as well as short read/short write
 behaviors during disk I/O.  However, if that's the plan then what the
 heck is pqsignal() doing giving an exception for SIGALRM?  As soon as
 you have even one non-restartable trapped signal, it seems you need
 to handle EINTR everywhere.
 
 I looked into the CVS history and found that we inherited the SIGALRM
 exception from Berkeley (in fact it's in the v4r2 sources from 1994).
 Back then the system's usage of SIGALRM was pretty darn narrow --- it
 was used only to trigger the deadlock checker, which means it applied
 only while waiting for a lock, and the range of code in which the
 interrupt could occur was just a few lines.  Now that we use SIGALRM for
 statement_timeout, the interrupt can potentially happen almost anywhere
 in the backend code.
 
 So we've got two problems: SA_RESTART is preventing some EINTRs from
 happening when we'd like, and yet it seems we are at risk of unwanted
 EINTRs anyway.
 
 The only really clean solution I can see is to stop using SA_RESTART
 and try to make all our syscalls EINTR-proof.  But the probability
 of bugs-of-omission seems just about 100%, especially in third party
 backend add-ons that we don't get to review the code for.
 
 If we do nothing, anyone using statement_timeout is at risk.  The
 risk is somewhat ameliorated by the fact that occurrence of the
 interrupt means transaction cancellation anyway, so an unexpected
 error of some other type isn't really a fatal problem.  But it's
 still a bit nervous-making.  I don't currently see a way to get
 corrupted data from an EINTR (bufmgr is fairly robust about write
 failures, for instance) but ...
 
 If we decide to live with that, and fix any reported problems, then
 one thing we could do to ameliorate the sleep problem is to turn
 off SA_RESTART for all activity-cancelling interrupts, in particular
 SIGINT/SIGTERM/SIGQUIT.  This wouldn't make it safe for bgwriter
 and friends to go back to long sleep intervals, because they are
 watching for other interrupts too that don't represent reasons to
 cancel transactions.  But it would at least solve the problem of
 slow response to shutdown requests.
 
 Comments?  I sure hope someone has a better idea.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Invalid to_date patterns (was: [PATCHES] [GENERAL] ISO week dates)

2007-07-17 Thread Brendan Jurd

On 7/18/07, Tom Lane [EMAIL PROTECTED] wrote:

This is all good but I think that self-inconsistent format strings are
not really the main source of to_date problems.  Most of the complaints
I've seen arise from to_date plowing ahead to deliver a ridiculous
answer when the input data string doesn't match the format.  I'd like to
see the code try a little harder to validate the input data.



Agreed, but so far it doesn't look like there's much overlap between
the parsing and validating code and the stuff I'm playing with.

I'll keep an eye out for any opportunities to improve on this, but it
might end up being a separate TODO.
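
The validate-the-input-against-the-format behaviour Tom asks for can be
sketched with Python's strptime (an analogy only; PostgreSQL's formatting.c
works quite differently):

```python
from datetime import datetime

def to_date_strict(s, fmt):
    """Parse s against fmt and raise instead of plowing ahead when the
    input data doesn't match -- e.g. reject the nonexistent Feb 30th."""
    return datetime.strptime(s, fmt).date()
```

Here a ridiculous input raises an error rather than yielding a ridiculous
answer.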

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [HACKERS] SetBufferCommitInfoNeedsSave and race conditions

2007-07-17 Thread Gregory Stark
Bruce Momjian [EMAIL PROTECTED] writes:

 Where are we on this?

Well, Simon just sent the reworked patch yesterday, so the answer is we
haven't started tuning this parameter. (Bruce's message is referring to the
discussion about what the optimal value of lsns per clog page would be.)

I intend to do some benchmarking on it, and determining the best value of
this parameter is one of the things I aim to settle with those benchmarks.

I'm not 100% sure yet what to measure though. There are two questions here:

1) What are some good workloads to test which will require larger values for
   this parameter or which will be hurt by excessively large values?

I think any short-transaction workload should be basically good enough. I'm
thinking of using just pgbench's default workload. Does anyone think there are
other workloads that we need to specifically test?

2) What metric do I use to determine if the value is large enough or too
   large?

The gross measurement would be to look at tps. I would prefer to actually
count events which we want to minimize, such as xlog flushes forced because
the clog lsn is too old, and to measure how much extra work the visibility
checks do. I'm not sure exactly how much of this we can really measure,
though. Are there any other countable events that an insufficiently large or
excessively large value of this parameter would cause?

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] what is difference between LOCAL and GLOBAL TEMP TABLES in PostgreSQL

2007-07-17 Thread Bruce Momjian

Added to TODO:

o Allow GLOBAL temporary tables to exist as empty by default in
  all sessions

  http://archives.postgresql.org/pgsql-hackers/2007-07/msg6.php


---

Gregory Stark wrote:
 Pavel Stehule [EMAIL PROTECTED] writes:
 
  2007/7/4, Bruce Momjian [EMAIL PROTECTED]:
   The use case is any system that uses temp tables in an OLTP setting,
   which certainly isn't uncommon. The problem is that today (and as well
   with a global temp table that is still writing to the catalogs) is that
   every OLTP operation that creates or drops a temp table is doing DDL.
   At best, that leads to a lot of catalog bloat. Right now, it appears to
   also expose some race conditions (we've got a customer that's been bit
   by this and we've been able to reproduce some odd behavior in the lab).
 
  The solution is to fix the bloat, not add a work-around.
 
 The bloat is a direct consequence of performing DDL in the midst of an OLTP
 transaction. And it's not the only consequence either. Off the top of my head,
 trying to do DDL in an OLTP environment will cause OID inflation, locking
 issues, catcache problems, unnecessary prepared query replans, and the list
 goes on: what happens to views defined on the temporary tables? Foreign key
 references to the temporary tables?
 
 You've got it backwards: addressing the artificially imposed requirement to do
 DDL to create new tables for what should be purely DML operations is fixing
 the root problem, not a work-around. What would be a work-around is trying to
 deal with the consequences as they come up.
 
  Catalog bloat is one unwanted effect. A second is that our temp tables
  behave differently than in other major RDBMSs, which makes working with
  temp tables in stored procedures uncomfortable. A third argument for
  implementing global temp tables is full support of ANSI SQL,
 
 I think the ANSI concept of temporary tables which are defined once but give
 you a fresh empty work-space for each transaction only makes sense if you're
 thinking in terms of an OLTP environment. Otherwise you would just go ahead
 and do the DDL to create new tables for each query and not worry about the
 down-sides.
 
 The advantages of the ANSI temporary tables are all things you would worry
 about in an OLTP environment but not a data warehousing environment:
 
 1) Overhead to perform DDL
 
 2) Replanning overhead
 
 3) Security issues of doing DDL at run-time
 
 4) Difficulty structuring code when multiple procedures need the same
temporary tables but the procedures may be called in different orders for
different jobs and need different sets of tables.
 
 -- 
   Gregory Stark
   EnterpriseDB  http://www.enterprisedb.com
 
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PATCHES] [HACKERS] msvc, build and install with cygwin in the PATH

2007-07-17 Thread Andrew Dunstan


I am fighting some fires in my day job.

My personal TODO list for pg up to beta is:

. fix chunking muddle (see recent emails)
. complete CSV logs patch
. harden MSVC builds

I'll get to this when I can.  I can dig up the patch I did if you want 
it again.


cheers

andrew


Magnus Hagander wrote:

I used to have a different patch from Andrew that did part of this, and
more, and conflicted rather badly with it. However, I never got around
to applying that one, and I can't seem to find it anymore.

Andrew -do you recall if you had all this in yours, and is it still
something you want in, or should we just go with this one?

//Magnus

Bruce Momjian wrote:
  

Magnus, what is your reaction to this patch?

---

Hannes Eder wrote:


Magnus Hagander wrote:
 Hannes Eder wrote:
   Is it worth doing this the Perl-way and using File::Find? If so, I can
   work on a patch for that.
  
  It's certainly cleaner that way, but I don't find it a major issue. But I'd
  rather see that fix than the other one.

Here we go. See attached patch. Your comments are welcome.

Hannes.

  
*** ..\pgsql-cvshead\src\tools\msvc\Install.pm	Mo Mai 14 16:36:10 2007

--- src\tools\msvc\Install.pm   Mi Jun  6 20:39:47 2007
***************
*** 10,15 ****
--- 10,18 ----
  use Carp;
  use File::Basename;
  use File::Copy;
+ use File::Find;
+ use File::Glob;
+ use File::Spec;
  
  use Exporter;

  our (@ISA,@EXPORT_OK);
***************
*** 99,104 ****
--- 102,142 ----
  print \n;
  }
  
+ sub FindFiles
+ {
+ 	my $spec = shift;
+ 	my $nonrecursive = shift;
+ 	my $pat = basename($spec);
+ 	my $dir = dirname($spec);
+ 
+ 	if ($dir eq '') { $dir = '.'; }
+ 
+ 	-d $dir || croak "Could not list directory $dir: $!\n";
+ 
+ 	if ($nonrecursive)
+ 	{
+ 		return glob($spec);
+ 	}
+ 
+ 	# borrowed from File::DosGlob
+ 	# escape regex metachars but not glob chars
+ 	$pat =~ s:([].+^\-\${}[|]):\\$1:g;
+ 	# and convert DOS-style wildcards to regex
+ 	$pat =~ s/\*/.*/g;
+ 	$pat =~ s/\?/.?/g;
+ 
+ 	$pat = '^' . $pat . '\z';
+ 
+ 	my @res;
+ 	find(
+ 		{
+ 			wanted => sub { /$pat/s && push(@res, File::Spec->canonpath($File::Find::name)); }
+ 		},
+ 		$dir
+ 	);
+ 	return @res;
+ }
+ 
  sub CopySetOfFiles
  {
  	my $what = shift;
***************
*** 106,126 ****
  my $target = shift;
  my $silent = shift;
  my $norecurse = shift;
- 	my $D;
  
- 	my $subdirs = $norecurse?'':'/s';
  	print "Copying $what" unless ($silent);
! 	open($D, "dir /b $subdirs $spec |") || croak "Could not list $spec\n";
! 	while (<$D>)
  	{
- 		chomp;
  		next if /regress/; # Skip temporary install in regression subdir
! 		my $tgt = $target . basename($_);
  		print ".";
! 		my $src = $norecurse?(dirname($spec) . '/' . $_):$_;
! 		copy($src, $tgt) || croak "Could not copy $src: $!\n";
  	}
! 	close($D);
  	print "\n";
  }
  
--- 144,161 ----
  	my $target = shift;
  	my $silent = shift;
  	my $norecurse = shift;
  
  	print "Copying $what" unless ($silent);
! 
! 	foreach (FindFiles($spec, $norecurse))
  	{
  		next if /regress/; # Skip temporary install in regression subdir
! 		my $src = $_;
! 		my $tgt = $target . basename($src);
  		print ".";
! 		copy($src, $tgt) || croak "Could not copy $src to $tgt: $!\n";
  	}
! 
  	print "\n";
  }
  
***************
*** 371,395 ****
  {
  	my $target = shift;
  	my $nlspath = shift;
- 	my $D;
  
  	print "Installing NLS files...";
  	EnsureDirectories($target, "share/locale");
! 	open($D,"dir /b /s nls.mk|") || croak "Could not list nls.mk\n";
! 	while (<$D>)
  	{
- 		chomp;
  		s/nls.mk/po/;
  		my $dir = $_;
  		next unless ($dir =~ /([^\\]+)\\po$/);
  		my $prgm = $1;
  		$prgm = 'postgres' if ($prgm eq 'backend');
- 		my $E;
- 		open($E,"dir /b $dir\\*.po|") || croak "Could not list contents of $_\n";
  
! 		while (<$E>)
  		{
- 			chomp;
  			my $lang;
  			next unless /^(.*)\.po/;
  			$lang = $1;
--- 406,425 ----
  {
  	my $target = shift;
  	my $nlspath = shift;
  
  	print "Installing NLS files...";
  	EnsureDirectories($target, "share/locale");
! 
! 	foreach (FindFiles("nls.mk"))
  	{
  		s/nls.mk/po/;
  		my $dir = $_;
  		next unless ($dir =~ /([^\\]+)\\po$/);
  		my $prgm = $1;
  		$prgm = 'postgres' if ($prgm eq 'backend');
  
! 		foreach (FindFiles("$dir\\*.po", 1))
  		{
  			my $lang;
  			next unless /^(.*)\.po/;
  			$lang = $1;
***************
*** 401,409 ****
  				croak("Could not run msgfmt on $dir\\$_");
  			print ".";
  		}
- 		close($E);
  	}
! 	close($D);
  	print "\n";
  }
--- 431,438 ----


Re: [HACKERS] Should we bump libpq major version for 8.3?

2007-07-17 Thread Bruce Momjian
Stephen Frost wrote:
-- Start of PGP signed section.
 * Tom Lane ([EMAIL PROTECTED]) wrote:
  As of today there are two new functions exported by libpq.so since
  8.2 (lo_truncate and PQconnectionUsedPassword).  Currently,
  libpq/Makefile sets the major.minor shlib version to 5.1 as compared
  to 5.0 in PG 8.2.  Should it be 6.0?  I seem to recall people
  chastizing us for not bumping the major version if there were any
  ABI changes at all, forward-compatible or not.
 
 No, it should be 5.1 (we haven't released a 5.1 at all yet, have we?).
 Adding functions is a minor shlib bump and should *not* change the
 SONAME (which includes the major here, objdump -p libpq.so.5.0).

Agreed.  src/tools/RELEASE_CHANGES says only a minor bump is needed, and
I already did that for 8.3.  We are OK.

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [BUGS] BUG #3431: age() gets the days wrong

2007-07-17 Thread Bruce Momjian

I don't see this as applied yet.

---

Tom Lane wrote:
 Pelle Johansson [EMAIL PROTECTED] writes:
  The age() function seems to work by first counting months until less than a
  month remains to the second argument, then counting days left. This
  doesn't give the correct result, as shown by this example:
 
  # select column1, age(column1, '2006-11-02'), date '2006-11-02' +
  age(column1, '2006-11-02') from (values ('2007-01-31'::date),
  ('2007-02-01')) as alias;
    column1   |      age       |      ?column?
  ------------+----------------+---------------------
   2007-01-31 | 2 mons 29 days | 2007-01-31 00:00:00
   2007-02-01 | 2 mons 29 days | 2007-01-31 00:00:00
  (2 rows)
 
 I took another look at this example.  I believe what is actually going
 wrong here is that when timestamp_age converts a month into an
 equivalent number of days, it uses the number of days in the first
 month of the interval it's dealing with (ie, the month containing
 the earlier of the two dates).  This is just wrong, because interval
 addition adds months first and then days.  The appropriate conversion
 to use is actually the length of the next-to-last month of the interval.
 
 As an example, 8.2 and CVS HEAD produce
 
 regression=# select age('2007-03-14', '2007-02-15');
     age
 ---------
  27 days
 (1 row)
 
 which is reasonable, but
 
 regression=# select age('2007-04-14', '2007-02-15');
       age
 ---------------
  1 mon 27 days
 (1 row)
 
 is not so reasonable, nor is
 
 regression=# select age('2007-03-14', '2007-01-15');
       age
 ---------------
  1 mon 30 days
 (1 row)
 
 If we change the code to use the next-to-last month of the interval
 then these two cases produce '1 mon 30 days' and '1 mon 27 days'
 respectively.
 
 Another problem is that the code isn't doing the propagate-to-next-field
 bit for negative fractional seconds.  Hence it produces
 
 regression=# select age('2007-02-14 01:00:00', '2007-01-15 01:00:00.4');
           age
 ----------------------
  30 days -00:00:00.40
 (1 row)
 
 which is maybe not incorrect, but certainly fairly inconsistent with
 
 regression=# select age('2007-02-14 01:00:00', '2007-01-15 01:00:01');
        age
 ------------------
  29 days 23:59:59
 (1 row)
 
 
 Hence I propose the attached patch.  This does not change any existing
 regression test outputs, but it does change the example given in the
 documentation: age(timestamp '2001-04-10', timestamp '1957-06-13')
 will now produce '43 years 9 mons 28 days' not 27 days.  Which actually
 is correct if you try to add back the result to timestamp '1957-06-13'.
 It also appears to fix Palle's example:
 
 regression=# select column1, age(column1, '2006-11-02'), date '2006-11-02' +
 age(column1, '2006-11-02') from (values ('2007-01-31'::date),
 ('2007-02-01')) as alias;
   column1   |      age       |      ?column?
 ------------+----------------+---------------------
  2007-01-31 | 2 mons 29 days | 2007-01-31 00:00:00
  2007-02-01 | 2 mons 30 days | 2007-02-01 00:00:00
 (2 rows)
 
 As I said earlier, I'm worried about changing the behavior of a function
 that's been around for so long, so I'm disinclined to back-patch this.
 But it seems like a reasonable change to make in 8.3.  Comments?
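 
 The corrected borrow rule can be sketched in Python (a model of the rule
 only, not the backend C; `age` here returns a (months, days) pair):

```python
import calendar
from datetime import date

def age(later, earlier):
    """Form the symbolic month/day difference; when days go negative,
    borrow from the next-to-last month of the interval -- the month
    preceding `later` -- because interval addition adds months first,
    then days."""
    months = (later.year - earlier.year) * 12 + (later.month - earlier.month)
    days = later.day - earlier.day
    if days < 0:
        months -= 1
        prev_y, prev_m = ((later.year, later.month - 1) if later.month > 1
                          else (later.year - 1, 12))
        days += calendar.monthrange(prev_y, prev_m)[1]  # days in that month
    return months, days
```

 Plugging in the examples above reproduces '27 days', '1 mon 30 days', and
 '1 mon 27 days' respectively, and turns the second row of Palle's example
 into '2 mons 30 days'.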
 
   regards, tom lane
 


-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] [BUGS] BUG #3431: age() gets the days wrong

2007-07-17 Thread Bruce Momjian

Sorry, I see there was later discussion.

---


Content-Description: age.patch

 Index: timestamp.c
 ===================================================================
 RCS file: /cvsroot/pgsql/src/backend/utils/adt/timestamp.c,v
 retrieving revision 1.179
 diff -c -r1.179 timestamp.c
 *** timestamp.c	6 Jul 2007 04:15:59 -0000	1.179
 --- timestamp.c	8 Jul 2007 19:45:04 -0000
 ***************
 *** 3044,3050 ****
   	if (timestamp2tm(dt1, NULL, tm1, &fsec1, NULL, NULL) == 0 &&
   		timestamp2tm(dt2, NULL, tm2, &fsec2, NULL, NULL) == 0)
   	{
 ! 		fsec = (fsec1 - fsec2);
   		tm->tm_sec = tm1->tm_sec - tm2->tm_sec;
   		tm->tm_min = tm1->tm_min - tm2->tm_min;
   		tm->tm_hour = tm1->tm_hour - tm2->tm_hour;
 --- 3044,3051 ----
   	if (timestamp2tm(dt1, NULL, tm1, &fsec1, NULL, NULL) == 0 &&
   		timestamp2tm(dt2, NULL, tm2, &fsec2, NULL, NULL) == 0)
   	{
 ! 		/* form the symbolic difference */
 ! 		fsec = fsec1 - fsec2;
   		tm->tm_sec = tm1->tm_sec - tm2->tm_sec;
   		tm->tm_min = tm1->tm_min - tm2->tm_min;
   		tm->tm_hour = tm1->tm_hour - tm2->tm_hour;
 ***************
 *** 3064,3069 ****
 --- 3065,3081 ----
   			tm->tm_year = -tm->tm_year;
   	}
  
 + 		/* propagate any negative fields into the next higher field */
 + 		while (fsec < 0)
 + 		{
 + #ifdef HAVE_INT64_TIMESTAMP
 + 			fsec += USECS_PER_SEC;
 + #else
 + 			fsec += 1.0;
 + #endif
 + 			tm->tm_sec--;
 + 		}
 + 
  
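The negative-fractional-seconds hunk, rendered in Python for the
integer-timestamp case (a sketch of the loop, not the actual C; the
surrounding code then normalises seconds on up through minutes and hours):

```python
USECS_PER_SEC = 1_000_000

def propagate_negative_fsec(sec, fsec):
    """Fold negative fractional seconds (in microseconds) into the
    seconds field, so e.g. '30 days -00:00:00.4' can become
    '29 days 23:59:59.6' after the higher fields are adjusted too."""
    while fsec < 0:
        fsec += USECS_PER_SEC
        sec -= 1
    return sec, fsec
```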

Re: [HACKERS] [GENERAL] AutoVacuum Behaviour Question

2007-07-17 Thread Alvaro Herrera
Bruce Momjian wrote:
 
 Is this item closed?

No, it isn't.  Please add a TODO item about it:
 * Prevent long-lived temp tables from causing frozen-Xid advancement
   starvation


-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] postgresql compile problem

2007-07-17 Thread caij
Hi,
Recently, I downloaded postgresql-8.1.9.tar.gz from the official website and
installed it on my Linux system, whose gcc version is 2.9.6. Although I can
install it successfully, the version I then see reported is 7.2.1. How can
this happen? Can you tell me the reason?

Thanks a lot !

Regards
--

 Cai jun


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly