Re: [HACKERS] Pre-allocated free space for row updating

2005-08-31 Thread Simon Riggs
On Wed, 2005-08-31 at 08:32 +0900, Satoshi Nagayasu wrote:
 Simon Riggs wrote:
  The summary was:
  
  1. Have a PCTFREE column added on a table by table basis
 
 I think a good place to keep the PCTFREE value is a new column
 in pg_class, and ALTER TABLE should be able to change this value.

Agreed

  2. Apply PCTFREE for Inserts only
  3. Allow Updates to use the full space in the block.
 
 4. Allow repairing fragmentation in each page.
 
 Because updates cause fragmentation in the page.
 
 So we need to keep large contiguous free space in each page
 if we want the PCTFREE feature to be more effective.

...doesn't VACUUM already do that?

Anyway, if the setting is for each specific table then the performance
benefit is very clear.
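
For illustration only, a hypothetical per-table setting along these lines might
be spelled something like the following (the option name, syntax, and table name
are invented for this sketch; no such option exists in PostgreSQL today):

ALTER TABLE orders SET PCTFREE = 20;  -- hypothetical: reserve ~20% of each heap
                                      -- page for future in-page updates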

Best Regards, Simon Riggs



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] ERROR: bogus varno: 1 with 8.1beta and postgis 1.0.3

2005-08-31 Thread rm_pg


With psql 8.1beta1 I get a pretty cryptic error message, "ERROR: bogus varno: 1",
occasionally when I try to mix postgis (GIST-based) conditions with normal
conditions.  The statements where this happens are totally reproducible.

Attached is an example of this happening.   I'd be happy to look further,
but I'm not really sure where to look myself; and I must apologize in 
advance that I'll be traveling for a couple of days, so my response time 
for getting more info may be slow.  I'm using the latest postgis and the
beta of 8.1 on Ubuntu.  The previous postgis claims to have some support 
for 8.1 added, and quite a few statements seem to work fine.


Any thoughts where I should look next?

   Ron Mayer



===
==  Mixing postgis conditions with non-postgis conditions in the where
==  clause, yielding ERROR: bogus varno: 1
===

[12]ron2:/home/ramayer/data/logs% psql fli fli
Welcome to psql 8.1beta1, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

fli=# explain select * from rtgr.lines,userfeatures.area_features
   where rtgr.lines.the_geom && setSRID('BOX3D(-121.29 38.58, -121.29 
38.58)'::BOX3D, -1 )
and rtgr.lines.the_geom && userfeatures.area_features.the_geom
and length(name) > 0  and userfeatures.area_features.featureid in (8);
fli-# 
fli-# 
fli-# ERROR:  bogus varno: 1




===
==  Just the postgis conditions - doesn't trigger the bogus varno.
===

fli=# explain select * from rtgr.lines,userfeatures.area_features
   where rtgr.lines.the_geom && setSRID('BOX3D(-121.29 38.58, -121.29 
38.58)'::BOX3D, -1 )
and rtgr.lines.the_geom && userfeatures.area_features.the_geom;
fli-# fli-# 
QUERY PLAN

--
 Nested Loop  (cost=0.00..8.90 rows=1 width=277)
   -  Index Scan using rtgr_lines__gist on lines  (cost=0.00..5.01 rows=1 
width=101)
 Index Cond: (the_geom && 
'01030001000500C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340'::geometry)
 Filter: (the_geom && 
'01030001000500C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340C3F5285C8F525EC00AD7A3703D4A4340'::geometry)
   -  Index Scan using area_features__postgis on area_features 
(cost=0.00..3.87 rows=1 width=176)

 Index Cond: ("outer".the_geom && area_features.the_geom)
 Filter: ("outer".the_geom && area_features.the_geom)
(7 rows)

fli=#



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Pre-allocated free space for row updating (like PCTFREE)

2005-08-31 Thread Satoshi Nagayasu
Simon Riggs wrote:
4. Allow repairing fragmentation in each page.

Because updates cause fragmentation in the page.

So we need to keep large contiguous free space in each page
if we want the PCTFREE feature to be more effective.
 
 
 ...doesn't VACUUM already do that?

VACUUM generates a huge load because it repairs all pages
in the table file.

I think (more lightweight) repairing of a single page
is needed to maintain free space in that specific page.

-- 
NAGAYASU Satoshi [EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] ERROR: bogus varno: 1 with 8.1beta and postgis 1.0.3

2005-08-31 Thread Gavin Sherry
On Wed, 31 Aug 2005 [EMAIL PROTECTED] wrote:


 With psql 8.1beta1 I get a pretty cryptic error message, "ERROR: bogus varno: 1",
 occasionally when I try to mix postgis (GIST-based) conditions with normal
 conditions.  The statements where this happens are totally reproducible.

Thanks for the detailed report. However, others beat you to it: Tom fixed the
problem over the weekend.

http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/plan/setrefs.c.diff?r1=1.111&r2=1.112

Gavin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Pre-allocated free space for row

2005-08-31 Thread Hannu Krosing
On K, 2005-08-31 at 16:50 +0900, Satoshi Nagayasu wrote:
 Simon Riggs wrote:
 4. Allow repairing fragmentation in each page.
 
 Because updates cause fragmentation in the page.
 
 So we need to keep large contiguous free space in each page
 if we want the PCTFREE feature to be more effective.
  
  
  ...doesn't VACUUM already do that?
 
 VACUUM generates a huge load because it repairs all pages
 in the table file.
 
 I think (more lightweight) repairing of a single page
 is needed to maintain free space in that specific page.

There have been plans floating around for adding a more lightweight
vacuum, which uses something similar to the FSM to keep track of pages which
need vacuuming, possibly integrated with the background writer to make
effective use of I/O resources.

I guess it could be used for this case of cheap page cleanups as well.

-- 
Hannu Krosing [EMAIL PROTECTED]


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Call for 7.5 feature completion

2005-08-31 Thread Dawid Kuroczko
On 8/26/05, Alvaro Herrera [EMAIL PROTECTED] wrote:
Or, slightly different, what are people's most wanted features?
One feature, or rather set of features, which was missing from the list and which I think
is important: i18n. :)

I mean, PostgreSQL has a number of good features concerning internationalization,
like UTF-8 support, transparent charset conversions, etc., but it is also an area where
new users are likely to get bitten. One of the most gotcha-prone areas in PostgreSQL,
IMHO.

If you stick with English, it's OK. If you want a different language, say Polish, German,
whatever, you'll probably be careful enough to set a good locale. If you decide you
want to make a hybrid Polish-German database, you may run into problems, like
indexes and ordering -- indexes are ordered using only one collation mechanism,
so you should probably use the C locale. If you're unlucky, you have to recreate the
whole database. And then if you intend to use tsearch2, you have to set it up carefully
for the given needs. I'm not saying that the MySQL-ish approach of setting the collation
per table would be a good solution.

Frankly I don't think there is an ideal solution for this.

Some time ago someone suggested using a universal UTF-8 collation, which is
good for most languages (though not for Turkish :)) -- I believe I've seen a patch for
this on this list. Having some one-size-fits-most solution could be helpful.

Anyway, the i18n problem is a childhood illness: once you get over it, you're
most likely safe from it for the rest of your life. But some newbies may not get
through it. ;)

 Regards,
 Dawid



[HACKERS] problem with PQExecParams

2005-08-31 Thread prasanna mavinakuli

 
Hi,

Problem Description:
 
We need to insert binary data into tables and retrieve the
same (the data type is bytea).
We are using PQexecParams for inserting and retrieving
the data.
The table contains non-binary data as well.

When we try to fetch the integer data (type int2)
we are not able to get anything from that result set (the value is always zero), but PQexec returns the desired value.

Questions:

1) How can we get the integer value from the result set obtained by executing PQexecParams?
(the sample code is given below)

2) If it's not possible, then how can we get the BINARY values using
PQexec?

Here is the code snippet:

string query = "select intval from dummy where intval=7;";

  res1 = PQexecParams(conn, query.c_str(), 0, NULL, NULL, NULL, NULL, 1);
  for (unsigned char i = 0; i < 4; i++)
  {
    printf("%d \n", (PQgetvalue(res1,0,0)[i]));
    //prints  here
  }

  cout << "value from pqexec params " << atoi(PQgetvalue(res1,0,0)) << endl; //prints 0 here

  cout << "using pqexec ***" << endl;
  res1 = PQexec(conn, "select intval from dummy where intval=7;"); //prints 7 below.
  cout << "value from pqexec params " << atoi(PQgetvalue(res1,0,0)) << endl;


Prasanna.





Re: [HACKERS] Call for 7.5 feature completion

2005-08-31 Thread William ZHANG
* Updatable Views per SQL
* INTERVAL data type per SQL
* BLOB/CLOB data type per SQL
* Faster bulk load
* Remove "current transaction is aborted, commands ignored ..."
* Compile with MSVC on Win32 platforms. MySQL supports it.
* Thread-safe libpq and ecpg.
-- 
Regards,
William ZHANG



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] On hardcoded type aliases and typmod for user types

2005-08-31 Thread Martijn van Oosterhout
Hi,

As part of previous discussions about typmod for user types, Tom
mentioned that you would need to make type and function names
equivalent. As it turns out, if you refactor a few rules, you can
actually make it work and manage them separately. For this to work the
current col_name_keyword non-terminal has to be divided into a few more
categories. The criterion is mostly whether it is followed by a left
parenthesis.

1. Some can be moved to unreserved words, mostly the types BIT, VARCHAR,
BIGINT, etc., since they don't need special rules anymore.

2. Words that have special productions (e.g. CONVERT, COALESCE, etc.);
these can still only be used as column names, not types or
(user-defined) functions.

3. Words which can be column names or functions but not types. These
normally never appear with a parenthesis, so they will be interpreted as a
function if there is one (e.g. SETOF, NATIONAL, etc.).

4. Words that can be column names and types but not functions. These
are artifacts of the grammar due to the fact that VARYING is
unrestricted. These are BIT, NCHAR, CHAR_P and CHARACTER.

After this, you can have user-functions that take an arbitrary set of
parameters. There'll need to be a bit of code to verify the arguments.
It results in a net gain of 15 keywords for functions and about 30 for
types.

My question is: should users be able to create types schema.int4 and
schema.integer simultaneously? Currently it allows you to, but it's not
handled very well (\dT doesn't list both). Should this be allowed?
Should aliasing like DEC and DECIMAL -> NUMERIC be done for
user-defined types?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpF8U8wTWMrX.pgp
Description: PGP signature


Re: [HACKERS] Call for 7.5 feature completion

2005-08-31 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of William ZHANG
 Sent: 31 August 2005 10:51
 To: pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Call for 7.5 feature completion
 
 * Faster bulk load

Done, iirc.

 * Compile with MSVC on Win32 platforms. MySQL supports it.

So what? It would take a major amount of work, with no useful benefits.

 * Thread-safe libpq and ecpg.

Done.

Regards, Dave.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] problem with PQExecParams

2005-08-31 Thread Michael Fuhr
On Wed, Aug 31, 2005 at 12:08:10PM -, prasanna mavinakuli wrote:
 We need to insert binary data to tables and retrieve the 
 Same-(data type is bytea).
 We are using PQExecParams for inserting and retrieving
 Data.

This doesn't belong on pgsql-hackers, which is for discussing
development of PostgreSQL itself.  Please post to one of the other
lists; if you're not sure which one then pgsql-general is usually
a good place.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Call for 7.5 feature completion

2005-08-31 Thread Andrew Dunstan



Dave Page wrote:

 


* Compile with MSVC on Win32 platforms. MySQL supports it.
   



So what? It would take a major amount of work, with no useful benefits.

 



... and you can compile all the client and library stuff with MSVC - 
just not the server or extensions. But the audience for compiling those 
is far smaller.


cheers

andrew

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Pre-allocated free space for row

2005-08-31 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 On K, 2005-08-31 at 16:50 +0900, Satoshi Nagayasu wrote:
 VACUUM generates a huge load because it repairs all pages
 in the table file.
 
 I think (more lightweight) repairing of a single page
 is needed to maintain free space in that specific page.

 There have been plans floating around for adding a more lightweight
 vacuum, which uses something similar to the FSM to keep track of pages which
 need vacuuming, possibly integrated with the background writer to make
 effective use of I/O resources.

 I guess it could be used for this case of cheap page cleanups as well.

Pretty much all of these ideas fall down when you remember that you have
to fix indexes too.  There's no such thing as a cheap page cleanup,
except maybe in a table with no indexes.  Cleaning out the indexes
efficiently requires a certain amount of batch processing, which leads
straight back to VACUUM.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] problems installing pgsql

2005-08-31 Thread Devrim GUNDUZ

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Hi,

On Tue, 30 Aug 2005, Ricardo Gamero wrote:


Hello everybody!
I'm so sorry to post this simple question but I don't know what to do; the
thing is I need to install PostgreSQL 8.0.3 on Red Hat 9, but when I try to
do it these errors appear:
[EMAIL PROTECTED] root]# rpm -ivh postgresql-server-8.0.3-1PGDG.i686.rpm
warning: postgresql-server-8.0.3-1PGDG.i686.rpm: V3 DSA signature: NOKEY,
key ID 748f7d0e
error: Failed dependencies:
libcrypto.so.4 is needed by postgresql-server-8.0.3-1PGDG
libpq.so.4 is needed by postgresql-server-8.0.3-1PGDG
libssl.so.4 is needed by postgresql-server-8.0.3-1PGDG


First of all, please install this RPM to solve these 3 dependencies:

http://developer.postgresql.org/~devrim/rpms/compat/compat-postgresql-libs-3-3PGDG.i686.rpm


postgresql = 8.0.3 is needed by postgresql-server-8.0.3-1PGDG


Yes, before installing the -server rpm, you have to install the main package.



postgresql  7.4 conflicts with postgresql-server-8.0.3-1PGDG


Please follow the usual dump/reload process before upgrading PostgreSQL. 
The 7.4 and 8.0 packages cannot both be installed on the same server.


Regards,
- --
Devrim GUNDUZ
Kivi Bilişim Teknolojileri - http://www.kivi.com.tr
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
 http://www.gunduz.org
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.1 (GNU/Linux)

iD8DBQFDFcSStl86P3SPfQ4RAj7hAJ4twmxTnd7ikcoeYP4VYtX0UV1vbgCfaV60
Odd04eVW2rhPc9rBBNyxorI=
=rJa9
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] On hardcoded type aliases and typmod for user types

2005-08-31 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 My question is: should users be able to create types schema.int4 and
 schema.integer simultaneously? Currently it allows you to, but it's not
 handled very well (\dT doesn't list both). Should this be allowed?
 Should aliasing like DEC and DECIMAL -> NUMERIC be done for
 user-defined types?

IMHO, ideally the aliasing should *only* apply to the built-in types.
The current hack only approximates this (IIRC, the translation happens
for any unqualified type name, independently of one's search path).

One possible approach is to remove the aliasing translation from the
grammar altogether, and add a notion of alias entries in pg_type that
would be found through normal lookup and then replaced by the underlying
type by parse analysis rather than by the grammar.  We could not do this
in the existing system because of the need to handle typmods for some of
the aliases ... but maybe it would work given generalized typmod
support.  There are still a few special cases though, like CHARACTER
VARYING, which seems like it more or less has to be wired into the
grammar.

BTW, the proposed refactoring sounds messier to me than does decreeing
type and function names equivalent ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Pre-allocated free space for row

2005-08-31 Thread Hannu Krosing
On K, 2005-08-31 at 10:33 -0400, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  On K, 2005-08-31 at 16:50 +0900, Satoshi Nagayasu wrote:
  VACUUM generates a huge load because it repairs all pages
  in the table file.
  
  I think (more lightweight) repairing of a single page
  is needed to maintain free space in that specific page.
 
  There have been plans floating around for adding a more lightweight
  vacuum, which uses something similar to the FSM to keep track of pages which
  need vacuuming, possibly integrated with the background writer to make
  effective use of I/O resources.
 
  I guess it could be used for this case of cheap page cleanups as well.
 
 Pretty much all of these ideas fall down when you remember that you have
 to fix indexes too.  There's no such thing as a cheap page cleanup,
 except maybe in a table with no indexes.  Cleaning out the indexes
 efficiently requires a certain amount of batch processing, which leads
 straight back to VACUUM.

What I was aiming for here are cases where the bgwriter kicks in after it is
safe to do the cleanup but before the changed page and its changed
index pages are flushed to disk. 

I think that for OLTP scenarios this is what happens quite often. 

Even more so if we consider that we do mark guaranteed-invisible pages
in the index as well.

My wild guess is that deleting all index pointers for a removed tuple is
more-or-less the same cost as creating new ones for an inserted/updated
page. If so, the max cost factor for doing so is 2X, but usually less,
as many of the needed pages are already in memory even at the time when
it is safe to remove the old tuple, which in OLTP usage is a few seconds
(usually even less than a second) after the original delete is done.

It is often more agreeable to take a continuous up-to-2X performance hit
than an unpredictable hit at unknown (or even at a known) time.

-- 
Hannu Krosing [EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Pre-allocated free space for row

2005-08-31 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 My wild guess is that deleting all index pointers for a removed tuple is
 more-or-less the same cost as creating new ones for an inserted/updated
 page.

Only if you are willing to make the removal process recalculate the
index keys from looking at the deleted tuple.  This opens up a ton of
gotchas for user-defined index functions, particularly for doing it in
the bgwriter which is not really capable of running transactions.
Removing index entries also requires writing WAL log records, which
is something we probably want to minimize in the bgwriter to avoid
contention issues.

 It is often more agreeable to take a continuous up-to-2X performance hit
 than an unpredictable hit at unknown (or even at a known) time.

Well, you can have that sort of tradeoff today, by running autovacuum
continuously with the right delay parameters.

The only vacuum optimization idea I've heard that makes any sense to me
is the one about keeping a bitmap of changed pages so that vacuum need
not read in pages that have not changed since last time.  Everything
else is just shuffling the same work around, and in most cases doing it
less efficiently than we do now and in more performance-critical places.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] 8.1 OUT params returning more than one row question

2005-08-31 Thread Tony Caduto

Hi,
I have been playing around with 8.1 (it's very nice, by the way) and was 
trying to get OUT params to return more than one row.


I came up with the function below, and it does work, however I had to 
declare another record to use in the FOR ..IN loop.


From my reading of the docs the OUT params create a record type 
automatically, and my question is how can I use this automatically created 
record in the
FOR loop?  It does not seem right that I have to create another record 
type and then copy the row values to the OUT params.


CREATE OR REPLACE FUNCTION test_func9(out firstname varchar, out 
lastname varchar)

RETURNS SETOF record AS
$BODY$
Declare
row record;
BEGIN
   for row in select null,description from common.common_groups
   loop
   firstname = row.description;
   lastname = '';
   RETURN NEXT;
   end loop;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
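
For reference only (not part of the original question): since the OUT parameters
already define the output row type, a function like this can be called without a
column definition list, along the lines of

SELECT * FROM test_func9();   -- returns rows with columns firstname, lastname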

Thanks,

Tony

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto
I saw in the release notes that 8.1 is supposed to do function syntax 
checking at create rather than run time, but with the first beta this 
does not seem to work.  check_function_bodies is on by default in the 
postgresql.conf file.  Is there a setting that didn't make it into the 
conf file?


Thanks,

Tony

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Alvaro Herrera
On Wed, Aug 31, 2005 at 11:39:48AM -0500, Tony Caduto wrote:
 I saw in the release notes that 8.1 is supposed to do function syntax 
 checking at create rather than run time, but with the first beta this 
 does not seem to work.  check_function_bodies is on by default in the 
 postgresql.conf file.  Is there a setting that didn't make it into the 
 conf file?

It works for me; care to submit a test case?

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
We are who we choose to be, sang the goldfinch
when the sun is high (Sandman)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] 8.1 OUT params returning more than one row question

2005-08-31 Thread Alvaro Herrera
On Wed, Aug 31, 2005 at 11:27:39AM -0500, Tony Caduto wrote:

Hi,

 I came up with the function below, and it does work, however I had to 
 declare another record to use in the FOR ..IN loop.
 
 From my reading of the docs the OUT params create a record type
 automatically, and my question is how can I use this automatically created
 record in the FOR loop?  It does not seem right that I have to create
 another record type and then copy the row values to the OUT params.

The record that you are not supposed to declare is the output record,
i.e. you don't have to use CREATE TYPE, use a table type, or describe
the anonymous record in the SELECT statement.  Of course, in PL/pgSQL
you need a variable to do the FOR ... LOOP.

Your example works fine for me.  What exactly were you expecting?

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
Escucha y olvidarás; ve y recordarás; haz y entenderás (Confucio)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto

here is a case that does not work:

CREATE or REPLACE FUNCTION public.test_func9(out firstname varchar,out 
lastname varchar)

RETURNS SETOF pg_catalog.record AS
$BODY$
Declare
row record56;
BEGIN
   for $0 in select '',description from common.common_groups
   loop
  -- firstname = row.description;
  -- lastname = '';
   RETURN NEXT;
   end loop;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

Notice the FOR ... IN SELECT: it's for sure wrong, but it raises no errors 
until I execute the function.
Also note the declaration of row: there is no record56 type, but it 
raises no errors at create time.


here is my version string:

PostgreSQL 8.1beta1 on i686-pc-linux-gnu, compiled by GCC 
i686-pc-linux-gnu-gcc (GCC) 3.3.5-20050130 (Gentoo Linux 
3.3.5.20050130-r1, ssp-3.3.5.20050130-1, pie-8.7.7.1)


Alvaro Herrera wrote:


On Wed, Aug 31, 2005 at 11:39:48AM -0500, Tony Caduto wrote:
 

I saw in the release notes that 8.1 is supposed to do function syntax 
checking at create rather than run time, but with the first beta this 
does not seem to work.  check_function_bodies is on by default in the 
postgresql.conf file.  Is there a setting that didn't make it into the 
conf file?
   



It works for me; care to submit a test case?

 




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tom Lane
Tony Caduto [EMAIL PROTECTED] writes:
 notice the for in select, it's for sure wrong, but it raises no errors 
 until I execute the function
 also note the declaration for row, there is no record56 type, but it 
 raises no errors at create.

It's *syntax* checking, not an exhaustive check that the function is OK.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] On hardcoded type aliases and typmod for user types

2005-08-31 Thread Martijn van Oosterhout
On Wed, Aug 31, 2005 at 11:11:04AM -0400, Tom Lane wrote:
 IMHO, ideally the aliasing should *only* apply to the built-in types.
 The current hack only approximates this (IIRC, the translation happens
 for any unqualified type name, independently of one's search path).
 
 One possible approach is to remove the aliasing translation from the
 grammar altogether, and add a notion of alias entries in pg_type that
 would be found through normal lookup and then replaced by the underlying
 type by parse analysis rather than by the grammar.  We could not do this
 in the existing system because of the need to handle typmods for some of
 the aliases ... but maybe it would work given generalized typmod
 support.  There are still a few special cases though, like CHARACTER
 VARYING, which seems like it more or less has to be wired into the
 grammar.

Yeah, I was thinking about alias entries. I was thinking that domains
might already do a lot of the work. But then it's not really aliasing
anymore.

 BTW, the proposed refactoring sounds messier to me than does decreeing
 type and function names equivalent ...

Actually, it's not that bad. The non-terminals relating to types
collapse into about three, with only xxx VARYING and DOUBLE
PRECISION and a few others remaining. The keywords are split into
three sets like I mentioned. Then you make the rules for GenericType
look enough like the expansion of func_expr that bison can consider
the context after it before deciding whether it's a function or a type.

The changes are not that great. The number of rules (according to the
-v output) goes from 1610 to 1601 and states from 2810 to 2777. OTOH, a
chunk of code moves from gram.y to the adt directory I guess. The
grammar is just the beginning of the work.

Though maybe the point is that we can take the easy way and implement
the slightly more difficult one if it turns out to be necessary.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpiG2H0tfrQG.pgp
Description: PGP signature


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto

Tom,
What exactly does it check then? What I pointed out is simple syntax 
checking in other languages.


From what I have seen it does not check anything in the body of the 
function; I can put gibberish in the body as long as it has a begin and end.


It does not seem to be doing anything differently than 8.0.x does with 
function syntax checking at create time, so why even mention it in the 
release notes?


the function below also raises no errors at create, but at run time it does.

If I run the below function I get this error:

PostgreSQL Error Code: (1)
ERROR:  type record44 does not exist

From what I read in the release notes I was expecting to see this 
raised at create time. 
Users coming from systems like Oracle and M$ SQL Server are expecting 
this stuff to be caught at create time, not run time.


How difficult would it be to have the server just run the function at 
create time with null for any input params?  Of course a user could just 
do this, but it is an annoying second step.


CREATE or REPLACE FUNCTION public.test_func9(out firstname varchar,out 
lastname varchar)

RETURNS SETOF pg_catalog.record AS
$BODY$
Declare
row record44;
BEGIN
   asfdfdfdfafdsfsdfsdf
   sdf bla bla
   sdf yada yada
   s
   df
   sd
   fsd
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

Thanks,

Tony

Tom Lane wrote:


Tony Caduto [EMAIL PROTECTED] writes:
 

notice the for in select, it's for sure wrong, but it raises no errors 
until I execute the function
also note the declaration for row, there is no record56 type, but it 
raises no errors at create.
   



It's *syntax* checking, not an exhaustive check that the function is OK.

regards, tom lane

 




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] On hardcoded type aliases and typmod for user types

2005-08-31 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 On Wed, Aug 31, 2005 at 11:11:04AM -0400, Tom Lane wrote:
 One possible approach is to remove the aliasing translation from the
 grammar altogether, and add a notion of alias entries in pg_type that
 would be found through normal lookup and then replaced by the underlying
 type by parse analysis rather than by the grammar.

 Yeah, I was thinking about alias entries. I was thinking that domains
 might already do a lot of the work. But then it's not really aliasing
 anymore.

Right, a domain isn't quite the same thing.  But you could probably use
domains temporarily for prototyping it.

One reason that I think a domain isn't the same thing is that I believe
domains don't have typmods.  Although you could imagine a domain passing
a typmod down to its base type, that's not what the spec expects AFAICS.
You're supposed to write
create domain mytype as varchar(4);
There's nothing like
create domain mytype(n) as varchar(n);
in the spec (and no I don't really wish to invent it...)

 Though maybe the point is that we can take the easy way and implement
 the slightly more difficult one if it turns out to be necessary.

That seems fair to me.  Now that we have knowledge in the archives about
how to do it the hard way if needed, we can take the easy way until we
run into an actual need for the hard way.

I still like the idea of pushing the aliasing out of the grammar,
though.  Come to think of it, we could probably even handle the
multiple-word stuff that way: let the grammar convert CHARACTER VARYING
to character varying and have an alias with that name in the catalog.

One thing you'd need to look at is that format_type is aware of the
special properties of the alias names: at present they never need to be
schema-qualified, but this would no longer be certainly the case with
the aliasing approach.  A possible answer is for format_type to work by
replacing (say) INT4OID with the OID of the alias type that has the
desired spelling, and then use the same TypeIsVisible test as is applied
to any user type.  Another thing that is involved there is not
double-quoting the generated names ... we don't want it to emit
character varying but the user-type path would do that.

Hmm... actually there's a bit of an issue here, which is that it's not
clear whether schema qualification makes sense for the multi-word type
names.  For instance
pg_catalog.character varying
seems both ugly and syntactically ambiguous.  So maybe we need to stick
to the present solution for the multi-word type names: they are expanded
by the grammar to pre-qualified names, and so you cannot have a user
type selected by such a name, and format_type keeps its current special
case approach to generating them.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Matt Miller
On Wed, 2005-08-31 at 13:13 -0500, Tony Caduto wrote:
 the function below also raises no errors at create, but at run time it does.
 ...
 CREATE or REPLACE FUNCTION public.test_func9(out firstname varchar,out 
 lastname varchar)
 RETURNS SETOF pg_catalog.record AS
 $BODY$
 Declare
 row record44;
 BEGIN
 asfdfdfdfafdsfsdfsdf
 sdf bla bla
 sdf yada yada
 s
 df
 sd
 fsd
 END;
 $BODY$
 LANGUAGE 'plpgsql' VOLATILE;

When I execute this CREATE statement I get:

ERROR:  type record44 does not exist
CONTEXT:  compile of PL/pgSQL function test_func9 near line 2

So, it does seem to be working as advertised.

I'm running HEAD as of a few hours ago.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tom Lane
Tony Caduto [EMAIL PROTECTED] writes:
 It does not seem to be doing anything differently than 8.0.x does with 
 function syntax checking at create time, so why even mention it in the 
 release notes?

The checking is more extensive than it was in 8.0.  For example 8.0
didn't reject this at creation:

regression=# create function bla() returns int as 'begin zit; end' language 
plpgsql;
ERROR:  syntax error at or near zit at character 1
QUERY:  zit
CONTEXT:  SQL statement in PL/PgSQL function bla near line 1
LINE 1: zit
^
regression=#

because 8.0 didn't feed any apparent SQL statements down to the main
SQL grammar to see if they were sane according to the main grammar.

But it remains purely *syntactic*.  If the code gets through the
grammar then it's accepted.  What this boils down to is that we
don't apply any checking that depends on anything outside the
function itself (for example, whether something that is used as
a type name actually exists in pg_type).

 How difficult would it be to have the server just run the function at 
 create time with null for any input params?

What happens if the function (intentionally) errors out on null inputs?
Or goes into an infinite loop?  (If it's declared STRICT then the
programmer would be quite within his rights not to handle such a case.)
What happens if the function changes the database on the basis of the
bogus call?  How much would this actually prove, considering that null
inputs would be likely not to exercise many of the code paths within the
function?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tom Lane
Tony Caduto [EMAIL PROTECTED] writes:
 CREATE or REPLACE FUNCTION public.test_func9(out firstname varchar,out 
 lastname varchar)
 RETURNS SETOF pg_catalog.record AS
 $BODY$
 Declare
 row record44;
 BEGIN
 asfdfdfdfafdsfsdfsdf
 sdf bla bla
 sdf yada yada
 s
 df
 sd
 fsd
 END;
 $BODY$
 LANGUAGE 'plpgsql' VOLATILE;

[ looks at that again... ]  Wait, are you sure that you are talking to
an 8.1 server?  8.1 will certainly catch the garbage syntax in the
function body, whether or not it notices that the type name is bogus.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Michael Fuhr
On Wed, Aug 31, 2005 at 01:13:00PM -0500, Tony Caduto wrote:
 From what I have seen it does not check anything in the body of the 
 function; I can put gibberish in the body as long as it has a begin and end.
 
 It does not seem to be doing anything differently than 8.0.x does with 
 function syntax checking at create time, so why even mention it in the 
 release notes?

I see different behavior in 8.1 than in 8.0.  Are you *sure* you're
connected to an 8.1 system when you're running your tests?

Are you using a database that was restored from an earlier version
of PostgreSQL?  I wonder if you're not getting the lanvalidator
function.  What's the result of the following query?

SELECT lanname,
   lanplcallfoid, lanplcallfoid::regprocedure,
   lanvalidator, lanvalidator::regprocedure
FROM pg_language;

What happens if you create a fresh database and run createlang
plpgsql in it, and then run your tests?

 the function below also raises no errors at create, but at run time it does.

With the example you posted I get the following at create time:

ERROR:  type record44 does not exist
CONTEXT:  compile of PL/pgSQL function test_func9 near line 2

If I change record44 to record then I get the following (again,
at create time):

ERROR:  syntax error at or near asfdfdfdfafdsfsdfsdf at character 1
QUERY:  asfdfdfdfafdsfsdfsdf sdf bla bla sdf yada yada s df sd fsd END
CONTEXT:  SQL statement in PL/PgSQL function test_func9 near line 10
LINE 1: asfdfdfdfafdsfsdfsdf sdf bla bla sdf yada yada s df sd fsd E...
^

 From what I read in the release notes I was expecting to see this 
 raised at create time. 

Create-time checking works here.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Josh Berkus
Tony,

  From what I have seen it does not check anything in the body of the
 function; I can put gibberish in the body as long as it has a begin and
 end.

Nope:

stp=# create function bad_stuff ( x boolean ) returns boolean as $x$
stp$# begin
stp$# afasdfasdfasdf;
stp$# afasdfasdfa;
stp$# asdfasfasdf;
stp$# end;
stp$# $x$ language plpgsql;
ERROR:  syntax error at or near afasdfasdfasdf at character 1
QUERY:  afasdfasdfasdf
CONTEXT:  SQL statement in PL/PgSQL function bad_stuff near line 2
ERROR:  syntax error at or near afasdfasdfasdf at character 1
QUERY:  afasdfasdfasdf
CONTEXT:  SQL statement in PL/PgSQL function bad_stuff near line 2
LINE 1: afasdfasdfasdf

Are you sure you don't have check_function_bodies = Off?
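
As a quick sanity check (check_function_bodies is an ordinary GUC, so this is
just standard psql usage, nothing specific to this report):

SHOW check_function_bodies;       -- should report "on" for create-time body checking
SET check_function_bodies = on;   -- re-enable it for the current session if it was off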

There is a difference between *syntax* errors and *SQL* errors.  If a 
table does not exist, we don't want to check for that and bounce the 
function; possibly the function will only be called in a context where the 
table does exist.

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto

Hi,

I did restore from an 8.0 dump.

here is the output from the query:

 lanname  | lanplcallfoid |     lanplcallfoid      | lanvalidator |         lanvalidator
----------+---------------+------------------------+--------------+------------------------------
 internal |             0 | -                      |         2246 | fmgr_internal_validator(oid)
 c        |             0 | -                      |         2247 | fmgr_c_validator(oid)
 sql      |             0 | -                      |         2248 | fmgr_sql_validator(oid)
 plperlu  |         16392 | plperl_call_handler()  |            0 | -
 plpgsql  |         16394 | plpgsql_call_handler() |            0 | -
(5 rows)


here is my version string:

PostgreSQL 8.1beta1 on i686-pc-linux-gnu, compiled by GCC i686-pc-linux-gnu-gcc 
(GCC) 3.3.5-20050130 (Gentoo Linux 3.3.5.20050130-r1, ssp-3.3.5.20050130-1, 
pie-8.7.7.1)

I am trying my tests on a new database with a fresh language install now.

How can I get my restored databases to behave the same as a fresh one?

Thanks for your help on this.

Tony



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Josh Berkus
Matt,

 On Wed, 2005-08-31 at 11:59 -0700, Josh Berkus wrote:
  If a table does not exist, we don't want to check for that and bounce
  the function; possibly the function will only be called in a context
  where the table does exist.

 The PL/pgSQL compiler should be able to dive into SQL statements, hit
 the catalog, and bounce a function because of invalid database object
 references.  Ideally this capability could be turned off on demand.

Well, that would certainly be nice to have as an *additional* capability.   
Patches welcome!

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tom Lane
Matt Miller [EMAIL PROTECTED] writes:
 On Wed, 2005-08-31 at 11:59 -0700, Josh Berkus wrote:
 If a table does not exist, we don't want to check for that and bounce
 the function; possibly the function will only be called in a context
 where the table does exist.

 I am thankful that Oracle's PL/SQL compiler checks these things for me.
 I don't remember the last time I intended to write code that referenced
 something that did not exist in the database.

Almost every day, people try to write stuff like

CREATE TEMP TABLE foo ... ;
INSERT INTO foo ... ;
etc etc
DROP TABLE foo ;

in plpgsql functions.  Now I know that that doesn't work very well,
but we should be going in the direction of fixing it to work well,
not installing error checks that are guaranteed to make it fail.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 Are you using a database that was restored from an earlier version
 of PostgreSQL?  I wonder if you're not getting the lanvalidator
 function.

Ah-hah, that sounds like a good theory.  He'd have had to have carried
the DB forward from 7.4 or before, though, since plpgsql had a validator
in 8.0.

We've had repeated problems with PL languages stemming from the fact
that pg_dump dumps them at a pretty low semantic level.  Aside from this
problem with adding a validator, we used to have issues with hardwired
paths to the shared libraries in the CREATE FUNCTION commands.  And in
8.1, whether the functions are in public or pg_catalog is going to
vary across installations depending on whether the language was restored
from a dump or not.

I wonder if we could change the dump representation to abstract out the
knowledge encapsulated in createlang.  I don't suppose this would
work:

\! createlang plpgsql dbname

but it'd be nice if the dump didn't know any more about the language
than its name, and didn't mention the implementation functions at all.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Matt Miller
On Wed, 2005-08-31 at 11:59 -0700, Josh Berkus wrote:
 If a table does not exist, we don't want to check for that and bounce
 the function; possibly the function will only be called in a context
 where the table does exist.

The PL/pgSQL compiler should be able to dive into SQL statements, hit
the catalog, and bounce a function because of invalid database object
references.  Ideally this capability could be turned off on demand.

I am thankful that Oracle's PL/SQL compiler checks these things for me.
I don't remember the last time I intended to write code that referenced
something that did not exist in the database.  I agree, though, that some
developers might rely on such a capability in some circumstances.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Matt Miller
On Wed, 2005-08-31 at 15:29 -0400, Tom Lane wrote:
 Matt Miller [EMAIL PROTECTED] writes:
  I don't remember the last time I intended to write code that referenced
  something that did not exist in the database.
 
 Almost every day, people try to write stuff like
 
   CREATE TEMP TABLE foo ... ;
   INSERT INTO foo ... ;
   etc etc
   DROP TABLE foo ;

Point taken.

PL/SQL requires all DDL to be dynamic SQL.  For example:

execute immediate 'drop table foo';

The stuff inside the string is pretty much ignored at compile time.

Maybe, then, my idealized PL/pgSQL compiler always allows DDL to
reference any object, but DML is checked against the catalog.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Michael Fuhr
On Wed, Aug 31, 2005 at 03:37:21PM -0400, Andrew Dunstan wrote:
 Tony Caduto wrote:
 How can I get my restored databases to behave the same as a fresh one?
 
 Run createlang plpgsql mydb before running your restore, and possibly 
 remove the bits that create them from the dump script, or they might 
 just fail benignly.

In an already-loaded database, I think the following should work:

UPDATE pg_language SET lanvalidator = 'plpgsql_validator'::regproc
WHERE lanname = 'plpgsql';

I'd recommend wrapping the update in a transaction and making sure
only one record was updated before committing.
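
For illustration, a minimal way to do that from psql (just the standard
BEGIN/COMMIT/ROLLBACK commands; nothing beyond what is described above):

BEGIN;
UPDATE pg_language SET lanvalidator = 'plpgsql_validator'::regproc
WHERE lanname = 'plpgsql';
-- psql should report "UPDATE 1"; if it reports anything else, ROLLBACK instead
COMMIT;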

Tom (or anybody else), are there any gotchas with updating pg_language
like this?  It works for me in simple tests.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Jim C. Nasby
On Wed, Aug 31, 2005 at 07:43:45PM +, Matt Miller wrote:
 On Wed, 2005-08-31 at 15:29 -0400, Tom Lane wrote:
  Matt Miller [EMAIL PROTECTED] writes:
   I don't remember the last time I intended to write code that referenced
   something that did not exist in the database.
  
  Almost every day, people try to write stuff like
  
  CREATE TEMP TABLE foo ... ;
  INSERT INTO foo ... ;
  etc etc
  DROP TABLE foo ;
 
 Point taken.
 
 PL/SQL requires all DDL to be dynamic SQL.  For example:
 
   execute immediate 'drop table foo';

BTW, the way you handled this case in DB2 was:

CREATE TEMP TABLE foo ...;
CREATE FUNCTION blah AS ...;
DROP TEMP TABLE foo;

This way the object you wanted did exist when you were creating the
function. Of course it would be better if plpgsql could just read the
DDL and deal with it... but I'd say that doing the CREATE TABLE outside
the statement is better than nothing.

Actually, I think you only had to do the CREATE TEMP TABLE outside the
function creation if the function didn't create the temp table itself.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto
I just found out the databases on 8.0 were originally restored from a 
7.4 server, so it seems I have never had the

lanvalidator function even while running on 8.0 for the last 10 months :-(

So how can I update my restored databases? I tried dropping the 
language, but it wouldn't let me because of dependent objects.


Thanks,

Tony

Are you using a database that was restored from an earlier version


of PostgreSQL?  I wonder if you're not getting the lanvalidator
function.  What's the result of the following query?

SELECT lanname,
  lanplcallfoid, lanplcallfoid::regprocedure,
  lanvalidator, lanvalidator::regprocedure
FROM pg_language;

What happens if you create a fresh database and run createlang
plpgsql in it, and then run your tests?

 




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Jim C. Nasby
On Wed, Aug 31, 2005 at 11:59:47AM -0700, Josh Berkus wrote:
 There is a difference between *syntax* errors and *SQL* errors.  If a 
 table does not exist, we don't want to check for that and bounce the 
 function; possibly the function will only be called in a context where the 
 table does exist.

It would still be nice to have, with a way to override it, either via
an option to CREATE FUNCTION or with some directive to plpgsql itself
inside the function body (probably the most useful case, since it allows
disabling error checking just where it's needed).
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Softwarehttp://pervasive.com512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Andrew Dunstan



Tony Caduto wrote:


Hi,

I did restore from a 8.0 dump.


[snip]


I am trying my tests on a new database with fresh language install now.

How can I get my restored databases to behave the same as a fresh one?




Run createlang plpgsql mydb before running your restore, and possibly 
remove the bits that create them from the dump script, or they might 
just fail benignly.


cheers

andrew

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto

Tom,
I successfully updated my database to use the validator function without 
dropping it using:


CREATE FUNCTION plpgsql_validator (oid) RETURNS void AS 
'$libdir/plpgsql' LANGUAGE C;

UPDATE pg_language SET lanvalidator = 'plpgsql_validator'::regproc
WHERE lanname = 'plpgsql';

The create-time checking is *much* better now :-)

Thanks to everyone for helping me track this down; it turned out it had 
nothing to do with 8.1, but I didn't know that.

Sorry about that.

Tony


That would not create a dependency from the language to the validator,
but in practice you probably don't care about that.  The bigger problem
for Tony is likely to be that plpgsql_validator() doesn't exist as a
function in his database; he'll have to create it (see createlang -e
for a reference) first.

regards, tom lane

 




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Michael Fuhr
[Please don't top-post; it destroys the conversational flow.  I've
moved your comment below what you commented on.]

On Wed, Aug 31, 2005 at 03:13:02PM -0500, Tony Caduto wrote:
 In an already-loaded database, I think the following should work:
 
 UPDATE pg_language SET lanvalidator = 'plpgsql_validator'::regproc
 WHERE lanname = 'plpgsql';

 When I run this I get this error in the database:
 PostgreSQL Error Code: (1)
 ERROR:  function plpgsql_validator does not exist

Oops...createlang would ordinarily create that function, but since
you restored from another database the validator function was never
created.  Try adding this before the UPDATE (stolen from pg_dump):

CREATE FUNCTION pg_catalog.plpgsql_validator(oid) RETURNS void
AS '$libdir/plpgsql', 'plpgsql_validator'
LANGUAGE c;

-- 
Michael Fuhr

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] On hardcoded type aliases and typmod for user types

2005-08-31 Thread Martijn van Oosterhout
On Wed, Aug 31, 2005 at 02:25:54PM -0400, Tom Lane wrote:
 I still like the idea of pushing the aliasing out of the grammar,
 though.  Come to think of it, we could probably even handle the
 multiple-word stuff that way: let the grammar convert CHARACTER VARYING
 to character varying and have an alias with that name in the catalog.

Currently, a user-defined type, function, table, etc. is one IDENT; I
don't see any point in changing that. The standard wants character
varying and we can support that, but we don't need to extend that to
user-defined types. If people really want that they can quote it.
Besides, the character stuff is somewhat special as it has the
character set stuff, so it will need special grammar anyway.

 One thing you'd need to look at is that format_type is aware of the
 special properties of the alias names: at present they never need to be
 schema-qualified, but this would no longer be certainly the case with
 the aliasing approach.  A possible answer is for format_type to work by
 replacing (say) INT4OID with the OID of the alias type that has the
 desired spelling, and then use the same TypeIsVisible test as is applied
 to any user type.  Another thing that is involved there is not
 double-quoting the generated names ... we don't want it to emit
 character varying but the user-type path would do that.

I was thinking actually of setting the type searching code to search
pg_catalog before the normal search_path. The types being hardwired
into the grammar essentially implied this, so I thought I would avoid
surprises. 

This of course would mean that all built-in types would automatically
override user-defined ones, which actually sucks if PostgreSQL keeps
including more types by default. OTOH, then types in pg_catalog never
need to be qualified, making it easy for format_type.

Now that I think about it, it may not be a good idea, for all its benefits.
Perhaps only do it for multiword types. Damn special casing.
 
 Hmm... actually there's a bit of an issue here, which is that it's not
 clear whether schema qualification makes sense for the multi-word type
 names.  For instance
   pg_catalog.character varying

It doesn't work. The current grammar, even now, treats anything schema-
qualified as non-special. You can't schema-qualify char(4) even if you
want to. Incidentally, these typmod changes for user types would make
this work as a side-effect.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgprnwknlHPj7.pgp
Description: PGP signature


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 In an already-loaded database, I think the following should work:

 UPDATE pg_language SET lanvalidator = 'plpgsql_validator'::regproc
 WHERE lanname = 'plpgsql';

 Tom (or anybody else), are there any gotchas with updating pg_language
 like this?  It works for me in simple tests.

That would not create a dependency from the language to the validator,
but in practice you probably don't care about that.  The bigger problem
for Tony is likely to be that plpgsql_validator() doesn't exist as a
function in his database; he'll have to create it (see createlang -e
for a reference) first.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Tony Caduto

When I run this I get this error in the database:
PostgreSQL Error Code: (1)
ERROR:  function plpgsql_validator does not exist


In an already-loaded database, I think the following should work:

UPDATE pg_language SET lanvalidator = 'plpgsql_validator'::regproc
WHERE lanname = 'plpgsql';

I'd recommend wrapping the update in a transaction and making sure
only one record was updated before committing.

Tom (or anybody else), are there any gotchas with updating pg_language
like this?  It works for me in simple tests.

 




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread huaxin zhang
I am new to this hacking work. What I was looking for was to record
the actual disk I/O performed for an arbitrary query plan. I searched
in backend/executor but I'm not sure if that was the right place to
add a tracer. Would backend/storage be the place that controls the
actual I/O? BTW, is there a way to find the definitions of all
variables or functions defined? I tried cscope but it is not good
for such a large framework.

thanks a lot


Re: [HACKERS] Pre-allocated free space for row

2005-08-31 Thread Simon Riggs
On Wed, 2005-08-31 at 16:50 +0900, Satoshi Nagayasu wrote:
 Simon Riggs wrote:
 4. Allow to repair fragmentation in each page.
 
 Because updates cause fragmentation in the page.
 
 So we need to keep large continuous free space in each page,
 if we want to get more effective on PCTFREE feature.
  
  
  ...doesn't VACUUM already do that?
 
 VACUUM generates a huge load because it repairs all pages
 on the table file.
 
 I think (more light-weight) repairing on a single page
 is needed to maintain free space in the specific page.

So PCTFREE is an OK idea, but let's drop #4, which is a separate idea and
not one that has gained consensus.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] On hardcoded type aliases and typmod for user types

2005-08-31 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 I was thinking actually of setting the type searching code to search
 pg_catalog before the normal search_path. The types being hardwired
 into the grammar essentially implied this, so I thought I would avoid
 surprises.

That strikes me as an unnecessary reduction in flexibility.  As long as
we make the hardwired type names translate to qualified names (same as
they do now) we don't have to assume any such thing.

(What I might actually favor doing that for is operators, because the
schema-qualified syntax for operators is so gross.  But I don't see a
need for it for type names.)

 Hmm... actually there's a bit of an issue here, which is that it's not
 clear whether schema qualification makes sense for the multi-word type
 names.  For instance
 pg_catalog.character varying

 It doesn't work. The current grammar, even now, treats anything
 schema-qualified as non-special. You can't schema-qualify char(4) even if
 you want to. Incidentally, these typmod changes for user types would make
 this work as a side-effect.

Right.  I think that
pg_catalog.varchar(n)
is reasonable and should be accepted, but I'm fine with decreeing that
character varying(n)
will always be special non-schema-qualified syntax (which the grammar
will in fact translate into the former).
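
To make the intended behaviour concrete (a sketch of what *should* be
accepted under this scheme, not a claim about every current release):

    CREATE TABLE t (
        a pg_catalog.varchar(10),     -- explicit qualification allowed
        b character varying(10)       -- multi-word form, never qualified
    );

    SELECT attname, format_type(atttypid, atttypmod)
      FROM pg_attribute
     WHERE attrelid = 't'::regclass AND attnum > 0;
    -- both columns should come back as character varying(10)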

The point about character sets is a bit distressing; here we are
designing a new general-purpose mechanism and we can already see
cases it doesn't handle.  Can we fix that?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] 8.1 and syntax checking at create time

2005-08-31 Thread Andrew Dunstan



huaxin zhang wrote:


I am new to this hacking work. What I was looking for was to record
the actual disk I/O performed for an arbitrary query plan. I searched
in backend/executor but I'm not sure if that was the right place to
add a tracer. Would backend/storage be the place that controls the
actual I/O? BTW, is there a way to find the definitions of all
variables or functions defined? I tried cscope but it is not good
for such a large framework.

What has this to do with syntax checking?

Please don't post irrelevant replies. Start a new topic instead.

thanks

andrew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] Indexing dead tuples

2005-08-31 Thread Simon Riggs
During CREATE INDEX we include all tuples, even if they are already dead
when we build an index.

What purpose does this serve?

A pre-existing transaction can't see the index, so there is no danger
that it can use the index and unknowingly avoid touching a valid row.
(If it *can* see the index, is there some benefit in that behaviour?)

I suggest that we should not include dead rows in an index when it is
created, to increase the speed of index creation in certain cases. This
also simplifies a number of the index build routines, which actually
contain special logic for handling dead rows.

Comments?

Best Regards, Simon Riggs



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Procedural language definitions (was Re: [HACKERS] 8.1 and syntax checking at create time)

2005-08-31 Thread Tom Lane
I wrote:
 We've had repeated problems with PL languages stemming from the fact
 that pg_dump dumps them at a pretty low semantic level.  Aside from this
 problem with adding a validator, we used to have issues with hardwired
 paths to the shared libraries in the CREATE FUNCTION commands.  And in
 8.1, whether the functions are in public or pg_catalog is going to
 vary across installations depending on whether the language was restored
 from a dump or not.

 I wonder if we could change the dump representation to abstract out the
 knowledge encapsulated in createlang.  I don't suppose this would
 work:
   \! createlang plpgsql dbname
 but it'd be nice if the dump didn't know any more about the language
 than its name, and didn't mention the implementation functions at all.

I thought some more about this and came up with a sketch of a solution.
This would solve the problem of loading subtly-bogus language
definitions from existing dump files, and it also offers a possibility
of relaxing the rule that only superusers can create PLs.

The basic idea is to create a shared catalog that contains procedural
language templates.  This catalog would essentially replace the
knowledge that's now hardwired in the createlang program.  It's shared
because we need it to be already available in a new database; and
anyway, the information in it need not vary across databases of an
installation.  I'm envisioning a schema like

pg_pltemplate:
   lanname       name      name of PL
   lantrusted    boolean   trusted?
   lanhandler    text      name of its call handler function
   lanvalidator  text      name of its validator function, or NULL
   lanlibrary    text      path of shared library, eg $libdir/plpgsql
   lanacl        acl[]     see below

This could be filled in at initdb time with information about all the
languages available in the standard distribution (whether or not they've
actually been built) --- heck, we could include entries for all the PLs
we know of, whether shipped in the core or not.
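
Purely to make the shape concrete (this catalog doesn't exist yet, so the
INSERT below is illustrative only, using the column names sketched above),
the initdb-time entry for plpgsql might look like:

    INSERT INTO pg_pltemplate
           (lanname, lantrusted, lanhandler, lanvalidator,
            lanlibrary, lanacl)
    VALUES ('plpgsql', true, 'plpgsql_call_handler', 'plpgsql_validator',
            '$libdir/plpgsql', NULL);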

Then we would change CREATE LANGUAGE so that it first takes the given
PL name and looks to see if there is an entry by that name in
pg_pltemplate.  If so, it *ignores the given parameters* (if any) and
uses what's in pg_pltemplate.  The logic would be identical to what
createlang does now: look to see if the functions already exist in the
current database, create them if not, then create the language entry.
(If the specified shared library does not actually exist in the
installation, we'd fail at the create functions step --- this is why
it's OK to have entries for languages not built in the distribution.)

The bit about ignoring the given parameters is needed to be able to have
the right things happen when loading an existing dump script from an
older PG version with different support functions for the language.
However, we would also simplify pg_dump to never dump the implementation
functions of a language in future, and to emit CREATE LANGUAGE as just
CREATE LANGUAGE plpgsql;
without decoration.  (createlang would reduce to that too.)

For languages that do not have a template in pg_pltemplate, CREATE
LANGUAGE would operate the same as now.  This case supports languages
that we don't know of.  It might also be worthwhile to create a command
like
CREATE LANGUAGE TEMPLATE ...
to simplify making new entries in pg_pltemplate.  (However, we could not
ask pg_dump to dump templates, else we've merely moved the obsolete-dump
problem over one space.  Not sure if anyone would see that as a fatal
objection to the scheme.  I think it's a pretty minor point as long as
we are liberal about including template entries in the standard distro,
so that you'd seldom need to add one by hand.)
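
For instance (hypothetical syntax, just to show what such a command might
look like; all of the names here are made up):

    CREATE LANGUAGE TEMPLATE plfoo
        TRUSTED
        HANDLER plfoo_call_handler
        VALIDATOR plfoo_validator
        LIBRARY '$libdir/plfoo';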

Finally, you noticed I stuck an ACL column in there.  I am imagining
that the superuser could grant USAGE rights on a template to designated
people (eg, admins of individual databases), who could then issue CREATE
LANGUAGE using that template in their databases, without needing
superuser rights.  You'd still have to be superuser to muck with the
templates of course, but given a known-good template there's no reason
why a non-superuser shouldn't be allowed to instantiate the language
within his database.  (This might need a little more thought when it
comes to untrusted PLs, but the idea seems sound.)
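
The permission flow would then be something like this (again hypothetical
grammar for the GRANT; only the final CREATE LANGUAGE exists today):

    -- superuser:
    GRANT USAGE ON LANGUAGE TEMPLATE plpgsql TO dbadmin;

    -- dbadmin, in a database he owns, without superuser rights:
    CREATE LANGUAGE plpgsql;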

It's a shame that we didn't think about this before feature freeze,
as the recent changes to create PL support functions in pg_catalog
have made both pg_dump and createlang noticeably uglier than before.
We could have dispensed with those hacks.  Oh well.

Comments?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Simon Riggs

For a while now, I've been seeking a method of reducing the impact of
VACUUM when run against a database where 80-95% of the data is
effectively read only and will not be deleted/updated again. This is the
situation in most Data Warehouses. When you get very large databases
(VLDB) the execution time of VACUUM becomes prohibitive.

I understand the need to run some form of VACUUM to avoid transaction id
wraparound, but I see that VACUUM does a lot of other things too.

A new command is proposed - VACUUM MINIMAL. The *sole* purpose of this
command is to do the absolute minimum required to avoid transaction id
wraparound. (Better names welcome)

This does the same thing as VACUUM except it:

1. does not VACUUM any table younger than 4 billion XIDs old
RATIONALE: If a table was created less than 4 billion XIDs ago, it
clearly can't have any tuples in it with an XID more than 4 billion XIDs
old, so we don't need to VACUUM it to avoid XID wraparound.
(Current VACUUM will scan everything, even if a table was created only a
few transactions ago).

2. does not VACUUM indexes
RATIONALE: Indexes don't contain XIDs so don't *need* to be vacuumed.

By taking those two steps, VACUUM MINIMAL will execute fairly quickly
even on large Data Warehouses. Those steps are fairly easy to implement
without change to the basic structure of VACUUM.
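
For concreteness, the intended usage is simply (a hypothetical command; it
does not exist in any release):

    VACUUM MINIMAL;   -- only what is needed to hold off XID wraparound
    VACUUM;           -- existing behaviour, scans every table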

This command only makes sense when you *know* that tables don't need
vacuuming. That is the case when:
- you're using autovacuum, since all tables have dead-tuples removed
whenever this is required - and so indexes will have been covered also
- you are using table partitioning and the data retention period of your
data is less than 4 billion transactions. At 100 requests/sec that is a
whole year of data - and if you are using COPY to load the data, then
that comes out at hundreds of billions of rows, or Terabytes of data.
(Which is also the rationale, since you really *don't* want to VACUUM a
Terabyte of tables with indexes on them, ever).

The limit is set at 4 billion because with this command we are trying to
avoid doing work as long as possible. This makes the command faster,
which in turn allows the command to be run more regularly, probably
daily. Of course, you would get a somewhat longer running command once
table XIDs have been frozen but this is for the user to understand and
avoid, if they have problems with that.

Thus, the user has a choice of two ways of avoiding XID wraparound:
- VACUUM
- VACUUM MINIMAL
Each with their specific strengths and weaknesses.

We've discussed in the past the idea that VACUUM can be speeded-up by
using a bitmap to track which blocks to clean. That's a good idea and I
see that as somewhat orthogonal to the reason for this proposal. To be
of use in the circumstances I'm trying to optimise for, the vacuum
bitmaps would need to be non-lossy, persistent and recoverable to be of
use for xid wraparound use (I know the clog code could be used for
that), as well as only utilised for tables bigger than a certain
threshold, say 128 heap blocks or more - to avoid having too many
bitmaps when there are 1000s of tables. They also still don't help much
with VACUUMing big indexes in a DW context, since VACUUM still makes two
passes of each index even when there are no dead rows to remove from the
index. That could be tuned somewhat, for which I also have a design
but why bother tuning VACUUM when you can just skip it?

Comments?

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Indexing dead tuples

2005-08-31 Thread Andrew - Supernews
On 2005-08-31, Simon Riggs [EMAIL PROTECTED] wrote:
 During CREATE INDEX we include all tuples, even if they are already dead
 when we build an index.

 What purpose does this serve?

 A pre-existing transaction can't see the index,

Yes, it can; the catalog is read in SnapshotNow rather than in the query
snapshot.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Indexing dead tuples

2005-08-31 Thread Tom Lane
Andrew - Supernews [EMAIL PROTECTED] writes:
 On 2005-08-31, Simon Riggs [EMAIL PROTECTED] wrote:
 During CREATE INDEX we include all tuples, even if they are already dead
 when we build an index.
 
 What purpose does this serve?
 
 A pre-existing transaction can't see the index,

 Yes, it can; the catalog is read in SnapshotNow rather than in the query
 snapshot.

In fact, it had better be able to, since once the CREATE INDEX commits,
pre-existing xacts are responsible to insert index entries for anything
they insert into the table.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 A new command is proposed - VACUUM MINIMAL. The *sole* purpose of this
 command is to do the absolute minimum required to avoid transaction id
 wraparound. (Better names welcome)

I do not see the point.  If you only need to run it every billion
transactions, saving a few cycles seems a bit pointless.

 This does the same thing as VACUUM except it:

 1. does not VACUUM any table younger than 4 billion XIDs old

So?  Your large tables are likely to be long-lived, so this isn't
actually going to save a thing in a DB that's been around long enough
to have an impending wrap problem.

 2. does not VACUUM indexes
 RATIONALE: Indexes don't contain XIDs so don't *need* to be vacuumed.

Nonstarter.  If you remove any tuples, you *must* vacuum indexes to
remove the corresponding index entries.  If you don't remove any tuples,
you don't scan the indexes anyway IIRC.

 By taking those two steps, VACUUM MINIMAL will execute fairly quickly
 even on large Data Warehouses.

I don't think this'll save a thing.

 This command only makes sense when you *know* that tables don't need
 vacuuming. That is the case when:
 - you're using autovacuum, since all tables have dead-tuples removed
 whenever this is required - and so indexes will have been covered also

If you're using autovacuum then the problem is already taken care of.
It will be taken care of better in 8.2, if we add per-table tracking
of XID wraparound risk, but it's handled now.  The only way that this
proposal makes any sense is if you are trying not to vacuum at all, ever.

 - you are using table partitioning and the data retention period of your
 data is less than 4 billion transactions.

Again, per-table tracking of wraparound horizon would take care of this
case, more elegantly and more safely.

 The limit is set at 4 billion because with this command we are trying to
 avoid doing work as long as possible.

You do realize that 2 billion is already the wrap horizon, and you can't
wait that long if you're doing this on a routine basis rather than
immediately-when-needed?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Indexing dead tuples

2005-08-31 Thread Simon Riggs
On Wed, 2005-08-31 at 19:06 -0400, Tom Lane wrote:
 Andrew - Supernews [EMAIL PROTECTED] writes:
  On 2005-08-31, Simon Riggs [EMAIL PROTECTED] wrote:
  During CREATE INDEX we include all tuples, even if they are already dead
  when we build an index.
  
  What purpose does this serve?
  
  A pre-existing transaction can't see the index,
 
  Yes, it can; the catalog is read in SnapshotNow rather than in the query
  snapshot.

Thanks Andrew, didn't see your post to me. I suspected that was the
case, but wasn't sure why... though Tom explains this.

 In fact, it had better be able to, since once the CREATE INDEX commits,
 pre-existing xacts are responsible to insert index entries for anything
 they insert into the table.

So would it be possible to have CREATE INDEX call GetOldestXmin, just as
VACUUM does, so it can work out which rows to ignore? The overhead of
that is fairly low and could actually speed up many index builds by
reducing the number of rows needing to be sorted/manipulated. (The call
to GetOldestXmin would only scan procs for the current databaseid).

Perhaps this could apply only for larger tables, where the sort cost is
likely to be pretty high? That way having the CREATE INDEX ignore dead
tuples would always be cheaper than doing a VACUUM + CREATE INDEX. Why
do two scans when we can do one?

Best Regards, Simon Riggs





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Simon Riggs
On Wed, 2005-08-31 at 19:24 -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  A new command is proposed - VACUUM MINIMAL. The *sole* purpose of this
  command is to do the absolute minimum required to avoid transaction id
  wraparound. (Better names welcome)
 
 I do not see the point.  If you only need to run it every billion
 transactions, saving a few cycles seems a bit pointless.

It is not simply a few cycles; it could be days of grinding I/O. The
worse it is, the more it gets put off, which makes it worse, etc.

...it kind of prevents embedded systems from working happily.

  This does the same thing as VACUUM except it:
 
  1. does not VACUUM any table younger than 4 billion XIDs old
 
 So?  Your large tables are likely to be long-lived, so this isn't
 actually going to save a thing in a DB that's been around long enough
 to have an impending wrap problem.

My point was that, with table partitioning, the largest tables would
*not* be the longest-lived. Any very large table has to be truncated
down to a manageable size at some point. Partitioning completely avoids
the need to DELETE or TRUNCATE, since you use CREATE TABLE to introduce
new partitions and DROP TABLE to remove them.
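
As a sketch of that pattern (8.1-style inheritance partitioning; table and
column names are illustrative):

    -- add the new month's partition:
    CREATE TABLE history_2005_09 (
        CHECK (logdate >= DATE '2005-09-01' AND logdate < DATE '2005-10-01')
    ) INHERITS (history);

    -- retire the oldest partition without DELETE or TRUNCATE:
    DROP TABLE history_2004_09;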

  2. does not VACUUM indexes
  RATIONALE: Indexes don't contain XIDs so don't *need* to be vacuumed.
 
 Nonstarter.  If you remove any tuples, you *must* vacuum indexes to
 remove the corresponding index entries.  

Yes, but if no tuples have been deleted or updated, there will be no
tuples removed.

 If you don't remove any tuples,
 you don't scan the indexes anyway IIRC.

No. Even if you remove *zero* tuples, an index is still scanned twice.
Once to not delete the rows and once to not delete the pages.

That isn't as mad as it sounds, since a VACUUM needs to mark pages as
HALF_DEAD on one VACUUM, then come back for a DELETED marker later
because of MVCC. But that behaviour isn't required at all in the
scenario I described, so I was seeking a way to get around that.

We could optimise that, as I said, but I was trying to avoid the problem
entirely. But, I'm easy, if you like the sound of that better I have the
details for that worked out also.

  By taking those two steps, VACUUM MINIMAL will execute fairly quickly
  even on large Data Warehouses.
 
 I don't think this'll save a thing.

I understand that the scenarios I describe seem highly atypical, but
there are a number of people already running this kind of system. I've
worked with a couple and I've seen a half-dozen others on-list, and
there are more coming.

  This command only makes sense when you *know* that tables don't need
  vacuuming. That is the case when:
  - you're using autovacuum, since all tables have dead-tuples removed
  whenever this is required - and so indexes will have been covered also
 
 If you're using autovacuum then the problem is already taken care of.

autovacuum will respond only to UPDATEs and DELETEs. In the scenario I
outline, these will *never* occur on the largest tables. A VACUUM would
still eventually be required to freeze long lived tuples and this would
not be performed by autovacuum.

 It will be taken care of better in 8.2, if we add per-table tracking
 of XID wraparound risk, but it's handled now.  

That's a good idea; I must have missed the discussion on that.

But what I am suggesting is that, for a certain class of table which just
happens to be very large, we defer a VACUUM for as long as possible. The
hope is, and I believe that this could be very likely, that the table
would cease to exist before the table became eligible for VACUUM.

The current viewpoint is that most rows never live through 1 billion
transactions, so they never get frozen. That's completely valid in something
like a Stock table, but it's completely wrong in a TransactionHistory table.

For a historical data table I want to be able to defer the need for a
full-table VACUUM for as long as possible and, should it ever
occur, to avoid scanning the indexes pointlessly.

 The only way that this
 proposal makes any sense is if you are trying not to vacuum at all, ever.

Yes, that is exactly what I'm trying to do, for the largest tables only.

I never want to VACUUM them because I know they don't need it to reclaim
rows and I have a *good* expectation that the table will be dropped, one
day.

For more normal tables, I'm happy to VACUUM them and don't want to
alter that behaviour at all.

 You do realize that 2 billion is already the wrap horizon, and you can't
 wait that long if you're doing this on a routine basis rather than
 immediately-when-needed?

OK, 4 billion was just a mistake...I just meant avoid VACUUM as long as
possible. I changed the number immediately before posting, to emphasise
the deferral aspect of this proposal.

Best Regards, Simon Riggs



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] TODO item: set proper permissions on non-system schemas

2005-08-31 Thread Jaime Casanova
Hi,

I see this TODO item:

* %Set proper permissions on non-system schemas during db creation

I think a question here is: which are the non-system schemas?

I guess public is one of them... but in pg_namespace I have no way
to know that. Is there another way? Or maybe we need an attribute to
record that?
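
(For what it's worth, the closest thing I can write today is a heuristic,
not an official definition -- something like:

    SELECT nspname
      FROM pg_namespace
     WHERE nspname !~ '^pg_'
       AND nspname <> 'information_schema';

but that is just a guess at what counts as non-system.)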

-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Alvaro Herrera
On Thu, Sep 01, 2005 at 01:57:02AM +0100, Simon Riggs wrote:

  If you're using autovacuum then the problem is already taken care of.
 
 autovacuum will respond only to UPDATEs and DELETEs. In the scenario I
 outline, these will *never* occur on the largest tables. A VACUUM would
 still eventually be required to freeze long lived tuples and this would
 not be performed by autovacuum.

Hum, I don't understand -- if you don't want to vacuum the table, why
run vacuum at all?  You can (as of 8.1) disable autovacuum for specific
tables.  The exception is that you are forced to run a database-wide
VACUUM once in a while (every billion-and-so), but this will hopefully
disappear in 8.2 too, leaving you effectively with the option of never
vacuuming a table.
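
For reference, the 8.1 per-table switch looks roughly like this (assuming
the 8.1 pg_autovacuum catalog columns; 'big_history' is an illustrative
table name and -1 means "use the global default"):

    INSERT INTO pg_autovacuum
           (vacrelid, enabled,
            vac_base_thresh, vac_scale_factor,
            anl_base_thresh, anl_scale_factor,
            vac_cost_delay, vac_cost_limit)
    VALUES ('big_history'::regclass, false, -1, -1, -1, -1, -1, -1);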

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
You liked Linux a lot when he was just the gawky kid from down the block
mowing your lawn or shoveling the snow. But now that he wants to date
your daughter, you're not so sure he measures up. (Larry Greenemeier)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] Remove xmin and cmin from frozen tuples

2005-08-31 Thread ITAGAKI Takahiro
Hi Hackers,

I think it would be a waste to retain xmin and cmin for frozen tuples,
because their values only mean 'visible to all transactions'.
Additionally, most tuples in a database can potentially be frozen.

I wrote a makeshift patch to compress xmin and cmin (8 bytes) down to a
1-bit flag, using tuple overlapping.

Is this idea worth trying?

Also, it would be useful to combine this with a more aggressive freeze
vacuum, for example a freezer integrated with the bgwriter.



(The following is a test of the attached patch.)

* Test query
  1. create table test (a int);
  2. insert into test select * from generate_series(1, 100000);
  3. update test set a = a where a % 100 = 0; # to force defrag
  4. select * from pgstattuple('test');
  5. vacuum freeze test;
  6. select * from pgstattuple('test');

* Results of pgstattuple

-[ before vacuum ]-+
table_len  | 3645440
tuple_count| 100000
tuple_len  | 3200000
tuple_percent  | 87.78
dead_tuple_count   | 1000
dead_tuple_len | 32000
dead_tuple_percent | 0.88
free_space | 536
free_percent   | 0.01

-[ 8.1beta1 orig ]-+
table_len  | 3645440
tuple_count| 100000
tuple_len  | 3200000
tuple_percent  | 87.78
dead_tuple_count   | 0
dead_tuple_len | 0
dead_tuple_percent | 0
free_space | 30772   -- about 32byte * 1000 (dead tuples)
free_percent   | 0.84

-[ patched ]---+
table_len  | 3645440
tuple_count| 100000
tuple_len  | 3200000
tuple_percent  | 87.78
dead_tuple_count   | 0
dead_tuple_len | 0
dead_tuple_percent | 0
free_space | 823628  -- + 8 bytes * 100000 (all tuples)
free_percent   | 22.59

---
ITAGAKI Takahiro
NTT Cyber Space Laboratories


xmincut.patch
Description: Binary data

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Indexing dead tuples

2005-08-31 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Wed, 2005-08-31 at 19:06 -0400, Tom Lane wrote:
 In fact, it had better be able to, since once the CREATE INDEX commits,
 pre-existing xacts are responsible to insert index entries for anything
 they insert into the table.

 So would it be possible to have CREATE INDEX call GetOldestXmin, just as
 VACUUM does, so it can work out which rows to ignore?

It already does.  See IndexBuildHeapScan().

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Wed, 2005-08-31 at 19:24 -0400, Tom Lane wrote:
 If you don't remove any tuples,
 you don't scan the indexes anyway IIRC.

 No. Even if you remove *zero* tuples, an index is still scanned twice.
 Once to not delete the rows and once to not delete the pages.

Yeah?  Well, that could probably be improved with a less intrusive fix,
that is, one that does it automatically instead of involving the user.

I really really do not like proposals to introduce still another kind
of VACUUM.  We have too many already; any casual glance through the
archives will show that most PG users don't have a grip on when to use
VACUUM FULL vs VACUUM.  Throwing in some more types will make that
problem exponentially worse.

 autovacuum will respond only to UPDATEs and DELETEs. In the scenario I
 outline, these will *never* occur on the largest tables. A VACUUM would
 still eventually be required to freeze long lived tuples and this would
 not be performed by autovacuum.

Yes, it will, at least as of 8.1.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Remove xmin and cmin from frozen tuples

2005-08-31 Thread Alvaro Herrera
On Thu, Sep 01, 2005 at 10:45:44AM +0900, ITAGAKI Takahiro wrote:

Hi,

 I think it would be a waste to retain xmin and cmin for frozen tuples
 because their values represent only 'visible for all transactions'.
 Additionally, most tuples in database can be frozen potentially.

I think this is an interesting idea.  I was thinking that when the tuple
needs to be obsoleted it would need to grow to accommodate the Xmax, but
you are not actually proposing to remove that, so it seems sensible.  In
fact, it is perfectly reasonable to remove Xmin and Cmin, because after
the tuple is frozen, the Xmin never changes again.

Now, one thing of note is that you need to compress the page in order
to actually be able to use the just-freed space.  VACUUM could do that,
but maybe it would be better to do it on-line -- the freezing process is
going to have to write the page regardless.  I wonder if with your patch
the page is compressed on the same VACUUM execution that freezes the
tuple?

One thing that comes to mind is that this makes it somewhat easier to build
a tool to write pre-built tables, for bulk-loading purposes.  You just
construct the binary file with the HEAP_FROZEN bit set, and then attach
the file to a dummy table.  (Then again, you can do it today, using an
Xmin of FrozenTransactionId.  I wonder why the Bizgres people aren't
advocating a tool to do that.  It is very hard to do with user-defined
types, but for BI/DW you mostly don't need those, do you?)

-- 
Alvaro Herrera -- Valdivia, Chile Architect, www.EnterpriseDB.com
Cuando no hay humildad las personas se degradan (A. Christie)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] TODO item: set proper permissions on non-system schemas

2005-08-31 Thread Tom Lane
Jaime Casanova [EMAIL PROTECTED] writes:
 I see this TODO item:
 * %Set proper permissions on non-system schemas during db creation
 I think a question here is which are the non-system schemas?

There's considerable feeling that that TODO item is bogus anyway.
It was pushed in by people who think that the current behavior is
wrong, but they haven't justified a change IMHO.  I think the first
part of working on this is to propose a behavior that everyone will
accept --- which schemas to touch is part of that.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Call for 7.5 feature completion

2005-08-31 Thread Andrew Dunstan



William ZHANG wrote:

- Original Message - 
From: Andrew Dunstan [EMAIL PROTECTED]

To: Dave Page dpage@vale-housing.co.uk
Cc: William ZHANG [EMAIL PROTECTED]; pgsql-hackers@postgresql.org
Sent: Wednesday, August 31, 2005 10:24 PM
Subject: Re: [HACKERS] Call for 7.5 feature completion


 


Dave Page wrote:

* Compile with MSVC on Win32 platforms. MySQL supports it.

So what? It would take a major amount of work, with no useful benefits.

... and you can compile all the client and library stuff with MSVC -
just not the server nor extensions. But the audience for compiling those
is far smaller.

I think the most popular method to build a project on Win32 is using 
MSVC or Intel C. Intel C can be integrated with MSVC's IDE to help
developers increase their productivity. Actually I have tried to make 
the backend of pgsql-8.0.3 build with MSVC 6.0, and it works well.

Should I polish it and send it as a patch?

Having been a Win32 developer for several years, I think it is more 
convenient to use MSVC's IDE than CL.exe with NMAKE.exe.

Although I do not like Microsoft very much, and like to use MinGW
or Cygwin to do some small tests, MSVC is more suitable for 
native Win32 development. If pgsql wants to be a first-class citizen
on Windows, and wants to compete with MySQL, I think supporting
MSVC is important. I believe there will be many contributions from
the Win32 world.

You are missing the point. We are not prepared to support two completely 
different build systems. Our build system is very very heavily dependent 
on gmake. So if you want to change the build system you have to come up 
with something that works everywhere. Coming up with a project file or
an nmake file for Windows is not hard. Keeping them in step with
everything else is very hard.


We currently have nmake files for the client libraries, but project 
files might be good to have too, and I don't think we have those. Why 
not start a pgfoundry project to publish some MSVC project files, at 
least for the client libs?


cheers

andrew

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Christopher Kings-Lynne

I really really do not like proposals to introduce still another kind
of VACUUM.  We have too many already; any casual glance through the
archives will show that most PG users don't have a grip on when to use
VACUUM FULL vs VACUUM.  Throwing in some more types will make that
problem exponentially worse.


Yes, but if they're all under the control of autovacuum, then users 
don't have to worry...


Chris


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Remove xmin and cmin from frozen tuples

2005-08-31 Thread Tom Lane
ITAGAKI Takahiro [EMAIL PROTECTED] writes:
 I think it would be a waste to retain xmin and cmin for frozen tuples
 because their values represent only 'visible for all transactions'.

True, but the hard part is getting rid of the storage for them.

 I wrote a makeshift patch to compress xmin and cmin (8bytes) to
 1-bit flag, using tuple overlaping.
 Is this idea worth trying?

I think this is incredibly ugly :-(.  It eliminates a fairly basic
assumption which is that items on a page don't overlap.  The space
savings cannot be worth the loss in testability and reliability.
To take just one problem, it is no longer possible to check an item
offset for validity against pd_upper.  If we're going to do this,
we need a more invasive patch that changes the structure of heap tuple
headers in a more fundamental way, and avoids breaking the page layout
representation.  (Something like the way Oids are now handled might
work, although there are alignment issues to worry about, and it'd
take more work on VACUUM's part to convert a tuple to frozen state.)

I'm also less than enthused about using up our last infomask bit for
a relatively unimportant purpose.  We might need that for something
bigger someday... though I can't presently guess what.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 I really really do not like proposals to introduce still another kind
 of VACUUM.  We have too many already; any casual glance through the
 archives will show that most PG users don't have a grip on when to use
 VACUUM FULL vs VACUUM.  Throwing in some more types will make that
 problem exponentially worse.

 Yes, but if they're all under the control of autovacuum, then users 
 don't have to worry...

Well, if the proposal comes packaged with an algorithm by which
autovacuum will use it, that's a different story.  What's sticking in
my craw about this proposal is really that it's assuming detailed manual
management of vacuuming, which is exactly the thing we've been sweating
to get rid of.

BTW ... the original Berkeley papers on Postgres make frequent reference
to a vacuum daemon, which seems to be essentially what we're trying to
build with autovacuum.  Does anyone know if the Berkeley implementation
ever actually had auto vacuuming, or was that all handwaving?  If it did
exist, why was it removed?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Gavin Sherry
On Wed, 31 Aug 2005, Tom Lane wrote:

 Christopher Kings-Lynne [EMAIL PROTECTED] writes:
  I really really do not like proposals to introduce still another kind
  of VACUUM.  We have too many already; any casual glance through the
  archives will show that most PG users don't have a grip on when to use
  VACUUM FULL vs VACUUM.  Throwing in some more types will make that
  problem exponentially worse.

  Yes, but if they're all under the control of autovacuum, then users
  don't have to worry...

 Well, if the proposal comes packaged with an algorithm by which
 autovacuum will use it, that's a different story.  What's sticking in
 my craw about this proposal is really that it's assuming detailed manual
 management of vacuuming, which is exactly the thing we've been sweating
 to get rid of.

 BTW ... the original Berkeley papers on Postgres make frequent reference
 to a vacuum daemon, which seems to be essentially what we're trying to
 build with autovacuum.  Does anyone know if the Berkeley implementation
 ever actually had auto vacuuming, or was that all handwaving?  If it did
 exist, why was it removed?

Well, from my reading of some of the early papers, VACUUM was kind of
different to what it is now. The idea was that expired data would be moved
out of the heap and stored elsewhere. A time-travel mechanism could be used
to see previous versions of the row.

It makes sense that they would manage this with a daemon, but I never saw
one. Mind, I wasn't looking for one.

Thanks,

Gavin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Remove xmin and cmin from frozen tuples

2005-08-31 Thread Josh Berkus
Alvaro,

 One thing that comes to mind is that this makes somewhat easier to build
 a tool to write pre-built tables, for bulk-loading purposes.  You just
 construct the binary file with the HEAP_FROZEN bit set, and then attach
 the file to a dummy table.  (Then again, you can do it today, using a
 Xmin of FrozenTransactionId.  I wonder why the Bizgres people isn't
 advocating a tool to do that.  It is very hard to do with user-defined
 types, but for BI/DW you mostly don't need those, do you?)

Hmmm ... can you expand on this a little?  We'd discussed frozen partitions 
but hadn't thought to get around to them for a while, expecting the kind of 
issues which Tom just raised.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Josh Berkus
Gavin, Tom,

 Well, from my reading of some of the early papers, VACUUM was kind of
 different to what it is now. The idea was that expired data would be moved
 out the heap and stored else where. A timetravel mechanism could be used
 to see previous versions of the row.

And from talking to a couple of Stonebraker's former students at conferences, 
this mechanism was never built satisfactorily; it always existed on paper, at
least at UCB.  Illustra might have finished it.  Elein around?

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Josh Berkus
Tom,

 If you're using autovacuum then the problem is already taken care of.
 It will be taken care of better in 8.2, if we add per-table tracking
 of XID wraparound risk, but it's handled now.  The only way that this
 proposal makes any sense is if you are trying not to vacuum at all, ever.

Hmmm ... the potential problem which Simon is mentioning is very real ... in 
large DWs, there may be tables/partitions which are never, ever vacuumed.  
Ever.

For example, at one client's site they load their data via ETL jobs that 
insert about 25 million rows a day and update about 100,000.  Given that
updates are < 5%, there is no reason from a data-efficiency perspective to
ever vacuum.  So that's the plan ... the main fact table will never, ever
be vacuumed.  (In that particular case, since the ETL uses large transaction
batches, XID wraparound won't happen for about 20 years.  But with a 
different data load model, it could be a serious problem).

So, will per-table XID tracking allow us to avoid *ever* vacuuming some  
tables?   If not, what could?

-- 
Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] TODO item: set proper permissions on non-system schemas

2005-08-31 Thread Andrew - Supernews
On 2005-09-01, Tom Lane [EMAIL PROTECTED] wrote:
 Jaime Casanova [EMAIL PROTECTED] writes:
 I see this TODO item:
 * %Set proper permissions on non-system schemas during db creation
 I think a quetion here is wich are non-system schemas?

 There's considerable feeling that that TODO item is bogus anyway.
 It was pushed in by people who think that the current behavior is
 wrong, but they haven't justified a change IMHO.  I think the first
 part of working on this is to propose a behavior that everyone will
 accept --- which schemas to touch is part of that.

The issue that I've seen is that currently, allowing non-superusers to
create databases in a useful manner requires all sorts of hoop-jumping
to allow the database owner to end up owning the public schema. The
simplest solution at present is to remove public from template1 and have
the new db owner create it themselves, which of course interferes with
the (pre 8.1) defaults for createlang etc. (Since the new db owner can't,
for example, install plpgsql themselves, it has to be installed in
template1 in some schema _other_ than public.)
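
A rough sketch of that hoop-jumping (names illustrative, and only one of
several possible arrangements):

    -- in template1, as superuser: drop public, keep PLs elsewhere
    DROP SCHEMA public;
    CREATE SCHEMA plugins;    -- createlang's support functions go here

    -- per new database:
    CREATE DATABASE appdb OWNER app_owner;
    -- then, connected to appdb as app_owner:
    CREATE SCHEMA public;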

(Another wart that could do with looking into is that such a non-superuser
database owner can't prevent xid wrap in his database regardless of how
often he vacuums it.)

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Minimally avoiding Transaction Wraparound in VLDBs

2005-08-31 Thread Andrew - Supernews
On 2005-09-01, Alvaro Herrera [EMAIL PROTECTED] wrote:
 On Thu, Sep 01, 2005 at 01:57:02AM +0100, Simon Riggs wrote:

  If you're using autovacuum then the problem is already taken care of.
 
 autovacuum will respond only to UPDATEs and DELETEs. In the scenario I
 outline, these will *never* occur on the largest tables. A VACUUM would
 still eventually be required to freeze long lived tuples and this would
 not be performed by autovacuum.

 Hum, I don't understand -- if you don't want to vacuum the table, why
 run vacuum at all?  You can (as of 8.1) disable autovacuum for specific
 tables.  The exception is that you are forced to run a database-wide
 VACUUM once in a while (every billion-and-so), but this will hopefully
 disappear in 8.2 too,

Wishful thinking, or do you have a concrete plan to achieve it?

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] broken configure, broken makefile?

2005-08-31 Thread Robert Treat
Running vanilla configure, I see the following toward the end, and my Makefile
is incomplete as well:

checking alignment of int... (cached) 4
checking alignment of long... (cached) 4
checking alignment of long long int... (cached) 4
checking alignment of double... (cached) 4
checking for POSIX signal interface... yes
checking for tclsh... (cached) /usr/bin/tclsh
creating ./config.status
creating GNUmakefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating Makefile.global
sed: file conftest.s1 line 35: Unterminated `s' command
creating backend/port/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating backend/catalog/genbki.sh
sed: file conftest.s1 line 35: Unterminated `s' command
creating backend/utils/Gen_fmgrtab.sh
sed: file conftest.s1 line 35: Unterminated `s' command
creating bin/pg_dump/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating bin/pg_version/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating bin/pgtclsh/mkMakefile.tcldefs.sh
sed: file conftest.s1 line 35: Unterminated `s' command
creating bin/pgtclsh/mkMakefile.tkdefs.sh
sed: file conftest.s1 line 35: Unterminated `s' command
creating bin/psql/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating include/version.h
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/libpq/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/ecpg/lib/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/ecpg/preproc/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/libpq++/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/libpgeasy/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/libpgtcl/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/odbc/GNUmakefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating interfaces/odbc/Makefile.global
sed: file conftest.s1 line 35: Unterminated `s' command
creating pl/plpgsql/src/Makefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating pl/plpgsql/src/mklang.sql
sed: file conftest.s1 line 35: Unterminated `s' command
creating pl/tcl/mkMakefile.tcldefs.sh
sed: file conftest.s1 line 35: Unterminated `s' command
creating test/regress/GNUmakefile
sed: file conftest.s1 line 35: Unterminated `s' command
creating include/config.h
include/config.h is unchanged
linking ./backend/port/dynloader/linux.c to backend/port/dynloader.c
linking ./backend/port/dynloader/linux.h to include/dynloader.h
linking ./include/port/linux.h to include/os.h
linking ./makefiles/Makefile.linux to Makefile.port
linking ./backend/port/tas/dummy.s to backend/port/tas.s
linking ./include/port to interfaces/odbc/port
linking ./makefiles to interfaces/odbc/makefiles
linking ./template to interfaces/odbc/template
linking ./include/config.h to interfaces/odbc/config.h

I couldn't seem to find a conftest.s1 file nor the sed command being called;
can someone point me in the right direction?

Oh... this is compiling 7.0.3 on Slackware 10.1; let me know if you need more info.
TIA

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Remove xmin and cmin from frozen tuples

2005-08-31 Thread ITAGAKI Takahiro
Alvaro Herrera [EMAIL PROTECTED] wrote:

 Now, one thing of note is that you need to compress the page in order
 to actually be able to use the just-freed space.  VACUUM could do that,
 but maybe it would be better to do it on-line -- the freezing process is
 going to have to write the page regardless.

I agree. I think a good place for the freezer is in the bgwriter.
My idea is:
  1. Just before the bgwriter writes a dirty page in LRU order,
  2. Freeze tuples in the page and repair fragmentation.
  3. (Replace the FSM entry that has the least free space.)
  4. Flush the page.


 I wonder if with your patch
 the page is compressed on the same VACUUM execution that freezes the tuple?

Yes, defragmentation is performed after freezing, but only when the page has
at least one dead tuple. In the current VACUUM implementation, pages that
have no dead tuples will not be defragmented. So you cannot compress just
after a bulk load.

---
ITAGAKI Takahiro
NTT Cyber Space Laboratories



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] broken configure, broken makefile?

2005-08-31 Thread Gavin Sherry
On Thu, 1 Sep 2005, Robert Treat wrote:

 running vanilla configure, i see the following toward the end, and my makefile
 is incomplete as well:


Old problem.
http://archives.postgresql.org/pgsql-ports/2003-04/msg00015.php.


 I couldnt seem to find a conftest.s1 file nor the sed command being called,
 can someone point me in the right direction?

 oh... this is compiling 7.0.3 on slackware 10.1, lmk if you need more info.

And this is the reason why it is old. You're asking for pain if you're
using 7.0.3 and this is just the start.

Gavin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Remove xmin and cmin from frozen tuples

2005-08-31 Thread ITAGAKI Takahiro
Tom Lane [EMAIL PROTECTED] wrote:

 I think this is incredibly ugly :-(.

Yes, I think so, too :-(  My patch is a product of not wanting to modify
the code widely. So if we want to do it in a cleaner way, lots of changes
are needed, as you said.


 I'm also less than enthused about using up our last infomask bit for
 a relatively unimportant purpose.  We might need that for something
 bigger someday... though I can't presently guess what.

I think it is not a problem, because the header still has room for several
more bits. I assume that the combination of HEAP_XMIN_COMMITTED +
HEAP_XMIN_INVALID currently has no meaning, right? If so, HEAP_FROZEN can be
assigned there. Also, t_natts is currently 16 bits, but it can be cut to
11 bits because MaxTupleAttributeNumber is 1664 < 2^11.

---
ITAGAKI Takahiro
NTT Cyber Space Laboratories



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster