Re: [HACKERS] Fwd: [PATCHES] Auto Partitioning Patch - WIP version 1

2008-12-16 Thread Emmanuel Cecchet

ITAGAKI Takahiro wrote:

The chained triggers would have better flexibility, and the auto-expanding
trigger would have better usability. I'm not sure about performance,
because expanding child partitions is not always faster than chained
calls of triggers.

I think chained triggers are hard to maintain. If we drop one of the partition
tables, we need to reconnect the singly-linked list of triggers.
  
When you drop one child table, you would also have to drop the trigger 
that has the same name on the parent table.  This does not seem too hard 
but I may be missing something.

The server says INSERT 0 rows even though rows are inserted into child tables.
  
Technically this is correct since 0 rows were inserted in the parent 
table.


Yes, but users normally expect a non-zero result. Some O/R mapping tools
also check the result exactly and raise errors (this can be turned
off, but the default is on).
  
If the O/R mapping tool is also creating the table, it should be aware of 
the semantics specific to partitioning. But your comment is well taken: 
returning 0 when the number of inserted rows is expected is counterintuitive 
and goes against most API semantics. This would certainly 
require some additional hooks to return the proper value.
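
(For reference, the usual hand-written partitioning setup that produces this
behaviour looks roughly like the sketch below; the table and trigger names are
made up. The BEFORE trigger redirects the row into the child and returns NULL,
so nothing is stored in the parent and the command tag reports INSERT 0 0.)

-- hypothetical parent table and one child partition
CREATE TABLE measurements (id int, logdate date);
CREATE TABLE measurements_2008 (
    CHECK (logdate >= DATE '2008-01-01' AND logdate < DATE '2009-01-01')
) INHERITS (measurements);

CREATE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurements_2008 VALUES (NEW.*);
    RETURN NULL;   -- row is not stored in the parent, hence "INSERT 0 0"
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_measurements_trigger
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();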


Best regards,
Emmanuel

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
  


--
Emmanuel Cecchet
FTO @ Frog Thinker 
Open Source Development  Consulting

--
Web: http://www.frogthinker.org
email: m...@frogthinker.org
Skype: emmanuel_cecchet




[HACKERS] How to insert the values of .sql file (dump file) into postgresql 8.1

2008-12-16 Thread sai srujan
 Hi,
  This is Srujan. I am a beginner with PostgreSQL 8.1 and have just stopped
using MySQL. I would like to use the dump (.sql file) created in it and
transfer my MySQL databases to PostgreSQL 8.1. Please do reply; I am waiting
for it.
Thanks in advance


Re: [HACKERS] How to insert the values of .sql file (dump file) into postgresql 8.1

2008-12-16 Thread A. Kretschmer
In response to sai srujan :
  Hi,
  This is Srujan. I am a beginner with PostgreSQL 8.1 and have just stopped
 using MySQL. I would like to use the dump (.sql file) created in it and
 transfer my MySQL databases to PostgreSQL 8.1.
 Please do reply; I am waiting for it.
 Thanks in advance

You can try to import your dump with psql -f your_file, but you may need
to edit the file to make the syntax PostgreSQL-compatible.
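
As a made-up illustration of the kind of edits that are typically needed
(backquoted identifiers, AUTO_INCREMENT and ENGINE clauses are MySQL-isms):

-- MySQL dump line, will not load as-is:
--   CREATE TABLE `users` (`id` INT AUTO_INCREMENT PRIMARY KEY, `name` VARCHAR(50)) ENGINE=InnoDB;

-- PostgreSQL equivalent after editing:
CREATE TABLE users (
    id   serial PRIMARY KEY,
    name varchar(50)
);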

You can also use some tools to convert from MySQL to PostgreSQL; see
the links below:

http://sql-info.de/mysql/gotchas.html
http://www.raditha.com/mysql/mysql2pgsql.php
http://www.in-nomine.org/~asmodai/mysql-to-pgsql.html
http://pgfoundry.org/projects/mysql2pgsql/
http://www.metatrontech.com/wpapers/mysql2postgresql.pdf
http://www.data-conversions.net/products.php?prod_num=5dest=MENUID=200
http://forums.mysql.com/read.php?25,93181,93181
http://www.postgresql.org/docs/techdocs.83
http://www.scribd.com/doc/2575733/The-future-of-MySQL-The-Project
http://pgfoundry.org/projects/mysqlcompat
http://www.xaprb.com/blog/2007/05/31/why-is-null-doesnt-always-work-in-mysql/
http://weblogs.asp.net/alex_papadimoulis/archive/2005/10/26/428527.aspx
http://mike.kruckenberg.com/archives/2006/07/jim_starkey_int_1.html

Some of these links should be appropriate for you; this is the output from
the IRC docbot about MySQL.


HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[HACKERS] [BUG?] UPDATE with RETURNING tableoid

2008-12-16 Thread KaiGai Kohei
I found a strange behavior on the recent v8.4devel with/without
SE-PostgreSQL patch set. Can you reproduce the following behavior?

When I use UPDATE statement with RETURNING clause which contains
references to tableoid system column, it returns InvalidOid.
(The correct value is 16384 in this case.)
However, RETURNING clause with INSERT/DELETE statement works well.

--
postgres=# CREATE TABLE t1 (a int, b text);
CREATE TABLE
postgres=# INSERT INTO t1 VALUES (1, 'aaa'), (2, 'bbb'), (3, 'ccc');
INSERT 0 3
postgres=# SELECT tableoid, * FROM t1;
 tableoid | a |  b
--+---+-
16384 | 1 | aaa
16384 | 2 | bbb
16384 | 3 | ccc
(3 rows)

postgres=# BEGIN;
BEGIN
postgres=# UPDATE t1 SET b = 'abc' RETURNING tableoid, *;
 tableoid | a |  b
--+---+-
0 | 1 | abc
0 | 2 | abc
0 | 3 | abc
(3 rows)

UPDATE 3
postgres=# ABORT; BEGIN;
ROLLBACK
BEGIN
postgres=# INSERT INTO t1 VALUES (4, 'ddd') RETURNING tableoid, *;
 tableoid | a |  b
--+---+-
16384 | 4 | ddd
(1 row)

INSERT 0 1
postgres=# ABORT; BEGIN;
ROLLBACK
BEGIN
postgres=# DELETE FROM t1 RETURNING tableoid, *;
 tableoid | a |  b
--+---+-
16384 | 1 | aaa
16384 | 2 | bbb
16384 | 3 | ccc
(3 rows)

DELETE 3

-- 
OSS Platform Development Division, NEC
KaiGai Kohei kai...@ak.jp.nec.com



Re: [HACKERS] Sync Rep: First Thoughts on Code

2008-12-16 Thread Simon Riggs

On Tue, 2008-12-16 at 12:36 +0900, Fujii Masao wrote:

  So from my previous list
 
  1. We sent the message to standby (A)
  2. We received the message on standby
  3. We wrote the WAL to the WAL file (B)
  4. We fsync'd the WAL file (C)
  5. We CRC checked the WAL commit record
  6. We applied the WAL commit record
 
  Please could you also add an option #4, i.e. add the *option* to fsync
  the WAL to disk at commit time also. That requires us to add a third
  option to synchronous_replication parameter.
 
 Should the above option be configured on the primary or on the standby?
 The primary is suitable for varying it from transaction to transaction. On
 the other hand, it should be configured on the standby in order to
 choose it for every standby (in the future).
 
 I prefer the latter, and thought that it should be added to recovery.conf.
 I mean, synchronous_replication identifies only whether commit waits for
 replication (if the name is confusing, I would rename it). The above
 options (#1-#6) are chosen in recovery.conf. What is your opinion?

No, we've been through that loop already a few months back:
Transaction-controlled robustness.

It should be up to the client on the primary to decide how much waiting
they would like to perform in order to provide a guarantee. A change of
setting on the standby should not be allowed to alter the performance or
durability on the primary.

My perspective is that synchronous_replication specifies how long to
wait. Current settings are off (don't wait) or on (meaning wait
until point #3). So I think we should change this to a list of options
to allow people to more carefully select how much waiting is required.

This feature is then analogous to the way synchronous_commit works. It
also provides a level of application control not seen in any other RDBMS
in the industry, which makes it very suitable for large and important
applications that need a fine mix of robustness and performance.
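
(For comparison, this is how the existing per-transaction control looks today;
synchronous_commit is already in 8.3, while the extended synchronous_replication
settings are only proposed here. The table is made up.)

CREATE TABLE audit_log (msg text);     -- made-up example table

BEGIN;
SET LOCAL synchronous_commit = off;    -- this transaction does not wait for its WAL flush
INSERT INTO audit_log VALUES ('low-value event');
COMMIT;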

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support




[HACKERS] non unique value error... pgsql,,help needed.

2008-12-16 Thread Shri
Dear friends,
I am trying to import a database from Oracle to PostgreSQL.

I am facing a difficulty when importing an Oracle numeric(38) column to a
PostgreSQL numeric(38) column. In Oracle the column is unique, but PostgreSQL
does not seem to correctly distinguish different, unique values in the
numeric(38) field.

For example:

column: a numeric(38) primary key

I get a non-unique value error for the following two values.
1. 
2. 9997  (<- both are different.)

When I enter the first value into the column (using EMS SQL Manager), it
converts it to 1.111444E35, and the second value is converted to the same,
so both match and I get a non-unique rows error.

Can somebody please advise how to tackle this? This column will be used to
join to another table, so I can't convert it to varchar.

Is this happening due to some fault in PostgreSQL? EMS SQL Manager? Or me? :-)


Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Robert Lor

Peter Eisentraut wrote:

Robert Lor wrote:


The attached patch contains a couple of fixes in the existing probes 
and includes a few new ones.


- Fixed compilation errors on OS X for probes that use typedefs


Could you explain what these errors are about?  I don't see any errors 
on my machine.


In the current probes.d, the following probe definitions are commented 
out because they cause compilation errors on OS X.


* probe lock__wait__start(unsigned int, LOCKMODE);
* probe lock__wait__done(unsigned int, LOCKMODE);
* probe buffer__read__start(BlockNumber, Oid, Oid, Oid, bool);
* probe buffer__read__done(BlockNumber, Oid, Oid, Oid, bool, bool);

The problem was fixed by making the changes below.  probes.d is 
preprocessed with cpp, and as such only macros get expanded (typedefs are not).


From:

typedef unsigned int LocalTransactionId;
typedef int LWLockId;
typedef int LWLockMode;
typedef int LOCKMODE;
typedef unsigned int BlockNumber;
typedef unsigned int Oid;
typedef int ForkNumber;


To:

#define LocalTransactionId unsigned int
#define LWLockId int
#define LWLockMode int
#define LOCKMODE int
#define BlockNumber unsigned int
#define Oid unsigned int
#define ForkNumber int







Re: [HACKERS] Function with defval returns error

2008-12-16 Thread Pavel Stehule
2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:


 On Tue, Dec 16, 2008 at 5:35 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
 
  When we find that (pathpos > prevResult->pathpos) in
  FuncnameGetCandidates(), we just replace the prevResult with the
  newResult.
 
  While replacing the previous with the new one, we do not replace the args.
  I think in the case of defaults we need to take care of the args as well.
 

 Personally I prefer to raise an exception when we find a similar function; we
 don't need to emulate Oracle's behaviour.

 Raise an exception when we find a similar function -- do you mean a similar
 function with a different pathpos? Or a similar function with a defval?

I mean a similar function with a defval.

Pavel




 Regards
 Pavel Stehule

  Thanks,
  Rushabh
 
  On Tue, Dec 16, 2008 at 12:26 PM, Pavel Stehule
  pavel.steh...@gmail.com
  wrote:
 
  Hello
 
  I'll write a patch that blocks creating any ambiguous overloading.
 
  Regards
  Pavel Stehule
 
  2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
  
   Another issue found on CVS head 
  
   CREATE USER test WITH PASSWORD 'test';
   CREATE SCHEMA AUTHORIZATION test;
  
   CREATE OR REPLACE FUNCTION f_test(x in numeric) RETURNS numeric as $$
   BEGIN
   RETURN x;
   END;
   $$ language plpgsql;
  
   select f_test(10);
  
   \c postgres test;
  
   select f_test(10);
  
   CREATE OR REPLACE FUNCTION f_test(x in numeric, y in varchar default
   'Local
   Function with parameters') RETURNs numeric as $$
   BEGIN
   RETURN x+1;
   END;
   $$ language plpgsql;
  
   postgres= select f_test(10);
   ERROR:  cache lookup failed for type 2139062142
  
  
  
  
   On Tue, Dec 16, 2008 at 2:07 AM, Peter Eisentraut pete...@gmx.net
   wrote:
  
   On Monday 15 December 2008 15:43:00 Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
 Rushabh Lathia wrote:
 I think this should not return error as the input args here is
 timestamp... inputs?

 In theory yes, but it's currently not that smart.
   
This is truly horrid.  Was that patch *really* ready to commit?
I noticed some comments added to polymorphism.sql that certainly
look like there's still a lot of half-bakedness in it.
  
   There is that one case where a call that could be allowed is
   overly-cautiously
   rejected.  That only happens if you have a mix of overloading and
   default
   parameters.  It's not really half-baked in the sense that it is not
   digestible; it's just not the greatest cake yet.  It's
   improvement-compatible.
  
  
  
   --
   Rushabh Lathia
  
 
 
 
  --
  Rushabh Lathia
 



 --
 Rushabh Lathia




Re: [HACKERS] DROP ROLE dependency tracking ...

2008-12-16 Thread Alvaro Herrera
Bruce Momjian wrote:
 Alvaro Herrera wrote:
  Hans-Juergen Schoenig wrote:
  
   When thinking of REASSIGN OWNED, people tend to think about tables
   rather than about CONNECT rights.
   I would suggest making DROP ROLE just kill the role unless there is a
   real object depending on it.
   I would not consider a permission to be an object. What do you think?
  
  Yes, this might make some sense.  (Keep in mind that db CONNECT
  privileges were invented after REASSIGN OWNED).  Perhaps we could make
  exceptions -- in which case it would be good to investigate which
  exceptions we need (i.e. for all object types that we support, which
  ones we should be caring about and which ones we should ignore).
  
  I'm stuck in Canuckistan for a week still, so I expect your detailed
  proposal by when I get back home ;-)
 
 Was there any progress on this?

Not from my side --- and Hans-Juergen never got back to me either ...
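
(For the record, the current workaround is to clear the grants first; with the
role from Hans-Juergen's report it would be roughly:)

-- DROP OWNED also revokes privileges granted to the role, including CONNECT:
DROP OWNED BY xy;
DROP ROLE xy;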

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Function with defval returns error

2008-12-16 Thread Pavel Stehule
2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:

 When we find that (pathpos > prevResult->pathpos) in
 FuncnameGetCandidates(), we just replace the prevResult with the
 newResult.

 While replacing the previous with the new one, we do not replace the args. I
 think in the case of defaults we need to take care of the args as well.


Personally I prefer to raise an exception when we find a similar function; we
don't need to emulate Oracle's behaviour.

Regards
Pavel Stehule

 Thanks,
 Rushabh

 On Tue, Dec 16, 2008 at 12:26 PM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 Hello

 I'll write a patch that blocks creating any ambiguous overloading.

 Regards
 Pavel Stehule

 2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
 
  Another issue found on CVS head 
 
  CREATE USER test WITH PASSWORD 'test';
  CREATE SCHEMA AUTHORIZATION test;
 
  CREATE OR REPLACE FUNCTION f_test(x in numeric) RETURNS numeric as $$
  BEGIN
  RETURN x;
  END;
  $$ language plpgsql;
 
  select f_test(10);
 
  \c postgres test;
 
  select f_test(10);
 
  CREATE OR REPLACE FUNCTION f_test(x in numeric, y in varchar default
  'Local
  Function with parameters') RETURNs numeric as $$
  BEGIN
  RETURN x+1;
  END;
  $$ language plpgsql;
 
  postgres= select f_test(10);
  ERROR:  cache lookup failed for type 2139062142
 
 
 
 
  On Tue, Dec 16, 2008 at 2:07 AM, Peter Eisentraut pete...@gmx.net
  wrote:
 
  On Monday 15 December 2008 15:43:00 Tom Lane wrote:
   Peter Eisentraut pete...@gmx.net writes:
Rushabh Lathia wrote:
I think this should not return error as the input args here is
timestamp... inputs?
   
In theory yes, but it's currently not that smart.
  
   This is truly horrid.  Was that patch *really* ready to commit?
   I noticed some comments added to polymorphism.sql that certainly
   look like there's still a lot of half-bakedness in it.
 
  There is that one case where a call that could be allowed is
  overly-cautiously
  rejected.  That only happens if you have a mix of overloading and
  default
  parameters.  It's not really half-baked in the sense that it is not
  digestible; it's just not the greatest cake yet.  It's
  improvement-compatible.
 
 
 
  --
  Rushabh Lathia
 



 --
 Rushabh Lathia




Re: [HACKERS] DROP ROLE dependency tracking ...

2008-12-16 Thread Bruce Momjian

Added to TODO:

Allow dropping of a role that has connection rights

* http://archives.postgresql.org/pgsql-hackers/2008-05/msg00736.php 

---

Hans-Juergen Schoenig wrote:
 good morning,
 
 Some days ago I stumbled over an issue which feels more or less like 
 a bug. Consider:
 
 
 test=# create role xy LOGIN;
 CREATE ROLE
 
 test=# grant connect on database test to xy;
 GRANT
 
 test=# drop role xy;
 ERROR:  role xy cannot be dropped because some objects depend on it
 DETAIL:  access to database test
 
 This is a totally fresh instance --- all I did was create a db called 
 test.
 Failing would make sense if I were the owner of an object, but in fact I 
 don't own anything.
 
 test=# SELECT version();
  
 version 
 --
  PostgreSQL 8.4devel on x86_64-unknown-linux-gnu, compiled by GCC gcc 
 (GCC) 4.2.3 (Ubuntu 4.2.3-2ubuntu7)
 (1 row)
 
 is this a known issue?
 
 many thanks,
 
hans
 
 
 -- 
 Cybertec Schönig & Schönig GmbH
 PostgreSQL Solutions and Support
 Gröhrmühlgasse 26, A-2700 Wiener Neustadt
 Tel: +43/1/205 10 35 / 340
 www.postgresql-support.de, www.postgresql-support.com
 
 

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Fwd: [PATCHES] Auto Partitioning Patch - WIP version 1

2008-12-16 Thread Grzegorz Jaskiewicz


On 2008-12-16, at 07:58, ITAGAKI Takahiro wrote:



server says INSERT 0 row though rows are inserted into child  
tables.

Technically this is correct since 0 rows were inserted in the parent
table.


Yes, but users normally expect a non-zero result. Some O/R mapping tools
also check the result exactly and raise errors (this can be turned
off, but the default is on).


This is a general problem with triggers on inserts/updates/deletes. To  
be honest, I would love to see someone fix it in 8.4, because it is  
quite annoying that a developer is unable to figure out the number of rows  
affected just because there's a trigger on that table.





Re: [HACKERS] Coding TODO for 8.4: Synch Rep

2008-12-16 Thread Alvaro Herrera
Fujii Masao wrote:

 Since there are many TODO items, I'm worried about the deadline.
 When is the deadline of this commit fest? December 31st? first half
 of January? ...etc?

November 1st was the deadline.  We're now in feature freeze.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Function with defval returns error

2008-12-16 Thread Rushabh Lathia
On Tue, Dec 16, 2008 at 5:35 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
 
  When we find that (pathpos > prevResult->pathpos) in
  FuncnameGetCandidates(), we just replace the prevResult with the
  newResult.
 
  While replacing the previous with the new one, we do not replace the args.
  I think in the case of defaults we need to take care of the args as well.
 

 Personally I prefer to raise an exception when we find a similar function; we
 don't need to emulate Oracle's behaviour.


Raise an exception when we find a similar function -- do you mean a similar
function with a different pathpos? Or a similar function with a defval?



 Regards
 Pavel Stehule

  Thanks,
  Rushabh
 
  On Tue, Dec 16, 2008 at 12:26 PM, Pavel Stehule pavel.steh...@gmail.com
 
  wrote:
 
  Hello
 
  I'll write a patch that blocks creating any ambiguous overloading.
 
  Regards
  Pavel Stehule
 
  2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
  
   Another issue found on CVS head 
  
   CREATE USER test WITH PASSWORD 'test';
   CREATE SCHEMA AUTHORIZATION test;
  
   CREATE OR REPLACE FUNCTION f_test(x in numeric) RETURNS numeric as $$
   BEGIN
   RETURN x;
   END;
   $$ language plpgsql;
  
   select f_test(10);
  
   \c postgres test;
  
   select f_test(10);
  
   CREATE OR REPLACE FUNCTION f_test(x in numeric, y in varchar default
   'Local
   Function with parameters') RETURNs numeric as $$
   BEGIN
   RETURN x+1;
   END;
   $$ language plpgsql;
  
   postgres= select f_test(10);
   ERROR:  cache lookup failed for type 2139062142
  
  
  
  
   On Tue, Dec 16, 2008 at 2:07 AM, Peter Eisentraut pete...@gmx.net
   wrote:
  
   On Monday 15 December 2008 15:43:00 Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
 Rushabh Lathia wrote:
 I think this should not return error as the input args here is
 timestamp... inputs?

 In theory yes, but it's currently not that smart.
   
This is truly horrid.  Was that patch *really* ready to commit?
I noticed some comments added to polymorphism.sql that certainly
look like there's still a lot of half-bakedness in it.
  
   There is that one case where a call that could be allowed is
   overly-cautiously
   rejected.  That only happens if you have a mix of overloading and
   default
   parameters.  It's not really half-baked in the sense that it is not
   digestible; it's just not the greatest cake yet.  It's
   improvement-compatible.
  
  
  
   --
   Rushabh Lathia
  
 
 
 
  --
  Rushabh Lathia
 




-- 
Rushabh Lathia


Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Bruce Momjian
Robert Lor wrote:
 Bruce Momjian wrote:
  Should I apply this or hold it for 8.5?
 
 

 I think it should go into 8.4 as it also fixes existing problems.

I am seeing the following error when linking the backend on a BSD machine:

./../src/port/libpgport_srv.a -lssl -lcrypto -lgetopt -ldl -lutil -lm -o
postgres
storage/buffer/bufmgr.o: In function `ReadBuffer_common':
storage/buffer/bufmgr.o(.text+0x4e2): undefined reference to
`TRACE_POSTGRESQL_BUFFER_READ_DONE'
storage/smgr/md.o: In function `mdread':
storage/smgr/md.o(.text+0x8a7): undefined reference to
`TRACE_POSTGRESQL_SMGR_MD_READ_DONE'
storage/smgr/md.o: In function `mdwrite':
storage/smgr/md.o(.text+0xab6): undefined reference to
`TRACE_POSTGRESQL_SMGR_MD_WRITE_DONE'
gmake[2]: *** [postgres] Error 1
gmake[2]: Leaving directory
`/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/backend'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory
`/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src'


-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] non unique value error... pgsql,,help needed.

2008-12-16 Thread Tom Lane
Shri shripa...@gmail.com writes:
 When I enter the first value into the column (using EMS SQL Manager), it
 converts it to 1.111444E35

I think you need to complain to EMS.

regards, tom lane

PS: this is not a hacker-grade question.



Re: [HACKERS] non unique value error... pgsql,,help needed.

2008-12-16 Thread A. Kretschmer
In response to Shri :
 Dear friends,
 
 I am trying to import a database from Oracle to PostgreSQL.
 
 I am facing a difficulty when importing an Oracle numeric(38) column to a
 PostgreSQL numeric(38) column. In Oracle the column is unique, but PostgreSQL
 does not seem to correctly distinguish different, unique values in the
 numeric(38) field.
 
 For example:
 
 column: a numeric(38) primary key
 
 I get a non-unique value error for the following two values.
 1. 
 2. 9997  (<- both are different.)
 
 When I enter the first value into the column (using EMS SQL Manager), it
 converts it to 1.111444E35

Can you try to insert such values with plain psql? I have tried that and
it works without trouble:

test=# create table foo (id serial, n numeric(38));
NOTICE:  CREATE TABLE will create implicit sequence foo_id_seq for serial 
column foo.id
CREATE TABLE
test=*# insert into foo (n) values ();
INSERT 0 1
test=*# select * from foo;
 id |  n
+--
  1 | 
(1 row)

test=*# insert into foo (n) values (9997);
INSERT 0 1
test=*# select * from foo;
 id |  n
+--
  1 | 
  2 | 9997
(2 rows)

test=*# 


Which version are you using?

I think it's a problem with EMS SQL Manager, but I don't know that tool.
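
As a made-up illustration of that kind of rounding -- the 38-digit values below
are invented -- compare numeric(38) with a single-precision float:

-- numeric(38) keeps every digit:
SELECT 99999999999999999999999999999999999997::numeric(38);

-- real keeps only about 6 significant digits, so distinct values collapse:
SELECT 99999999999999999999999999999999999997::real,
       99999999999999999999999999999999999999::real;   -- both come out as 1e+38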



Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [HACKERS] [BUG?] UPDATE with RETURNING tableoid

2008-12-16 Thread Tom Lane
KaiGai Kohei kai...@ak.jp.nec.com writes:
 I found a strange behavior on the recent v8.4devel with/without
 SE-PostgreSQL patch set. Can you reproduce the following behavior?

 When I use UPDATE statement with RETURNING clause which contains
 references to tableoid system column, it returns InvalidOid.
 (The correct value is 16384 in this case.)
 However, RETURNING clause with INSERT/DELETE statement works well.

Confirmed here, all the way back to 8.2.  I concur it's a bug.

regards, tom lane



Re: [HACKERS] Review: Hot standby

2008-12-16 Thread Simon Riggs

On Fri, 2008-11-28 at 12:45 -0500, Tom Lane wrote:
 Simon Riggs si...@2ndquadrant.com writes:
  On Fri, 2008-11-28 at 11:44 -0500, Tom Lane wrote:
  I hadn't been following the discussion closely enough to know what the
  problem is. 
 
  When we replay an AccessExclusiveLock on the standby we need to kick off
  any current lock holders, after a configurable grace period. Current
  lock holders may include some read-only backends that are
  idle-in-transaction. SIGINT, which is what the current patch uses, is
  not sufficient to dislodge the idle backends.
 
 Hm.  People have complained of that fact from time to time in normal
 usage as well.  Should we simply change SIGINT handling to allow it to
 cancel an idle transaction?

I'm looking at allowing SIGINT to cancel an idle transaction.

Just edit StatementCancelHandler() in postgres.c so that it doesn't
ignore the signal when DoingCommandRead is set, at line 2577.

This patch does actually do what I wanted, but it has some unintended
consequences as well. These mask the fact that it does actually work,
which is confusing and has taken me a while to understand.

The backend accepts the signal and throws an error which then cancels
the transaction. The ERROR appears in the log immediately. However, a
psql client does not respond in any way when this occurs and only when a
new request is sent do we then generate the ERROR message on the client.
pg_stat_activity continues to show <IDLE> in transaction, even after
global xmin is higher than the xid of the cancelled backend.

Then afterwards the client gets out of sync with the server and starts
putting replies on the wrong messages. Wow.

I'm not sure why recv() doesn't return with EINTR, but I guess I'm about
to find out. Hopefully?

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support
Index: src/backend/tcop/postgres.c
===
RCS file: /home/sriggs/pg/REPOSITORY/pgsql/src/backend/tcop/postgres.c,v
retrieving revision 1.562
diff -c -r1.562 postgres.c
*** src/backend/tcop/postgres.c	13 Dec 2008 02:29:21 -	1.562
--- src/backend/tcop/postgres.c	16 Dec 2008 17:02:05 -
***
*** 2569,2580 
  		QueryCancelPending = true;
  
  		/*
! 		 * If it's safe to interrupt, and we're waiting for a lock, service
! 		 * the interrupt immediately.  No point in interrupting if we're
! 		 * waiting for input, however.
  		 */
! 		if (ImmediateInterruptOK && InterruptHoldoffCount == 0 &&
! 			CritSectionCount == 0 && !DoingCommandRead)
  		{
  			/* bump holdoff count to make ProcessInterrupts() a no-op */
  			/* until we are done getting ready for it */
--- 2569,2580 
  		QueryCancelPending = true;
  
  		/*
! 		 * Service the interrupt immediately if we are waiting for input,
! 		 * or if it's safe to interrupt, and we're waiting for a lock.
  		 */
! 		if (DoingCommandRead || 
! 			(ImmediateInterruptOK && InterruptHoldoffCount == 0 &&
! 			 CritSectionCount == 0))
  		{
  			/* bump holdoff count to make ProcessInterrupts() a no-op */
  			/* until we are done getting ready for it */



Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Alvaro Herrera
Bruce Momjian wrote:

 I am seeing the following error when linking the backend on a BSD machine:
 
   ./../src/port/libpgport_srv.a -lssl -lcrypto -lgetopt -ldl -lutil -lm -o
   postgres
   storage/buffer/bufmgr.o: In function `ReadBuffer_common':
   storage/buffer/bufmgr.o(.text+0x4e2): undefined reference to
   `TRACE_POSTGRESQL_BUFFER_READ_DONE'
   storage/smgr/md.o: In function `mdread':
   storage/smgr/md.o(.text+0x8a7): undefined reference to

The reason is that Gen_dummy_probes.sed handles only up to 6 args, and
this function has 7.  Just add one more line to that file.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Alvaro Herrera
Robert Lor wrote:
 Peter Eisentraut wrote:
 Robert Lor wrote:

 The attached patch contains a couple of fixes in the existing probes  
 and includes a few new ones.

 - Fixed compilation errors on OS X for probes that use typedefs

 Could you explain what these errors are about?  I don't see any errors  
 on my machine.

 In the current probes.d, the following probe definitions are commented  
 out because they cause compilation errors on OS X.

Yeah, this was something we agreed to fix when we committed the previous
DTrace patch.  The current code was said to be a stopgap.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Robert Lor

Bruce Momjian wrote:

Should I apply this or hold it for 8.5?


  

I think it should go into 8.4 as it also fixes existing problems.

-Robert




Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Peter Eisentraut

Robert Lor wrote:


The attached patch contains a couple of fixes in the existing probes and 
includes a few new ones.


- Fixed compilation errors on OS X for probes that use typedefs


Could you explain what these errors are about?  I don't see any errors 
on my machine.





Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Robert Lor

Bruce Momjian wrote:


I am seeing the following error when linking the backend on a BSD machine:


  
The updated patch attached should fix this problem. My bad for 
overlooking this.


-Robert


Index: src/backend/access/transam/xlog.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/xlog.c,v
retrieving revision 1.322
diff -u -3 -p -r1.322 xlog.c
--- src/backend/access/transam/xlog.c   9 Nov 2008 17:51:15 -   1.322
+++ src/backend/access/transam/xlog.c   16 Dec 2008 19:46:08 -
@@ -48,6 +48,7 @@
 #include "utils/builtins.h"
 #include "utils/guc.h"
 #include "utils/ps_status.h"
+#include "pg_trace.h"
 
 
 /* File path names (all relative to $PGDATA) */
@@ -486,6 +487,8 @@ XLogInsert(RmgrId rmid, uint8 info, XLog
	if (info & XLR_INFO_MASK)
		elog(PANIC, "invalid xlog info mask %02X", info);
 
+   TRACE_POSTGRESQL_XLOG_INSERT(rmid, info);
+
/*
 * In bootstrap mode, we don't actually log anything but XLOG resources;
 * return a phony record pointer.
@@ -914,6 +917,8 @@ begin:;
XLogwrtRqst FlushRqst;
XLogRecPtr  OldSegEnd;
 
+   TRACE_POSTGRESQL_XLOG_SWITCH();
+
LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
 
/*
@@ -1313,12 +1318,14 @@ AdvanceXLInsertBuffer(bool new_segment)
 * Have to write buffers while holding insert lock. This is
 * not good, so only write as much as we absolutely must.
 */
+   TRACE_POSTGRESQL_WAL_BUFFER_WRITE_START();
WriteRqst.Write = OldPageRqstPtr;
WriteRqst.Flush.xlogid = 0;
WriteRqst.Flush.xrecoff = 0;
XLogWrite(WriteRqst, false, false);
LWLockRelease(WALWriteLock);
Insert->LogwrtResult = LogwrtResult;
+   TRACE_POSTGRESQL_WAL_BUFFER_WRITE_DONE();
}
}
}
@@ -5904,6 +5911,8 @@ CreateCheckPoint(int flags)
if (log_checkpoints)
LogCheckpointStart(flags);
 
+   TRACE_POSTGRESQL_CHECKPOINT_START(flags);
+
/*
 * Before flushing data, we must wait for any transactions that are
 * currently in their commit critical sections.  If an xact inserted its
@@ -6069,6 +6078,11 @@ CreateCheckPoint(int flags)
if (log_checkpoints)
LogCheckpointEnd();
 
+TRACE_POSTGRESQL_CHECKPOINT_DONE(CheckpointStats.ckpt_bufs_written,
+NBuffers, CheckpointStats.ckpt_segs_added,
+CheckpointStats.ckpt_segs_removed,
+CheckpointStats.ckpt_segs_recycled);
+
LWLockRelease(CheckpointLock);
 }
 
Index: src/backend/storage/buffer/bufmgr.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/storage/buffer/bufmgr.c,v
retrieving revision 1.242
diff -u -3 -p -r1.242 bufmgr.c
--- src/backend/storage/buffer/bufmgr.c 19 Nov 2008 10:34:52 -  1.242
+++ src/backend/storage/buffer/bufmgr.c 16 Dec 2008 19:46:11 -
@@ -203,8 +203,7 @@ ReadBuffer_common(SMgrRelation smgr, boo
if (isExtend)
blockNum = smgrnblocks(smgr, forkNum);
 
-   TRACE_POSTGRESQL_BUFFER_READ_START(blockNum, smgr->smgr_rnode.spcNode,
-   smgr->smgr_rnode.dbNode, smgr->smgr_rnode.relNode, isLocalBuf);
+   TRACE_POSTGRESQL_BUFFER_READ_START(forkNum, blockNum, smgr->smgr_rnode.spcNode,
+   smgr->smgr_rnode.dbNode, smgr->smgr_rnode.relNode, isLocalBuf);
 
if (isLocalBuf)
{
@@ -253,7 +252,7 @@ ReadBuffer_common(SMgrRelation smgr, boo
if (VacuumCostActive)
VacuumCostBalance += VacuumCostPageHit;
 
-   TRACE_POSTGRESQL_BUFFER_READ_DONE(blockNum,
+   TRACE_POSTGRESQL_BUFFER_READ_DONE(forkNum, blockNum,
    smgr->smgr_rnode.spcNode,
    smgr->smgr_rnode.dbNode,
    smgr->smgr_rnode.relNode, isLocalBuf, found);
@@ -380,9 +379,9 @@ ReadBuffer_common(SMgrRelation smgr, boo
if (VacuumCostActive)
VacuumCostBalance += VacuumCostPageMiss;
 
-   TRACE_POSTGRESQL_BUFFER_READ_DONE(blockNum, smgr->smgr_rnode.spcNode,
-   smgr->smgr_rnode.dbNode, smgr->smgr_rnode.relNode,
-   isLocalBuf, found);
+   TRACE_POSTGRESQL_BUFFER_READ_DONE(forkNum, blockNum,
+   smgr->smgr_rnode.spcNode, smgr->smgr_rnode.dbNode,
+   smgr->smgr_rnode.relNode, isLocalBuf, found);
 
return BufferDescriptorGetBuffer(bufHdr);
 }
@@ 

[HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
Consider

create function foo(f1 int, f2 int = 42, f3 int = 43) ...
create view v1 as select foo(11);

In CVS HEAD this gives

regression=# \d v1
   View public.v1
 Column |  Type   | Modifiers 
+-+---
 foo| integer | 
View definition:
 SELECT foo(11, 42, 43) AS foo;

which is an accurate representation of the truth: if you change the
defaults for function foo, v1 will keep on calling it with the old
default values.
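
(A self-contained version of the example, with a made-up function body:)

CREATE FUNCTION foo(f1 int, f2 int = 42, f3 int = 43)
    RETURNS int AS 'SELECT $1 + $2 + $3' LANGUAGE sql;
CREATE VIEW v1 AS SELECT foo(11);

-- later, change the defaults...
CREATE OR REPLACE FUNCTION foo(f1 int, f2 int = 142, f3 int = 143)
    RETURNS int AS 'SELECT $1 + $2 + $3' LANGUAGE sql;

-- ...but the view still carries the values captured at CREATE VIEW time:
SELECT pg_get_viewdef('v1');   -- still SELECT foo(11, 42, 43) AS foo;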

Does anyone think this is either unsurprising or desirable?

I'm not sure we can do much to fix it, though.  It'd probably be
possible to have the rewriter or planner insert the default expressions,
instead of the parser; but that creates its own issues.  Suppose I had
v1 defined as above and then did

create or replace function foo(f1 int, f2 int, f3 int = 43) ...

ie, remove one or more default expressions.  *This function definition
no longer matches the original call*.  If we did plan-time insertion of
defaults we'd have little choice but to fail when v1 is executed,
because there'd be no principled way to insert a default for f2.
Treating the defaults as being inserted at parse time at least ensures
that v1's call to foo still works.

This at least needs documentation, I think.

Comments?

regards, tom lane



Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread David E. Wheeler

On Dec 16, 2008, at 9:25 PM, Tom Lane wrote:


Consider

create function foo(f1 int, f2 int = 42, f3 int = 43) ...
create view v1 as select foo(11);

In CVS HEAD this gives

regression=# \d v1
  View public.v1
Column |  Type   | Modifiers
+-+---
foo| integer |
View definition:
SELECT foo(11, 42, 43) AS foo;

which is an accurate representation of the truth: if you change the
defaults for function foo, v1 will keep on calling it with the old
default values.


Ooh. Ow.


Does anyone think this is either unsurprising or desirable?


Not I!


I'm not sure we can do much to fix it, though.  It'd probably be
possible to have the rewriter or planner insert the default  
expressions,

instead of the parser; but that creates its own issues.


Would the same thing happen for a prepared statement that calls the  
function? Or another function?



Suppose I had v1 defined as above and then did

create or replace function foo(f1 int, f2 int, f3 int = 43) ...

ie, remove one or more default expressions.  *This function definition
no longer matches the original call*.  If we did plan-time insertion  
of

defaults we'd have little choice but to fail when v1 is executed,
because there'd be no principled way to insert a default for f2.


That seems like it'd be the reasonable thing to do.


Treating the defaults as being inserted at parse time at least ensures
that v1's call to foo still works.


That leads to mysterious action-at-a-distance bugs, though. Too weird.


This at least needs documentation, I think.

Comments?


Documentation at least, yes, but it'd be better, I think, if the  
planner inserted the defaults.


Best,

David




[HACKERS] Elide null updates

2008-12-16 Thread James Mansion
I saw in a summary for 8.4 that there's a generic function for use as a 
before-row trigger that will elide null updates (i.e. replacement with an 
identical row).


I can see that this is great - but I was wondering if it should be more 
integrated and turned on in the db schema.


Trivially, doing so would mean that there is less of an issue trying to 
integrate with a custom user before trigger, although that's really no 
more than an inconvenience.


I understand that there is an argument for not making it the default 
behaviour, given that trigger execution and locking are both affected if 
we do elide the update completely, but it occurred to me that while I 
might want the trigger and locking behaviour, I probably never want the 
actual database image copy to happen. Doing so will needlessly bloat the 
database file and give the vacuum procedure work to do - and it seems to 
interfere with the new optimisations relating to pages that are all 
visible in all transactions.


Would it be possible to determine a null update cheaply and retain the 
locking and trigger execution, but elide the actual row copy - and in 
particular the associated impact in terms of setting status flags etc?


I guess this would need to be handled at a lower level than the trigger 
approach - and would need an option that is integrated into the schema, 
so we can elide the copy, and optionally the trigger execution, and 
optionally the lock.


James






Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
David E. Wheeler da...@kineticode.com writes:
 On Dec 16, 2008, at 9:25 PM, Tom Lane wrote:
 I'm not sure we can do much to fix it, though.  It'd probably be
 possible to have the rewriter or planner insert the default  
 expressions, instead of the parser; but that creates its own issues.

 Would the same thing happen for a prepared statement that calls the  
 function? Or another function?

Prepared statements and functions don't have such a problem, because
they don't have a long-lived parsetree representation.  If you change
the defaults attached to an existing function, that will result in an
invalidation on all plans referencing the function (thanks to some
changes that went in a commitfest or two back), and things should
pretty much just work.  A view or rule is a bigger problem because
it will have a hard binding to a particular function OID.

 Suppose I had v1 defined as above and then did
 
  create or replace function foo(f1 int, f2 int, f3 int = 43) ...
 
 ie, remove one or more default expressions.  *This function definition
 no longer matches the original call*.  If we did plan-time insertion of
 defaults we'd have little choice but to fail when v1 is executed,
 because there'd be no principled way to insert a default for f2.

 That seems like it'd be the reasonable thing to do.

I'm not too thrilled about it.  One thing to consider is that with the
default gone, it might be that there is some other function that matches
the textual call better than this one.  But we can't really change the
view to reference that other function.  So it's going to work
differently than the replan-from-source case, no matter what.

In some ways this is analogous to the problem of when do you expand *
in a SELECT list?.  The SQL spec is clear that it's expanded at CREATE
VIEW time; but there are certainly lots of cases where people wish it
didn't work like that.
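
(The familiar illustration being something like:)

CREATE TABLE t (a int);
CREATE VIEW vstar AS SELECT * FROM t;
ALTER TABLE t ADD COLUMN b int;
SELECT * FROM vstar;   -- still shows only column "a"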

regards, tom lane



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-16 Thread tmp

You could add it to here -- note that if we decide it isn't worth it it'll
just get removed.


Which category would you recommend? Optimizer / Executor?



Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread David E. Wheeler

On Dec 16, 2008, at 10:15 PM, Tom Lane wrote:


Would the same thing happen for a prepared statement that calls the
function? Or another function?


Prepared statements and functions don't have such a problem, because
they don't have a long-lived parsetree representation.  If you change
the defaults attached to an existing function, that will result in an
invalidation on all plans referencing the function (thanks to some
changes that went in a commitfest or two back), and things should
pretty much just work.  A view or rule is a bigger problem because
it will have a hard binding to a particular function OID.


Well, that's good, except that the behavior then becomes inconsistent  
in rules and views vs. everything else. I understand the underlying  
reasons for this (thanks to your reply here), but those not familiar  
with the architecture will find it mystifying. Documentation, at the  
very least, is certainly in order.



That seems like it'd be the reasonable thing to do.


I'm not too thrilled about it.  One thing to consider is that with the
default gone, it might be that there is some other function that  
matches

the textual call better than this one.  But we can't really change the
view to reference that other function.  So it's going to work
differently than the replan-from-source case, no matter what.


Bleh.


In some ways this is analogous to the problem of when do you expand *
in a SELECT list?.  The SQL spec is clear that it's expanded at  
CREATE

VIEW time; but there are certainly lots of cases where people wish it
didn't work like that.


Yeah, function default values aren't specified by the spec, are they?

Best,

David




Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
David E. Wheeler da...@kineticode.com writes:
 On Dec 16, 2008, at 10:15 PM, Tom Lane wrote:
 I'm not too thrilled about it.  One thing to consider is that with the
 default gone, it might be that there is some other function that  
 matches
 the textual call better than this one.  But we can't really change the
 view to reference that other function.  So it's going to work
 differently than the replan-from-source case, no matter what.

 Bleh.

I haven't really thought through the consequences of this, but one thing
we could consider doing to tamp down the damage is to prohibit changing
the number of defaultable parameters of an existing function, ie treat
pronargdefaults as not allowed to change during CREATE OR REPLACE
FUNCTION.  The point here would be to ensure that function replacement
couldn't change the parser's decisions about whether a function matches
a call or not; which is the case in existing releases, but has been
falsified by this patch.

If that's acceptable, then we could insert default expressions at plan
time with confidence that no defaults we need have disappeared under us.

(It might be enough to not allow pronargdefaults to decrease.  Not sure
about that though.  It's certainly possible that adding a default could
make a call ambiguous when it was not before.)

There's another slightly annoying issue here, which is that in at least
some cases, inserting defaults at plan time would require an additional
traversal of the parsetree.  This isn't a huge deal probably, but it
would result in some performance loss in long-but-simple queries.

regards, tom lane



[HACKERS] WIP: pre-upgrade page reservation

2008-12-16 Thread Zdenek Kotala
I attached a contrib module which is the base of the pre-upgrade script. It
should be part of 8.4, but it will be required for the 8.4->8.5 upgrade.


This part contains space reservation for heap/toast relations. The idea is
that the relation is read and each block is checked for enough free space.
Tuples which will not be visible after the upgrade are not counted. If there
is not enough space, we call simple_heap_update on tuple(s) until we free
enough space.



BTree space reservation is more complicated. I plan to use _bt_split and split
the page into two half pages with the following code:


firstright = _bt_findsplitloc(rel, page, InvalidOffsetNumber, 0, &newitemonleft);
_bt_split(rel, buffer, firstright, InvalidOffsetNumber, 0, NULL, newitemonleft);
_bt_insert_parent(rel, buffer, rbuffer, stack, is_root, is_only);

Because both functions (_bt_findsplitloc, _bt_split) expect that we want to
insert a new item, it will require modification to accept InvalidOffsetNumber.


Another problem is building the stack, which requires a deep tree scan. I hope
that it will not require an exclusive lock on the index.



I have not yet looked at hash, GiST and GIN. I think that hash indexes should
be easy, because index tuples can be moved into a new bucket page. (Note: a
general problem with hash indexes is still the bitmap pages.)


I guess the solution for GiST indexes should be similar to BTree, but I don't
have any idea about GIN.


Comments, ideas, better solutions?

thanks Zdenek

PS: This patch requires previous patch which implemented space reservation 
functionality.




diff -r 84e2e9c42ef7 contrib/pg_upgrade/Makefile
--- /dev/null	Thu Jan 01 00:00:00 1970 +
+++ b/contrib/pg_upgrade/Makefile	Fri Dec 12 11:32:03 2008 +0100
@@ -0,0 +1,24 @@
+#-
+#
+# pg_upgrade Makefile
+#
+# $PostgreSQL:  $
+#
+#-
+
+MODULE_big	= pg_upgrade
+OBJS		= pg_upgrade.o rs_heap.o rs_nbtree.o
+DATA_built	= pg_upgrade.sql
+#DATA  	= uninstall_pgstattuple.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/pg_upgrade
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
+
diff -r 84e2e9c42ef7 contrib/pg_upgrade/pg_upgrade.c
--- /dev/null	Thu Jan 01 00:00:00 1970 +
+++ b/contrib/pg_upgrade/pg_upgrade.c	Fri Dec 12 11:32:03 2008 +0100
@@ -0,0 +1,133 @@
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "funcapi.h"
+#include "access/clog.h"
+#include "access/heapam.h"
+#include "access/htup.h"
+#include "catalog/namespace.h"
+#include "storage/bufmgr.h"
+#include "storage/bufpage.h"
+#include "storage/relfilenode.h"
+#include "utils/builtins.h"
+#include "utils/rel.h"
+
+#include "miscadmin.h"
+
+
+PG_MODULE_MAGIC;
+
+
+PG_FUNCTION_INFO_V1(rscheck_rel_by_oid);
+PG_FUNCTION_INFO_V1(rscheck_rel_by_name);
+
+/* 
+ * Return list of visible items, its count and size. We don't need count
+ * dead and deleted tuples.
+ * List is ended with zeroes 
+ */
+HeapTuple
+page_get_visible_items(Page page, BlockNumber block, int *items, Size *size, bool isHeap)
+{
+	OffsetNumber 	i;
+	OffsetNumber 	maxoff;
+	ItemId			iid;
+	HeapTupleHeader tph;
+	HeapTuple		vi_list = NULL;
+	int				count = 0;
+	Size			occupied = 0;
+
+	maxoff = PageGetMaxOffsetNumber(page);
+
+	if(isHeap)
+		vi_list = palloc0(sizeof(HeapTupleData)*(maxoff+1));
+		
+	for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
+	{
+		iid = PageGetItemId(page, i);
+		if(ItemIdIsNormal(iid))
+		{
+			if(isHeap)
+			{
+XLogRecPtr lsn;
+tph = (HeapTupleHeader) PageGetItem(page, iid);
+
+/* we need to count only really visible tuples. Other should be removed during page conversion */
+if( TRANSACTION_STATUS_COMMITTED != TransactionIdGetStatus(HeapTupleHeaderGetXmin(tph), &lsn) ||
+	TRANSACTION_STATUS_COMMITTED == TransactionIdGetStatus(HeapTupleHeaderGetXmax(tph), &lsn) )
+continue;
+
+vi_list[count].t_data = (HeapTupleHeader) PageGetItem(page, iid);
+vi_list[count].t_len = ItemIdGetLength(iid);
+ItemPointerSet(&(vi_list[count].t_self), block, i);
+			}	
+			occupied += MAXALIGN(ItemIdGetLength(iid));
+			count++;
+		}
+	}
+
+	if(items != NULL)
+		*items = count;
+	if(size != NULL)
+		*size = occupied;
+
+	return vi_list;
+}
+
+int
+rscheck_rel(Relation rel)
+{
+	uint32		blkcnt;
+	uint32 		modified = 0;  
+
+	blkcnt = RelationGetNumberOfBlocks(rel);
+//	elog(NOTICE,Start reserved space check - relation %s,rel-rd_rel-relname);
+	elog(NOTICE,Total blocks for processing: %i, blkcnt);
+
+	switch(rel-rd_rel-relkind)
+	{
+		case 'r' :
+		case 't' : rs_heap(rel);
+   break;
+		default : elog(ERROR, Cannot reserve a space. Unsupported relation kind.);
+	}
+	elog(NOTICE,Total modified blocks: %i, modified);
+
+	return 0;
+}
+
+Datum
+rscheck_rel_by_name(PG_FUNCTION_ARGS)
+{
+	text   *relname = 

Re: [HACKERS] Elide null updates

2008-12-16 Thread Andrew Dunstan



James Mansion wrote:
I saw on a summary for 8.4 that there's a generic function for use as 
a before row trigger that will elide null updates (ie replacement with 
an identical row).


I can see that this is great - but I was wondering if it should be 
more integrated and turned on in the db schema.


Trivially, doing so would mean that there is less of an issue trying 
to integrate with a custom user before trigger, although that's really 
no more than an inconvenience.


I understand that there is an argument for not making it the default 
behaviour given that trigger execution and locking are both affected 
if we do elide the update completely, but it occurred to me that while 
I might want the trigger and locking behaviour, I probably never want 
the actual database image copy to happen. Doing so will needlessly 
bloat the database file and give the vacuum procedure work to do - and 
it seems interfere with the new optimisations relating to pages that 
are all visible in all transactions.


Would it be possible to determine a null update cheaply and retain the 
locking and trigger execution, but elide the actual row copy - and in 
particular the associated impact in terms of setting status flags etc?


I guess this would need to be handled at a lower level than the 
trigger approach - and would need an option that is integrated into 
the schema, so we can elide the copy, and optionally the trigger 
execution, and optionally the lock.





I don't follow what you're saying.

If an update is skipped by a trigger, nothing new is written to disk, 
and there should be nothing to vacuum from it. That's why this trigger 
can speed up certain update queries enormously.
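
(For anyone who hasn't seen it, the trigger in question is presumably the new
suppress_redundant_updates_trigger(); a minimal sketch of attaching it, with a
made-up table:)

CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
INSERT INTO accounts VALUES (1, 100);

-- trigger names sort alphabetically, so the "z_" prefix makes it fire last
CREATE TRIGGER z_suppress_redundant
    BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();

UPDATE accounts SET balance = balance;   -- no-op update: reports UPDATE 0, writes nothing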



cheers

andrew



Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread David E. Wheeler

On Dec 16, 2008, at 10:36 PM, Tom Lane wrote:

I haven't really thought through the consequences of this, but one  
thing
we could consider doing to tamp down the damage is to prohibit  
changing

the number of defaultable parameters of an existing function, ie treat
pronargdefaults as not allowed to change during CREATE OR REPLACE
FUNCTION.  The point here would be to ensure that function replacement
couldn't change the parser's decisions about whether a function  
matches

a call or not; which is the case in existing releases, but has been
falsified by this patch.

If that's acceptable, then we could insert default expressions at plan
time with confidence that no defaults we need have disappeared under  
us.


Wouldn't you still have the problem if you still allow the default  
values to be changed? And I would hope that they could be changed!


There's another slightly annoying issue here, which is that in at  
least
some cases, inserting defaults at plan time would require an  
additional

traversal of the parsetree.  This isn't a huge deal probably, but it
would result in some performance loss in long-but-simple queries.


Yes, and it avoids the principle of least surprise.

Best,

David




Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
David E. Wheeler da...@kineticode.com writes:
 On Dec 16, 2008, at 10:36 PM, Tom Lane wrote:
 ... The point here would be to ensure that function replacement
 couldn't change the parser's decisions about whether a function matches
 a call or not; which is the case in existing releases, but has been
 falsified by this patch.
 
 If that's acceptable, then we could insert default expressions at plan
 time with confidence that no defaults we need have disappeared under  
 us.

 Wouldn't you still have the problem if you still allow the default  
 values to be changed? And I would hope that they could be changed!

No, you could change the *values* of the default expressions.  What
you'd not be allowed to do is remove a default entirely.  (Or, perhaps,
add one; I'm less sure about that.)  The point here is that adding or
removing a default changes the set of calls a function could possibly
match, just as changing the list of parameter types changes what it
can match.  We don't allow the latter and I'm thinking we shouldn't
allow the former either.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Elide null updates

2008-12-16 Thread James Mansion

Andrew Dunstan wrote:

I don't follow what you're saying.

If an update is skipped by a trigger, nothing new is written to disk, 
and there should be nothing to vacuum from it. That's why this trigger 
can speed up certain update queries enormously.

OK I'll try again.

Suppose we do an update.

This will:
- write the new image and do the MVCC housekeeping
- logically lock the updated row
- fire update triggers

Suppose the pre-update trigger elides the update - ALL of the above is 
removed.
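
For concreteness, a minimal sketch of the trigger-based approach under
discussion, assuming the 8.4 built-in trigger function is indeed named
suppress_redundant_updates_trigger() (the table and trigger names below
are invented for illustration):

create table t (id int primary key, val text);

create trigger t_skip_noop_updates
    before update on t
    for each row
    execute procedure suppress_redundant_updates_trigger();

-- a no-op update is now skipped for unchanged rows: no new row version
-- is written, so there is nothing extra for VACUUM to clean up later
update t set val = val;

As described above, skipping the update this way also skips the row lock
and any later triggers for that row, which is precisely the behaviour
change being debated.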


Now, one of the objections to making it the default behaviour is that 
the side effects (such as the lock and the trigger) might be desirable, 
or at least that removing them is a change in behaviour.


I'm wondering whether it would be possible to remove the physical update 
but retain the logical side effects, so this argument about changed 
semantics goes away, and the only issue is whether the cost of 
identifying the no-op update is worthwhile given the savings achieved, 
which will be application dependent.


James

(I suspect that if you step back from the implementation of the SQL 
engine as a series of procedural steps on rows - and think of it in 
terms of relational set algebra - then it is entirely defensible to elide 
such an update as a matter of course, and that it SHOULD happen. But 
then there are always fun and games around inserting duplicates too, and 
I suspect most of us don't think in algebraic terms.)



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Elide null updates

2008-12-16 Thread Tom Lane
James Mansion ja...@mansionfamily.plus.com writes:
 I saw on a summary for 8.4 that there's a generic function for use as a 
 before row trigger that will elide null updates (ie replacement with an 
 identical row).

 I can see that this is great - but I was wondering if it should be more 
 integrated and turned on in the db schema.

How about we wait a few releases and see if anyone uses the trigger,
rather than wasting time now on an argument about integrating an
as-yet-unproven feature?

(You *are* wasting our time, btw, because we already had this
discussion.  Until there's some field experience there is no new
evidence available to change the conclusion.)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
And while we're on the subject ... as the patch stands, it lets you
provide defaults for polymorphic parameters, as in

regression=# create function eq (f1 anyelement, f2 anyelement default 42) 
returns bool as 'select $1 = $2' language sql;
CREATE FUNCTION
regression=# select eq(now(),now());
 eq 
----
 t
(1 row)

regression=# select eq(now());
ERROR:  arguments declared "anyelement" are not all alike
DETAIL:  timestamp with time zone versus integer
regression=# select eq(11,12);
 eq 
----
 f
(1 row)

regression=# select eq(11);
 eq 
----
 f
(1 row)

regression=# select eq(42);
 eq 
----
 t
(1 row)

The reason this is pretty wacko is that changing the default can change the
set of calls the function can match:

regression=# create or replace function eq (f1 anyelement, f2 anyelement 
default now()) returns bool as 'select $1 = $2' language sql;
CREATE FUNCTION
regression=# select eq(now());  
 eq 
----
 t
(1 row)

regression=# select eq(42);
ERROR:  arguments declared "anyelement" are not all alike
DETAIL:  integer versus timestamp with time zone

In fact, it's worse than that: changing the default can change the
resolved output type of the function.

regression=# create function dupl (f1 anyelement default 42) returns anyelement 

as 'select $1' language sql;
CREATE FUNCTION
regression=# select dupl();
 dupl 
------
   42
(1 row)

regression=# create or replace function dupl (f1 anyelement default now()) 
returns anyelement
as 'select $1' language sql;
CREATE FUNCTION
regression=# select dupl();
             dupl              
-------------------------------
 2008-12-16 17:39:41.418412-05
(1 row)

Isn't *that* special.  Needless to say, this would utterly break any
view or rule referencing the function.

I'm inclined to think we should forbid defaults for polymorphic
parameters, and wondered if anyone can come up with an actually useful
use-case that'd require it.  If we were going to allow it, we'd at least
have to restrict things enough so that the resolved output type couldn't
change.

(The reason I stumbled across this was that the current behavior is an
artifact of inserting the default expressions at parse time; in fact,
before resolving polymorphic types.  It would get very much more painful
to support any sort of behavior along this line if we don't do that.)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Josh Berkus

Tom Lane wrote:

Consider

create function foo(f1 int, f2 int = 42, f3 int = 43) ...
create view v1 as select foo(11);

In CVS HEAD this gives

regression=# \d v1
   View "public.v1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 foo    | integer | 
View definition:
 SELECT foo(11, 42, 43) AS foo;

which is an accurate representation of the truth: if you change the
defaults for function foo, v1 will keep on calling it with the old
default values.

Does anyone think this is either unsurprising or desirable?


Huh?  Shouldn't changing a function which a view depends on require a 
rebuild of the view?


That is, I think we should treat changing the defaults the same as we 
would changing the number and type of parameters; it kicks off a 
dependency check and requires a CASCADE.
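
The dependency machinery this alludes to does exist for dropping, at
least; a rough sketch (object names invented here):

create function f(i int) returns int language sql as 'select $1';
create view v as select f(1) as x;

drop function f(int);           -- fails: view v depends on function f
drop function f(int) cascade;   -- drops view v as well

What is missing is any re-check of dependent views when a function is
merely redefined with CREATE OR REPLACE FUNCTION, which is the gap the
reply below points out.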


--Josh

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 Huh?  Shouldn't changing a function which a view depends on require a 
 rebuild of the view?

There is no such concept as a rebuild of the view.

 That is, I think we should treat changing the defaults the same as we 
 would changing the number and type of parameters; it kicks off a 
 dependency check and requires a CASCADE.

Dream on ... there is no such facility in Postgres and we are not going
to build one in the 8.4 timeframe.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Gregory Stark
Tom Lane t...@sss.pgh.pa.us writes:

 Josh Berkus j...@agliodbs.com writes:

 That is, I think we should treat changing the defaults the same as we 
 would changing the number and type of parameters; it kicks off a 
 dependency check and requires a CASCADE.

 Dream on ... there is no such facility in Postgres and we are not going
 to build one in the 8.4 timeframe.

Well there is this:

postgres=#  create or replace function foo (text) returns text as 'select 1' 
language sql;
ERROR:  42P13: cannot change return type of existing function
HINT:  Use DROP FUNCTION first.
LOCATION:  ProcedureCreate, pg_proc.c:366

We could say that changing the type of a default argument for a polymorphic
argument isn't allowed just like changing the return value.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
Gregory Stark st...@enterprisedb.com writes:
 We could say that changing the type of a default argument for a polymorphic
 argument isn't allowed just like changing the return value.

The point I was trying to make is that allowing defaults for polymorphic
args at all is going to cause a very significant amount of work and
complication, of which enforcing the above check is just a small part.
I wanted to see a plausible use-case for it before expending that work.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Tom Lane
Oh, and another thing --- should variadic parameters be defaultable?
The current patch doesn't allow it but it looks more like an oversight
than anything that was thought through.  The boundary case for variadic
parameters is a bit weird already:

regression=# create function fv (f1 int, f2 variadic int[]) returns int
regression-# as 'select $1' language sql;
CREATE FUNCTION
regression=# select fv(1,2);
 fv 
----
  1
(1 row)

regression=# select fv(1,2,3);
 fv 
----
  1
(1 row)

regression=# select fv(1);
ERROR:  function fv(integer) does not exist
LINE 1: select fv(1);
   ^
HINT:  No function matches the given name and argument types. You might need to 
add explicit type casts.

ISTM one could make a pretty good argument that this last case should
succeed, producing an empty-array argument.  If you buy that, then it
is sensible to forbid defaults for variadics, because a default would
conflict with this behavior (ie, there would be sort of a
system-supplied default of an empty array).  On the other hand, if we
don't want to change this behavior, then I'd like to be able to specify
array[]::int[] as the variadic's default so that I can make the corner
case go away when I want to.  Either way the current behavior seems
unsatisfactory.

A related point is that, because the current code forbids a default
for a variadic, you can't do something like

create function foo (f1 int, f2 int default = 42, f3 variadic int[] = 
array[]::int[])

ie there's no way to have defaults on the preceding parameters either.
I don't know how useful that is, but maybe it's an argument for adopting
the second solution where you can explicitly specify a default for a
variadic.

Comments?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Gregory Stark
Tom Lane t...@sss.pgh.pa.us writes:

 A related point is that, because the current code forbids a default
 for a variadic, you can't do something like

 create function foo (f1 int, f2 int default = 42, f3 variadic int[] = 
 array[]::int[])

 ie there's no way to have defaults on the preceding parameters either.
 I don't know how useful that is, but maybe it's an argument for adopting
 the second solution where you can explicitly specify a default for a
 variadic.

 Comments?

Well if you adopt the implicit empty array argument for missing variadic
parameters then you can just allow defaults for trailing parameters before the
final parameter.

I'm inclined to think an implicit empty array makes the most sense. If a
function-writer wants to enforce a minimum number of arguments they can check
and throw an error.
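
A sketch of that check-and-throw pattern, assuming the variadic
parameter really would arrive as an empty array when no values are
supplied (the function name here is invented):

create function concat_all(sep text, variadic parts text[]) returns text
language plpgsql as $$
begin
    -- enforce a minimum of one variadic argument by hand
    if coalesce(array_length(parts, 1), 0) = 0 then
        raise exception 'at least one value is required';
    end if;
    return array_to_string(parts, sep);
end;
$$;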

The question arises though whether it's useful to have any default other than
an empty array. I don't see a compelling reason.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Josh Berkus

Tom Lane wrote:

Oh, and another thing --- should variadic parameters be defaultable?
The current patch doesn't allow it but it looks more like an oversight
than anything that was thought through.  The boundary case for variadic
parameters is a bit weird already:


From a user perspective, if we just told people polymorphic and 
variadic parameters do not accept defaults, and gave them an error 
message, I can't imagine anyone caring.  Then we can support them 
later if someone wants to troubleshoot the corner cases.


--Josh

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Coding TODO for 8.4: Synch Rep

2008-12-16 Thread Robert Treat
On Tuesday 16 December 2008 08:01:49 Alvaro Herrera wrote:
 Fujii Masao wrote:
  Since there are many TODO items, I'm worried about the deadline.
  When is the deadline of this commit fest? December 31st? first half
  of January? ...etc?

 November 1st was the deadline.  We're now in feature freeze.


November 1st was when the commitfest started; I think he was wondering when 
the commitfest was going to end. This being the last commitfest, it runs 
differently from the others: as Alvaro mentioned, we're now in feature freeze, 
but when that will end is still undetermined. In other words, if you have a 
patch for 8.4 that is already submitted but not committed, keep hacking!

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Tom Lane
Gregory Stark st...@enterprisedb.com writes:
 I'm inclined to think an implicit empty array makes the most sense. If a
 function-writer wants to enforce a minimum number of arguments they can check
 and throw an error.

 The question arises though whether it's useful to have any default other than
 an empty array. I don't see a compelling reason.

I'm not sure if that's useful either.  However, I think there are
probably quite a lot of cases where an empty array *isn't* desirable,
and so letting the current behavior alone seems okay, so long as there's
a way to override that and specify default = empty array when you do
want it to be possible.

The other way seems to boil down to "a variadic parameter has an
implicit default that you're not allowed to override", which doesn't
seem tremendously attractive.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Gregory Stark
Tom Lane t...@sss.pgh.pa.us writes:

 Gregory Stark st...@enterprisedb.com writes:
 We could say that changing the type of a default argument for a polymorphic
 argument isn't allowed just like changing the return value.

 The point I was trying to make is that allowing defaults for polymorphic
 args at all is going to cause a very significant amount of work and
 complication, of which enforcing the above check is just a small part.
 I wanted to see a plausible use-case for it before expending that work.

Well honestly I don't see a terribly compelling use case for default arguments
altogether. Obviously they're just a programmer convenience and don't really
let anyone do anything they couldn't do without them. 

So it's not like any use case for default polymorphic arguments is going to be
especially compelling either. But I don't see any reason it'll be any less
useful for polymorphic arguments than any other type.

The fundamental problem with polymorphic parameters and default arguments is
that the type of the argument determines the type of the return value. And the
type of the return value can't change within an existing parse tree -- so it
seems to me that barring changing the type of a default argument for a
polymorphic parameter is precisely targeted to the source and should cover all
problems it causes.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Josh Berkus

Greg,


Well honestly I don't see a terribly compelling use case for default arguments
altogether. Obviously they're just a programmer convenience and don't really
let anyone do anything they couldn't do without them. 


The primary use case is for people who are porting applications from 
other DBMSes, which don't support polymorphic arguments in the first place.


--Josh

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Brendan Jurd
On Wed, Dec 17, 2008 at 11:07 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 ISTM one could make a pretty good argument that this last case should
 succeed, producing an empty-array argument.  If you buy that, then it
 is sensible to forbid defaults for variadics,

Yep, +1 for this approach.  I would intuitively expect that, if I omit
variadic argument(s) when calling a function, that the function ends
up getting an empty array of the appropriate type.

And when writing a variadic function, I wouldn't find it at all
surprising or annoying if I was not allowed to override this default.
In the unlikely case I want my function to do something special when
it gets an empty array, I can write that behaviour into the function
body.

Cheers,
BJ

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Tom Lane
Brendan Jurd dire...@gmail.com writes:
 On Wed, Dec 17, 2008 at 11:07 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 ISTM one could make a pretty good argument that this last case should
 succeed, producing an empty-array argument.  If you buy that, then it
 is sensible to forbid defaults for variadics,

 Yep, +1 for this approach.  I would intuitively expect that, if I omit
 variadic argument(s) when calling a function, that the function ends
 up getting an empty array of the appropriate type.

Actually, I just realized that there's another fly in the ointment:
the current variadic code allows variadic anyarray, which is
equivalent to an appropriate number of anyelement arguments.  If we
allow defaulting then there's a big problem: no principled way to
decide what type the empty array is.

The explicit-default solution would work around that, by making the
user say what type he wants.  However it puts us right back into the
situation of having a default for a polymorphic argument, which I
was hoping to avoid.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Bruce Momjian

Thanks, applied.

---

Robert Lor wrote:
 Bruce Momjian wrote:
 
  I am seeing the following error when linking the backend on a BSD machine:
 
 

 The updated patch attached should fix this problem. My bad for 
 overlooking this.
 
 -Robert
 
 

 Index: src/backend/access/transam/xlog.c
 ===
 RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/xlog.c,v
 retrieving revision 1.322
 diff -u -3 -p -r1.322 xlog.c
 --- src/backend/access/transam/xlog.c 9 Nov 2008 17:51:15 -   1.322
 +++ src/backend/access/transam/xlog.c 16 Dec 2008 19:46:08 -
 @@ -48,6 +48,7 @@
  #include "utils/builtins.h"
  #include "utils/guc.h"
  #include "utils/ps_status.h"
 +#include "pg_trace.h"
  
  
  /* File path names (all relative to $PGDATA) */
 @@ -486,6 +487,8 @@ XLogInsert(RmgrId rmid, uint8 info, XLog
   if (info & XLR_INFO_MASK)
   elog(PANIC, "invalid xlog info mask %02X", info);
  
 + TRACE_POSTGRESQL_XLOG_INSERT(rmid, info);
 +
   /*
* In bootstrap mode, we don't actually log anything but XLOG resources;
* return a phony record pointer.
 @@ -914,6 +917,8 @@ begin:;
   XLogwrtRqst FlushRqst;
   XLogRecPtr  OldSegEnd;
  
 + TRACE_POSTGRESQL_XLOG_SWITCH();
 +
   LWLockAcquire(WALWriteLock, LW_EXCLUSIVE);
  
   /*
 @@ -1313,12 +1318,14 @@ AdvanceXLInsertBuffer(bool new_segment)
 * Have to write buffers while holding insert lock. This is
 * not good, so only write as much as we absolutely must.
 */
 + TRACE_POSTGRESQL_WAL_BUFFER_WRITE_START();
   WriteRqst.Write = OldPageRqstPtr;
   WriteRqst.Flush.xlogid = 0;
   WriteRqst.Flush.xrecoff = 0;
   XLogWrite(WriteRqst, false, false);
   LWLockRelease(WALWriteLock);
   Insert->LogwrtResult = LogwrtResult;
 + TRACE_POSTGRESQL_WAL_BUFFER_WRITE_DONE();
   }
   }
   }
 @@ -5904,6 +5911,8 @@ CreateCheckPoint(int flags)
   if (log_checkpoints)
   LogCheckpointStart(flags);
  
 + TRACE_POSTGRESQL_CHECKPOINT_START(flags);
 +
   /*
* Before flushing data, we must wait for any transactions that are
* currently in their commit critical sections.  If an xact inserted its
 @@ -6069,6 +6078,11 @@ CreateCheckPoint(int flags)
   if (log_checkpoints)
   LogCheckpointEnd();
  
 +TRACE_POSTGRESQL_CHECKPOINT_DONE(CheckpointStats.ckpt_bufs_written,
 +NBuffers, CheckpointStats.ckpt_segs_added,
 +CheckpointStats.ckpt_segs_removed,
 +CheckpointStats.ckpt_segs_recycled);
 +
   LWLockRelease(CheckpointLock);
  }
  
 Index: src/backend/storage/buffer/bufmgr.c
 ===
 RCS file: /projects/cvsroot/pgsql/src/backend/storage/buffer/bufmgr.c,v
 retrieving revision 1.242
 diff -u -3 -p -r1.242 bufmgr.c
 --- src/backend/storage/buffer/bufmgr.c   19 Nov 2008 10:34:52 -  
 1.242
 +++ src/backend/storage/buffer/bufmgr.c   16 Dec 2008 19:46:11 -
 @@ -203,8 +203,7 @@ ReadBuffer_common(SMgrRelation smgr, boo
   if (isExtend)
   blockNum = smgrnblocks(smgr, forkNum);
  
 - TRACE_POSTGRESQL_BUFFER_READ_START(blockNum, smgr->smgr_rnode.spcNode,
 - smgr->smgr_rnode.dbNode, smgr->smgr_rnode.relNode, isLocalBuf);
 + TRACE_POSTGRESQL_BUFFER_READ_START(forkNum, blockNum,
 + smgr->smgr_rnode.spcNode, smgr->smgr_rnode.dbNode, smgr->smgr_rnode.relNode,
 + isLocalBuf);
  
   if (isLocalBuf)
   {
 @@ -253,7 +252,7 @@ ReadBuffer_common(SMgrRelation smgr, boo
   if (VacuumCostActive)
   VacuumCostBalance += VacuumCostPageHit;
  
 - TRACE_POSTGRESQL_BUFFER_READ_DONE(blockNum,
 + TRACE_POSTGRESQL_BUFFER_READ_DONE(forkNum, blockNum,
   smgr->smgr_rnode.spcNode,
   smgr->smgr_rnode.dbNode,
   smgr->smgr_rnode.relNode, isLocalBuf, found);
 @@ -380,9 +379,9 @@ ReadBuffer_common(SMgrRelation smgr, boo
   if (VacuumCostActive)
   VacuumCostBalance += VacuumCostPageMiss;
  
 - TRACE_POSTGRESQL_BUFFER_READ_DONE(blockNum, smgr->smgr_rnode.spcNode,
 - smgr->smgr_rnode.dbNode, smgr->smgr_rnode.relNode,
 - isLocalBuf, found);
 + TRACE_POSTGRESQL_BUFFER_READ_DONE(forkNum, blockNum,
 + smgr->smgr_rnode.spcNode, 

Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
Gregory Stark st...@enterprisedb.com writes:
 So it's not like any use case for default polymorphic arguments is going to be
 especially compelling either. But I don't see any reason it'll be any less
 useful for polymorphic arguments than any other type.

Well, the reason it seems funny to me is that the point of a polymorphic
argument is to be datatype-agnostic, and as soon as you put in a default
value of a particular type you're letting go of that principle.

The only case I can imagine being actually useful is where the default
is an untyped null, and you have a non-defaultable anyelement argument
that drives determination of what the defaulted one is, ie something
like

create function array2 (f1 anyelement, f2 anyelement = null)
returns anyarray
as 'select array[$1,$2]' language sql;

Although this seems well-defined in principle, I'm not sure if it
actually works with the current patch; and it'd be even trickier
if we try to put in some code that says the default's type can't
change.  A null constant has the same type as an unmarked string
literal (namely UNKNOWN) but IIRC they don't really behave the same
for type resolution purposes.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [BUG?] UPDATE with RETURNING tableoid

2008-12-16 Thread KaiGai Kohei
Tom Lane wrote:
 KaiGai Kohei kai...@ak.jp.nec.com writes:
 I found a strange behavior on the recent v8.4devel with/without
 SE-PostgreSQL patch set. Can you reproduce the following behavior?
 
 When I use UPDATE statement with RETURNING clause which contains
 references to tableoid system column, it returns InvalidOid.
 (The correct value is 16384 in this case.)
 However, RETURNING clause with INSERT/DELETE statement works well.
 
 Confirmed here, all the way back to 8.2.  I concur it's a bug.

http://git.postgresql.org/?p=postgresql.git;a=commitdiff;h=ab7c31031eb4c9fa73d92c19d55a8c7f8bbf4196

Thanks, I confirmed the update eliminates the bug.
-- 
OSS Platform Development Division, NEC
KaiGai Kohei kai...@ak.jp.nec.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Robert Haas
On Tue, Dec 16, 2008 at 3:25 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Does anyone think this is either unsurprising or desirable?

That's horrible.

I wonder whether the whole architecture is wrong here.  Perhaps when a
function is created with N arguments of which M have default values,
we should actually create M entries in pg_proc: one for each possible
number of arguments from N-M up through N.  The one with N arguments
would be the main entry, and the rest would be dependent entries that
would get dropped if the main entry were dropped (similar to the way
sequences for serial columns depend on the parent table).  If one of
the dependent entries conflicted with an existing entry, then CREATE
FUNCTION would simply fail.

I think this would kill all of the problems reported thus far at one
blow.  There wouldn't be any need for code to resolve ambiguous
function calls because there wouldn't be any ambiguous function calls,
or at least no more than what already exists in 8.3.  The problem you
report here would go away because the view definition would match one
of the dummy entries.  Removing a necessary default value would remove
that dummy entry, which would be caught by the existing dependency
stuff.  Changing a default would amount to changing the definition of
a dummy entry as if by CREATE OR REPLACE FUNCTION, which would work
just as expected.
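
Roughly speaking, the proposal amounts to registering the shorter
signatures explicitly - something that can already be emulated by hand
with wrapper functions (names and bodies invented for illustration):

create function foo(a int, b int) returns int
    language sql as 'select $1 + $2';

-- the would-be "dummy" entry for the defaulted call
create function foo(a int) returns int
    language sql as 'select foo($1, 42)';

select foo(10);   -- resolves to foo(integer) under the existing rules

Under the proposal those extra pg_proc rows would be created, updated
and dropped automatically instead of by hand.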

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DTrace probes patch

2008-12-16 Thread Fujii Masao
Hi,

On Wed, Dec 17, 2008 at 4:53 AM, Robert Lor robert@sun.com wrote:
 @@ -1313,12 +1318,14 @@ AdvanceXLInsertBuffer(bool new_segment)
 * Have to write buffers while holding insert lock. This is
 * not good, so only write as much as we absolutely must.
 */
 +   TRACE_POSTGRESQL_WAL_BUFFER_WRITE_START();
WriteRqst.Write = OldPageRqstPtr;
WriteRqst.Flush.xlogid = 0;
WriteRqst.Flush.xrecoff = 0;
XLogWrite(WriteRqst, false, false);
LWLockRelease(WALWriteLock);
Insert->LogwrtResult = LogwrtResult;
 +   TRACE_POSTGRESQL_WAL_BUFFER_WRITE_DONE();

Why is TRACE_POSTGRESQL_WAL_BUFFER_WRITE_START/DONE called
only in AdvanceXLInsertBuffer? Doesn't that mean we can trace only a part of the WAL buffer writes?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 I wonder whether the whole architecture is wrong here.  Perhaps when a
 function is created with N arguments of which M have default values,
 we should actually create M entries in pg_proc: one for each possible
 number of arguments from N-M up through N.

That's been considered and rejected before, in the context of the
variadic-function patch which has a lot of the same issues.  What it
mostly does is bloat pg_proc.

 I think this would kill all of the problems reported thus far at one
 blow.

No, it doesn't resolve any of them ... particularly not the ones
associated with defaults for polymorphics.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Partitioning wiki page

2008-12-16 Thread Emmanuel Cecchet

Hi all,

I have put a first draft of the wiki page dedicated to the table 
partitioning development at 
http://wiki.postgresql.org/wiki/Table_partitioning


The page is still incomplete and needs much more work, but it should be a 
good starting point for discussions and a way to move forward.


Thanks,
Emmanuel

--
Emmanuel Cecchet
Aster Data Systems
Web: http://www.asterdata.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Function with defval returns error

2008-12-16 Thread Tom Lane
Pavel Stehule pavel.steh...@gmail.com writes:
 I'll write a patch that blocks creating any ambiguous overloading.

Don't bother --- it's a useless solution because you can't guarantee
that concurrent insertions into pg_proc won't create an ambiguous
situation.  You have to define the resolution rules to cope, or else
generate an ambiguous-function error on the fly when the currently
seen contents of pg_proc create an ambiguous situation.

In which connection, there's yet another thing I don't like about
the current patch behavior.  Given

create function foo (f1 int, f2 int = 42)
create function foo (f1 int, f2 numeric = 42.0)

select foo(10)

I accept that there's nothing much we can do except throw an ambiguous
function error.  However, the patch also throws an error for

create function foo (f1 int, f2 int = 42)
create function foo (f1 int, f2 int = 42, f3 int = 43)

select foo(10)

It seems to me that we could usefully consider that the function with
fewer defaulted arguments takes precedence.  In particular, the limiting
case of that is that a function with no defaulted arguments takes
precedence over those with some.  This case *must* work:

create function foo (f1 int)
create function foo (f1 int, f2 int = 42)

select foo(10)

and it seems like just an arbitrary exception if you don't have a rule
about preferring fewer defaults over more.
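
A runnable restatement of that last case (the function bodies are
invented here purely so the statements execute); under the rule being
proposed, the call would resolve to the definition that needs no
defaults instead of raising an ambiguous-function error:

create function foo (f1 int) returns text
    language sql as $$ select 'no default needed' $$;
create function foo (f1 int, f2 int = 42) returns text
    language sql as $$ select 'default used' $$;

select foo(10);   -- proposed: picks foo(integer), ignoring the longer form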

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep: First Thoughts on Code

2008-12-16 Thread Fujii Masao
Hi,

On Tue, Dec 16, 2008 at 7:21 PM, Simon Riggs si...@2ndquadrant.com wrote:

 On Tue, 2008-12-16 at 12:36 +0900, Fujii Masao wrote:

  So from my previous list
 
  1. We sent the message to standby (A)
  2. We received the message on standby
  3. We wrote the WAL to the WAL file (B)
  4. We fsync'd the WAL file (C)
  5. We CRC checked the WAL commit record
  6. We applied the WAL commit record
 
  Please could you also add an option #4, i.e. add the *option* to fsync
  the WAL to disk at commit time also. That requires us to add a third
  option to synchronous_replication parameter.

 Should the above option be configured on the primary or on the standby?
 Configuring it on the primary makes it possible to vary it from transaction
 to transaction. On the other hand, configuring it on the standby would let
 it be chosen for every standby (in the future).

 I prefer the latter, and thought that it should be added to recovery.conf.
 I mean, synchronous_replication would identify only whether commit waits for
 replication (if the name is confusing, I would rename it), while the above
 options (#1-#6) would be chosen in recovery.conf. What is your opinion?

 No, we've been through that loop already a few months back:
 Transaction-controlled robustness.

 It should be up to the client on the primary to decide how much waiting
 they would like to perform in order to provide a guarantee. A change of
 setting on the standby should not be allowed to alter the performance or
 durability on the primary.

OK. I will extend synchronous_replication, make walsender send XLOG
with synchronization mode flag and make walreceiver perform according
to the flag.


 My perspective is that synchronous_replication specifies how long to
 wait. Current settings are off (don't wait) or on (meaning wait
 until point #3). So I think we should change this to a list of options
 to allow people to more carefully select how much waiting is required.

In the latest patch, "off" still keeps us waiting for replication in some
cases, e.g. forceSyncCommit = true. This is analogous to the way
synchronous_commit works. When "off" keeps us waiting for
replication, which option (#1-#6) should we choose? Should it be
user-configurable (even though that doubles the number of parameter values)?
Should we hardcode #3? Or should "off" never keep us waiting for
replication?

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Looking for someone with MinGW

2008-12-16 Thread ITAGAKI Takahiro

Andrew Dunstan and...@dunslane.net wrote:

  Thanks. The backtrace is kind of strange, but I might have found it.
  Could you please update from CVS and re-run?
 
 same result ;-(

Hi, I found the cause.

The segfault comes from the following lines.
[ecpg/test/connect/test1.pgc]
exec sql connect to tcp:postgresql://localhost/ user connectdb;
exec sql disconnect;

-> ECPGdisconnect()
-> ecpg_finish()
-> ecpg_log("ecpg_finish: connection %s closed\n", act->name);
HERE  -> vfprintf(debugstream, fmt, ap);

The actual error occurs in vfprintf() because act->name can be NULL.
sprintf(..., "%s", NULL) happens to work on some platforms (the result is '(null)'),
but it crashes on Windows (msvcrt). We need to avoid passing NULL as an
argument for a %s format in the printf family of functions.

The attached patch fixes the segfault. Regression tests can finish
successfully but there is still a difference. The diff seems to be
trivial and come from error message changes.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



regression.diffs
Description: Binary data


ecpg-fix.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Looking for someone with MinGW

2008-12-16 Thread Tom Lane
ITAGAKI Takahiro itagaki.takah...@oss.ntt.co.jp writes:
 Hi, I found the cause.
 ...
 The actual error occurs in vfprintf() because act->name can be NULL.
 sprintf(..., "%s", NULL) happens to work on some platforms (the result is '(null)'),
 but it crashes on Windows (msvcrt). We need to avoid passing NULL as an
 argument for a %s format in the printf family of functions.

Hmm, Windows is hardly the only platform where that would crash.
I'm surprised we don't have more buildfarm members complaining about
this.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Looking for someone with MinGW

2008-12-16 Thread Gregory Stark
Tom Lane t...@sss.pgh.pa.us writes:

 ITAGAKI Takahiro itagaki.takah...@oss.ntt.co.jp writes:
 Hi, I found the cause.
 ...
 The actual error occurs in vfprintf() because act->name can be NULL.
 sprintf(..., "%s", NULL) happens to work on some platforms (the result is '(null)'),
 but it crashes on Windows (msvcrt). We need to avoid passing NULL as an
 argument for a %s format in the printf family of functions.

 Hmm, Windows is hardly the only platform where that would crash.
 I'm surprised we don't have more buildfarm members complaining about
 this.

Actually I thought the behaviour of spitting out (null) was unique to glibc.

Don't we have plenty of BSD and other implementations?


-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Brendan Jurd
On Wed, Dec 17, 2008 at 12:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Actually, I just realized that there's another fly in the ointment:
 the current variadic code allows variadic anyarray, which is
 equivalent to an appropriate number of anyelement arguments.  If we
 allow defaulting then there's a big problem: no principled way to
 decide what type the empty array is.

I see your point.  Polymorphic + variadic is tricky.

Because Postgres cannot determine the default type for the empty
variadic anyarray argument, I think it makes sense to throw an error
in this case.

So if I had these two functions ...

var1(a int, b variadic int[])
var2(a int, b variadic anyarray)

... it would be okay to write var1(8), which resolves as var1(8,
array[]::int[]).  But if I tried to write var2(8) I'd get an error.
Maybe something like "cannot determine type of missing variadic
arguments".

NB I have no idea whether such an approach would be practical to
implement, but I think it's the least astonishing set of behaviours.

Cheers,
BJ

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Robert Haas
On Tue, Dec 16, 2008 at 9:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I wonder whether the whole architecture is wrong here.  Perhaps when a
 function is created with N arguments of which M have default values,
 we should actually create M entries in pg_proc: one for each possible
 number of arguments from N-M up through N.

 That's been considered and rejected before, in the context of the
 variadic-function patch which has a lot of the same issues.  What it
 mostly does is bloat pg_proc.

Only if you have a large number of functions with a large number of
optional arguments each.  That's possible, I suppose, but it hardly
seems likely, or worth worrying about.

 I think this would kill all of the problems reported thus far at one
 blow.

 No, it doesn't resolve any of them ... particularly not the ones
 associated with defaults for polymorphics.

I think that's hyperbole.  You would probably still need to forbid
non-polymorphic defaults for polymorphic parameters (you might be able
to make NULL work, and maybe the empty array for anyarray...  not
sure), but I think that most of the other issues you raised would be
addressed by my proposal.  You may hate it anyway; I'm OK with that.
:-)

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Variadic parameters vs parameter defaults

2008-12-16 Thread Tom Lane
Brendan Jurd dire...@gmail.com writes:
 So if I had these two functions ...

 var1(a int, b variadic int[])
 var2(a int, b variadic anyarray)

 ... it would be okay to write var1(8), which resolves as var1(8,
 array[]::int[]).  But if I tried to write var2(8) I'd get an error.
 Maybe something like cannot determine type of missing variadic
 arguments.

Well, we could unify these behaviors if we insisted on an explicit
default to omit the argument in both cases.

var1(a int, b variadic int[] default '{}'::int[])
var2(a int, b variadic anyarray default '{}'::text[]) -- perhaps

In both cases, supplying a single argument would be legal if and only
if you provided a default for the variadic parameter.  As soon as you
give two arguments, the default isn't relevant anymore.  This method
eliminates the discrepancy between anyarray and other types of variadic
parameters, and it leaves the door open for someone to use something
besides an empty array as the default.  (Who are we to say that such a
thing is never useful?  NULL seems like a possibly useful default for
instance.)  I think it also makes the variadic and default features
a bit more orthogonal.

This still leaves us with the annoyance of having to prevent changes in
the actual datatype of a default associated with a polymorphic parameter;
but that's just some implementation tedium, and I'm beginning to find
it more attractive than the alternatives.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Another issue in default-values patch: defaults expanded too soon

2008-12-16 Thread Pavel Stehule
2008/12/17 Robert Haas robertmh...@gmail.com:
 On Tue, Dec 16, 2008 at 9:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 I wonder whether the whole architecture is wrong here.  Perhaps when a
 function is created with N arguments of which M have default values,
 we should actually create M entries in pg_proc: one for each possible
 number of arguments from N-M up through N.

 That's been considered and rejected before, in the context of the
 variadic-function patch which has a lot of the same issues.  What it
 mostly does is bloat pg_proc.

 Only if you have a large number of functions with a large number of
 optional arguments each.  That's possible, I suppose, but it hardly
 seems likely, or worth worrying about.

 I think this would kill all of the problems reported thus far at one
 blow.

 No, it doesn't resolve any of them ... particularly not the ones
 associated with defaults for polymorphics.

 I think that's hyperbole.  You would probably still need to forbid
 non-polymorphic defaults for polymorphic parameters (you might be able
 to make NULL work, and maybe the empty array for anyarray...  not
 sure), but I think that most of the other issues you raised would be
 addressed by my proposal.  You may hate it anyway; I'm OK with that.
 :-)

it's not a good solution; the problem is elsewhere - we raise the exception
about an ambiguous call too early, when we don't yet have all the knowledge.

Pavel


 ...Robert

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: pre-upgrade page reservation

2008-12-16 Thread Heikki Linnakangas

Zdenek Kotala wrote:
BTree space reservation is more complicated. 


Do you need to pre-reserve the space for b-tree? I think you can just 
split it at upgrade, in the new version. The problem with doing that for 
heaps is that to move a heap tuple you need to update the index 
pointers, but for indexes there's no such restriction.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers