[HACKERS] Range Types - representation and alignment

2011-02-09 Thread Jeff Davis
After some significant prior discussion:


Here is what I've found:

Doing the simple thing is extremely wasteful. Let's take TSRANGE, for
instance:
   4 bytes type oid
   1 flag byte
   8 bytes lower bound
   8 bytes upper bound

But when constructing the value itself, it starts off with VARHDRSZ
bytes. It may later be compacted to a short (1 byte) header, but it
starts off as 4. So that means:

  4 bytes VARHDRSZ
  4 bytes type oid
  1 flag byte
  7 pad bytes to get back on a 'd' align boundary
  8 bytes lower bound
  8 bytes upper bound

Total: 32 bytes. When compacted into the tuple, it might be 29. We can't
skip those pad bytes, because we need to honor the subtype's alignment.

If we move the flag byte to the end, the representation works out much
better:

  4 bytes VARHDRSZ
  4 bytes type oid
  8 bytes lower bound
  8 bytes upper bound
  1 flag byte

Total: 25 bytes, turns into about 22 bytes when compacted into the
tuple. It's a little awkward to read that way, but the savings are worth
it. The flag byte is necessary to know whether there are lower and/or
upper bounds, so we need to peek ahead to length - 1, and then continue
scanning forward through the attributes.

So, I'll implement this approach. 22 bytes represents 37.5% overhead
above the good ol' PERIOD data type (a lean 16 bytes), but we can make
up some of that if using unbounded ranges. For instance, a half-open
range like [5, INF) would only take 14 bytes.
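
For reference, the 14 bytes come from: 1-byte short varlena header + 4
(type oid) + 8 (lower bound) + 1 (flag byte). Once the type exists, the
stored sizes are easy to spot-check with something like this (a sketch;
the tsrange literals assume the input syntax under discussion):

  CREATE TEMP TABLE r AS
    SELECT '[2011-01-01, 2011-02-01)'::tsrange AS bounded,
           '[2011-01-01,)'::tsrange            AS half_open;

  SELECT pg_column_size(bounded)   AS bounded_bytes,    -- expect ~22
         pg_column_size(half_open) AS half_open_bytes   -- expect ~14
    FROM r;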

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,

2011-02-09 Thread Magnus Hagander
On Wed, Feb 9, 2011 at 07:22, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Feb 9, 2011 at 2:02 PM, Simon Riggs si...@2ndquadrant.com wrote:
 Why did you change the default to on? This would surprise people who are
 used to PITR.

 You pointed out that the code did not match the documented default. So I
 made them match according to the docs.

 Well, I meant changing the docs rather than the code.

 Making it pause at target by default is more natural behaviour, even if
 it is a change of behaviour. It can waste a lot of time if it leaves
 recovery at the wrong point so I don't see the change as a bad one? Only
 PITR is affected, not replication or standalone operation.

 I agree that the new option is useful to reduce the waste of time as you 
 described.
 But I'm still not sure that the change of default behavior is better.

FWIW, I like the change of behavior. We obviously need to put it
prominently in the release notes, but it makes life significantly
easier.

 Because I can
 easily imagine the case where a user feels confused about the pause of PITR
 when he starts PITR as he did in previous versions. It would take some time for
 him to learn what to do in that situation (i.e., execute 
 pg_xlog_replay_resume).

 On second thought, I think it's useful to emit the NOTICE message when
 recovery reaches the pause point, as follows.

    NOTICE: Recovery will not complete until pg_xlog_replay_resume() is called.

Combined with this, yes.

I was also worried about the non-hot-standby case, but I see that the
patch makes sure you can't enable pause when not in hot standby mode.
Which in itself might be surprising - perhaps we need a NOTICE for
when that happens as well?

And it definitely needs to be mentioned in the docs for
pause_at_recovery_target that it only works in hot standby.
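
For reference, the operator-facing sequence on a paused standby would look
something like this (a sketch; pg_xlog_replay_resume() is the function named
in this thread, pg_is_xlog_replay_paused() its companion status check):

  SELECT pg_is_xlog_replay_paused();  -- true once the recovery target is reached
  SELECT pg_xlog_replay_resume();     -- lets recovery finish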

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types

2011-02-09 Thread Jeff Davis
Updated patch.

Changes:
  * Addressed Erik's review comments.
  * Fixed issue with range @> elem found by Erik.
  * Merged with latest HEAD
  * Changed representation to be more efficient and more robust
(could use some testing though, because I just did this tonight)

TODO:
  * send/recv -- just noticed this tonight, no reason not to do it

Open Items:
  * Maybe typmod
  * grammar -- ask for btree opclass, or compare function?
  * catalog -- store btree opclass, or compare function?
  * should non-superusers be able to create range types?
  * constructor issues I just posted about
  * SQL length function -- immutable/stable/volatile?

As always, my repo is here:

http://git.postgresql.org/gitweb?p=users/jdavis/postgres.git;a=log;h=refs/heads/rangetypes

Regards,
Jeff Davis


rangetypes-20110208.patch.gz
Description: GNU Zip compressed data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PERFORM] pgbench to the MAXINT

2011-02-09 Thread Greg Smith
Attached is an updated 64-bit pgbench patch that works as expected for 
all of the most common pgbench operations, including support for scales 
above the previous boundary of just over 21,000.  Here's the patched 
version running against a 303GB database with a previously unavailable 
scale factor:


$ pgbench -T 300 -j 2 -c 4 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 25000
query mode: simple
number of clients: 4
number of threads: 2
duration: 300 s
number of transactions actually processed: 21681
tps = 72.24 (including connections establishing)
tps = 72.250610 (excluding connections establishing)

And some basic Q/A that the values it touched were in the right range:

$ psql -d pgbench -c "select min(aid),max(aid) from pgbench_accounts;"

 min |    max
-----+------------
   1 | 2500000000

$ psql -d pgbench -c "select min(aid),max(aid),count(*) from 
pgbench_accounts where abalance!=0;"

  min  |    max     | count
-------+------------+-------
 51091 | 2499989587 | 21678

(This system was doing 300MB/s on reads while executing that count, and 
it still took 19 minutes)


The clever way Euler updated the patch, you don't pay for the larger 
on-disk data (bigint columns) unless you use a range that requires it, 
which greatly reduces the number of ways the test results can suffer 
from this change.  I felt the way that was coded was a bit more 
complicated than it needed to be though, as it computed the point where 
that switch happens at runtime, based on the true size of the integers.  
I took that complexity out and just put a hard line in there instead:  
if scale >= 20000, you get bigints.  That's not very different from the 
real limit, and it made documenting when the switch happens easy to 
write and to remember.
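
For anyone checking the math on that hard line: pgbench_accounts holds
scale * 100,000 rows, so aid outgrows a 32-bit integer past scale 21474.
A sketch of the threshold and of what initialization effectively creates
above it (column list as in stock pgbench; only aid needs to widen):

  -- largest scale where aid still fits in a 32-bit int
  SELECT 2147483647 / 100000;   -- 21474

  -- the accounts table once scale >= 20000
  CREATE TABLE pgbench_accounts (
      aid      bigint  NOT NULL,
      bid      integer,
      abalance integer,
      filler   character(84)
  );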


The main performance concern with this change was whether using int64 
more internally for computations would slow things down on a 32-bit 
system.  I thought I'd test that on my few-years-old laptop.  It turns 
out that even though I've been running an i386 Linux on here, it's 
actually a 64-bit CPU.  (I think the fact that it has a 32-bit install 
may be an artifact of Adobe Flash install issues, sadly.)  So this may not be as 
good of a test case as I'd hoped.  Regardless, running a test aimed to 
stress simple SELECTs, the thing I'd expect to suffer most from 
additional CPU overhead, didn't show any difference in performance:


$ createdb pgbench
$ pgbench -i -s 10 pgbench
$ psql -c "show shared_buffers"
 shared_buffers
----------------
 256MB
(1 row)
$ pgbench -S -j 2 -c 4 -T 60 pgbench

          i386    x86_64
          6932      6924
          6923      6926
          6923      6922
          6688      6772
          6914      6791
          6902      6916
          6917      6909
          6943      6837
          6689      6744

min       6688      6744
max       6943      6926
average   6870      6860

Given the noise level of pgbench tests, I'm happy saying that is the 
same speed.  I suspect the real overhead in pgbench's processing relates 
to how it is constantly parsing text to turn it into statements, and 
that the size of the integers it uses is barely detectable over that.


So...where does that leave this patch?  I feel that pgbench will become 
less relevant very quickly in 9.1 unless something like this is 
committed.  And there don't seem to be significant downsides to this in 
terms of performance.  There are however a few rough points left in here 
that might raise concern:


1) A look into the expected range of the rand() function suggests the 
glibc implementation normally provides 30 bits of resolution, so about 1 
billion numbers.  You'll have 1B rows in a pgbench database once the 
scale goes over 10,000.  So without a major overhaul of how random 
number generation is treated here, people can expect the distribution of 
rows touched by a test run to get less even once the database scale gets 
very large.  I added another warning paragraph to the end of the docs in 
this update to mention this.  Long-term, I suspect we may need to adopt 
a superior 64-bit RNG approach, something like a Mersenne Twister 
perhaps.  That's a bit more than can be chewed on during 9.1 development 
though.
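
The back-of-the-envelope numbers behind that warning, as a quick check:

  SELECT power(2, 30)::bigint   AS rand_resolution,      -- 1073741824 values
         10000::bigint * 100000 AS rows_at_scale_10000;  -- 1000000000 rows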


2) I'd rate the odds as good that there are one or more corner-case bugs in 
\setrandom or \setshell I haven't found yet, just from the way that code 
was converted.  Those have some changes I haven't specifically tested 
exhaustively yet.  I don't see any issues when running the two most 
common pgbench tests, but that doesn't mean every part of the 32-to-64-bit 
conversion was done correctly.


Given how I use pgbench, for data generation and rough load testing, I'd 
say neither of those concerns outweighs the need to expand the size 
range of this program.  I would be happy to see this go in, followed by 
some alpha and beta testing aimed to see if any of the rough spots I'm 
concerned about actually appear.  Unfortunately I can't fit all of those 
tests in right now, as throwing around one of these 300GB data sets is 
painful--when you're only 

Re: [HACKERS] postponing some large patches to 9.2

2011-02-09 Thread Jeff Davis
On Tue, 2011-02-08 at 15:10 -0500, Chris Browne wrote:
 It's more than a bit sad...  The RangeType change has the massive merit
 of enabling some substantial development changes, where we can get rid
 of whole classes of comparison clauses, and hopefully whole classes of
 range errors.  That was my favorite would-be feature for 9.1.

I appreciate the support.

If you take the feature for a quick spin before the next commitfest,
that would be a big help. If I get it in the first commitfest of 9.2
that may mean some follow-up features, like RANGE KEYs/FKs, and maybe
even RANGE JOIN might have a chance for 9.2 as well. Or, maybe some
other features might find it useful, like partitioning or audit logs.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pl/python do not delete function arguments

2011-02-09 Thread Jan Urbański
On 09/02/11 04:52, Hitoshi Harada wrote:
 2010/12/31 Jan Urbański wulc...@wulczer.org:
 (continuing the flurry of patches)

 Here's a patch that stops PL/Python from removing the function's
 arguments from its globals dict after calling it. It's
 an incremental patch on top of the plpython-refactor patch sent in
 http://archives.postgresql.org/message-id/4d135170.3080...@wulczer.org.

 Git branch for this patch:
 https://github.com/wulczer/postgres/tree/dont-remove-arguments

 Apart from being useless, as the whole dict is unreffed and thus freed
 in PLy_procedure_delete, removing args actively breaks things for
 recursive invocation of the same function. The recursive callee after
 returning will remove the args from globals, and subsequent access to
 the arguments in the caller will cause a NameError (see new regression
 test in patch).
 
 I've reviewed this. The patch is old enough to be rejected by patch
 command, but I managed to apply it by hand.
 It compiles clean. Added tests pass.
 I created a fibonacci function similar to recursion_test in the patch
 and confirmed the recursion raises an error on 9.0 but not on 9.1.
 No doc comes with the patch, since this change removes an unnecessary
 internal optimization.
 
 Ready for Committer

Thanks,

patch merged with HEAD attached.

Jan
diff --git a/src/pl/plpython/expected/plpython_spi.out b/src/pl/plpython/expected/plpython_spi.out
index 7f4ae5c..cb11f60 100644
*** a/src/pl/plpython/expected/plpython_spi.out
--- b/src/pl/plpython/expected/plpython_spi.out
*** CONTEXT:  PL/Python function result_nro
*** 133,135 
--- 133,163 
   2
  (1 row)
  
+ --
+ -- check recursion with same argument does not clobber globals
+ --
+ CREATE FUNCTION recursion_test(n integer) RETURNS integer
+ AS $$
+ if n in (0, 1):
+     return 1
+ 
+ return n * plpy.execute("select recursion_test(%d) as result" % (n - 1))[0]["result"]
+ $$ LANGUAGE plpythonu;
+ SELECT recursion_test(5);
+  recursion_test 
+ ----------------
+             120
+ (1 row)
+ 
+ SELECT recursion_test(4);
+  recursion_test 
+ ----------------
+              24
+ (1 row)
+ 
+ SELECT recursion_test(1);
+  recursion_test 
+ ----------------
+               1
+ (1 row)
+ 
diff --git a/src/pl/plpython/plpython.c b/src/pl/plpython/plpython.c
index fff7de7..61ba793 100644
*** a/src/pl/plpython/plpython.c
--- b/src/pl/plpython/plpython.c
*** static Datum PLy_function_handler(Functi
*** 318,324 
  static HeapTuple PLy_trigger_handler(FunctionCallInfo fcinfo, PLyProcedure *);
  
  static PyObject *PLy_function_build_args(FunctionCallInfo fcinfo, PLyProcedure *);
- static void PLy_function_delete_args(PLyProcedure *);
  static PyObject *PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *,
  	   HeapTuple *);
  static HeapTuple PLy_modify_tuple(PLyProcedure *, PyObject *,
--- 318,323 
*** PLy_function_handler(FunctionCallInfo fc
*** 1036,1049 
  			 */
  			plargs = PLy_function_build_args(fcinfo, proc);
  			plrv = PLy_procedure_call(proc, args, plargs);
- 			if (!proc->is_setof)
- 			{
- /*
-  * SETOF function parameters will be deleted when last row is
-  * returned
-  */
- PLy_function_delete_args(proc);
- 			}
  			Assert(plrv != NULL);
  		}
  
--- 1035,1040 
*** PLy_function_handler(FunctionCallInfo fc
*** 1101,1108 
  Py_XDECREF(plargs);
  Py_XDECREF(plrv);
  
- PLy_function_delete_args(proc);
- 
  if (has_error)
  	ereport(ERROR,
  			(errcode(ERRCODE_DATA_EXCEPTION),
--- 1092,1097 
*** PLy_function_build_args(FunctionCallInfo
*** 1310,1329 
  	return args;
  }
  
- 
- static void
- PLy_function_delete_args(PLyProcedure *proc)
- {
- 	int			i;
- 
- 	if (!proc->argnames)
- 		return;
- 
- 	for (i = 0; i < proc->nargs; i++)
- 		if (proc->argnames[i])
- 			PyDict_DelItemString(proc->globals, proc->argnames[i]);
- }
- 
  /*
   * Decide whether a cached PLyProcedure struct is still valid
   */
--- 1299,1304 
diff --git a/src/pl/plpython/sql/plpython_spi.sql b/src/pl/plpython/sql/plpython_spi.sql
index 7f8f6a3..3b65f95 100644
*** a/src/pl/plpython/sql/plpython_spi.sql
--- b/src/pl/plpython/sql/plpython_spi.sql
*** else:
*** 105,107 
--- 105,123 
  $$ LANGUAGE plpythonu;
  
  SELECT result_nrows_test();
+ 
+ 
+ --
+ -- check recursion with same argument does not clobber globals
+ --
+ CREATE FUNCTION recursion_test(n integer) RETURNS integer
+ AS $$
+ if n in (0, 1):
+     return 1
+ 
+ return n * plpy.execute("select recursion_test(%d) as result" % (n - 1))[0]["result"]
+ $$ LANGUAGE plpythonu;
+ 
+ SELECT recursion_test(5);
+ SELECT recursion_test(4);
+ SELECT recursion_test(1);

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pl/python invalidate functions with composite arguments

2011-02-09 Thread Jan Urbański
On 27/01/11 22:42, Jan Urbański wrote:
 On 23/12/10 14:50, Jan Urbański wrote:
 Here's a patch implementing properly invalidating functions that have
 composite type arguments after the type changes, as mentioned in
 http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
 an incremental patch on top of the plpython-refactor patch sent earlier.
 
 Updated to master.

Again.

diff --git a/src/pl/plpython/expected/plpython_types.out b/src/pl/plpython/expected/plpython_types.out
index e74a400..d5f2c70 100644
*** a/src/pl/plpython/expected/plpython_types.out
--- b/src/pl/plpython/expected/plpython_types.out
*** SELECT * FROM test_type_conversion_array
*** 603,608 
--- 603,660 
  ERROR:  return value of function with array return type is not a Python sequence
  CONTEXT:  while creating return value
  PL/Python function "test_type_conversion_array_error"
+ ---
+ --- Composite types
+ ---
+ CREATE TABLE employee (
+ name text,
+ basesalary integer,
+ bonus integer
+ );
+ INSERT INTO employee VALUES ('John', 100, 10), ('Mary', 200, 10);
+ CREATE OR REPLACE FUNCTION test_composite_table_input(e employee) RETURNS integer AS $$
+ return e['basesalary'] + e['bonus']
+ $$ LANGUAGE plpythonu;
+ SELECT name, test_composite_table_input(employee.*) FROM employee;
+  name | test_composite_table_input 
+ ------+----------------------------
+  John |                         110
+  Mary |                         210
+ (2 rows)
+ 
+ ALTER TABLE employee DROP bonus;
+ SELECT name, test_composite_table_input(employee.*) FROM employee;
+ ERROR:  KeyError: 'bonus'
+ CONTEXT:  PL/Python function "test_composite_table_input"
+ ALTER TABLE employee ADD bonus integer;
+ UPDATE employee SET bonus = 10;
+ SELECT name, test_composite_table_input(employee.*) FROM employee;
+  name | test_composite_table_input 
+ ------+----------------------------
+  John |                         110
+  Mary |                         210
+ (2 rows)
+ 
+ CREATE TYPE named_pair AS (
+ i integer,
+ j integer
+ );
+ CREATE OR REPLACE FUNCTION test_composite_type_input(p named_pair) RETURNS integer AS $$
+ return sum(p.values())
+ $$ LANGUAGE plpythonu;
+ SELECT test_composite_type_input(row(1, 2));
+  test_composite_type_input 
+ ---------------------------
+                          3
+ (1 row)
+ 
+ ALTER TYPE named_pair RENAME TO named_pair_2;
+ SELECT test_composite_type_input(row(1, 2));
+  test_composite_type_input 
+ ---------------------------
+                          3
+ (1 row)
+ 
  --
  -- Prepared statements
  --
diff --git a/src/pl/plpython/expected/plpython_types_3.out b/src/pl/plpython/expected/plpython_types_3.out
index 577c1ff..ca81b08 100644
*** a/src/pl/plpython/expected/plpython_types_3.out
--- b/src/pl/plpython/expected/plpython_types_3.out
*** SELECT * FROM test_type_conversion_array
*** 603,608 
--- 603,660 
  ERROR:  return value of function with array return type is not a Python sequence
  CONTEXT:  while creating return value
  PL/Python function "test_type_conversion_array_error"
+ ---
+ --- Composite types
+ ---
+ CREATE TABLE employee (
+ name text,
+ basesalary integer,
+ bonus integer
+ );
+ INSERT INTO employee VALUES ('John', 100, 10), ('Mary', 200, 10);
+ CREATE OR REPLACE FUNCTION test_composite_table_input(e employee) RETURNS integer AS $$
+ return e['basesalary'] + e['bonus']
+ $$ LANGUAGE plpython3u;
+ SELECT name, test_composite_table_input(employee.*) FROM employee;
+  name | test_composite_table_input 
+ ------+----------------------------
+  John |                         110
+  Mary |                         210
+ (2 rows)
+ 
+ ALTER TABLE employee DROP bonus;
+ SELECT name, test_composite_table_input(employee.*) FROM employee;
+ ERROR:  KeyError: 'bonus'
+ CONTEXT:  PL/Python function "test_composite_table_input"
+ ALTER TABLE employee ADD bonus integer;
+ UPDATE employee SET bonus = 10;
+ SELECT name, test_composite_table_input(employee.*) FROM employee;
+  name | test_composite_table_input 
+ ------+----------------------------
+  John |                         110
+  Mary |                         210
+ (2 rows)
+ 
+ CREATE TYPE named_pair AS (
+ i integer,
+ j integer
+ );
+ CREATE OR REPLACE FUNCTION test_composite_type_input(p named_pair) RETURNS integer AS $$
+ return sum(p.values())
+ $$ LANGUAGE plpython3u;
+ SELECT test_composite_type_input(row(1, 2));
+  test_composite_type_input 
+ ---------------------------
+                          3
+ (1 row)
+ 
+ ALTER TYPE named_pair RENAME TO named_pair_2;
+ SELECT test_composite_type_input(row(1, 2));
+  test_composite_type_input 
+ ---------------------------
+                          3
+ (1 row)
+ 
  --
  -- Prepared statements
  --
diff --git a/src/pl/plpython/plpython.c b/src/pl/plpython/plpython.c
index fff7de7..3c0f35b 100644
*** a/src/pl/plpython/plpython.c
--- b/src/pl/plpython/plpython.c
*** typedef int Py_ssize_t;
*** 101,106 
--- 101,107 
  #include 

Re: [HACKERS] pl/python tracebacks

2011-02-09 Thread Jan Urbański
On 06/02/11 20:12, Jan Urbański wrote:
 On 27/01/11 22:58, Jan Urbański wrote:
 On 23/12/10 14:56, Jan Urbański wrote:
 Here's a patch implementing traceback support for PL/Python mentioned in
 http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
  an incremental patch on top of the plpython-refactor patch sent earlier.

 Updated to master.
 
 Updated to master again.

Once more.
diff --git a/src/pl/plpython/expected/plpython_do.out b/src/pl/plpython/expected/plpython_do.out
index a21b088..fb0f0e5 100644
*** a/src/pl/plpython/expected/plpython_do.out
--- b/src/pl/plpython/expected/plpython_do.out
*** NOTICE:  This is plpythonu.
*** 3,6 
--- 3,9 
  CONTEXT:  PL/Python anonymous code block
  DO $$ nonsense $$ LANGUAGE plpythonu;
  ERROR:  NameError: global name 'nonsense' is not defined
+ DETAIL:  Traceback (most recent call last):
+   PL/Python anonymous code block, line 1, in <module>
+     nonsense 
  CONTEXT:  PL/Python anonymous code block
diff --git a/src/pl/plpython/expected/plpython_error.out b/src/pl/plpython/expected/plpython_error.out
index 7597ca7..08b6ba4 100644
*** a/src/pl/plpython/expected/plpython_error.out
--- b/src/pl/plpython/expected/plpython_error.out
*** SELECT sql_syntax_error();
*** 35,40 
--- 35,43 
ERROR:  plpy.SPIError: syntax error at or near "syntax"
  LINE 1: syntax error
  ^
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "sql_syntax_error", line 1, in <module>
+     plpy.execute("syntax error")
QUERY:  syntax error
CONTEXT:  PL/Python function "sql_syntax_error"
  /* check the handling of uncaught python exceptions
*** CREATE FUNCTION exception_index_invalid(
*** 45,50 
--- 48,56 
  	LANGUAGE plpythonu;
  SELECT exception_index_invalid('test');
  ERROR:  IndexError: list index out of range
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "exception_index_invalid", line 1, in <module>
+     return args[1]
CONTEXT:  PL/Python function "exception_index_invalid"
  /* check handling of nested exceptions
   */
*** SELECT exception_index_invalid_nested();
*** 57,62 
--- 63,71 
  ERROR:  plpy.SPIError: function test5(unknown) does not exist
  LINE 1: SELECT test5('foo')
 ^
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "exception_index_invalid_nested", line 1, in <module>
+     rv = plpy.execute("SELECT test5('foo')")
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
QUERY:  SELECT test5('foo')
CONTEXT:  PL/Python function "exception_index_invalid_nested"
*** return None
*** 75,80 
--- 84,92 
  	LANGUAGE plpythonu;
  SELECT invalid_type_uncaught('rick');
ERROR:  plpy.SPIError: type "test" does not exist
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "invalid_type_uncaught", line 3, in <module>
+     SD["plan"] = plpy.prepare(q, [ "test" ])
CONTEXT:  PL/Python function "invalid_type_uncaught"
  /* for what it's worth catch the exception generated by
   * the typo, and return None
*** return None
*** 121,126 
--- 133,141 
  	LANGUAGE plpythonu;
  SELECT invalid_type_reraised('rick');
ERROR:  plpy.Error: type "test" does not exist
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "invalid_type_reraised", line 6, in <module>
+     plpy.error(str(ex))
CONTEXT:  PL/Python function "invalid_type_reraised"
  /* no typo no messing about
   */
*** SELECT valid_type('rick');
*** 140,145 
--- 155,255 
   
  (1 row)
  
+ /* error in nested functions to get a traceback
+ */
+ CREATE FUNCTION nested_error() RETURNS text
+ 	AS
+ 'def fun1():
+ 	plpy.error("boom")
+ 
+ def fun2():
+ 	fun1()
+ 
+ def fun3():
+ 	fun2()
+ 
+ fun3()
+ return "not reached"
+ '
+ 	LANGUAGE plpythonu;
+ SELECT nested_error();
+ ERROR:  plpy.Error: boom
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "nested_error", line 10, in <module>
+     fun3()
+   PL/Python function "nested_error", line 8, in fun3
+     fun2()
+   PL/Python function "nested_error", line 5, in fun2
+     fun1()
+   PL/Python function "nested_error", line 2, in fun1
+     plpy.error("boom")
+ CONTEXT:  PL/Python function "nested_error"
+ /* raising plpy.Error is just like calling plpy.error
+ */
+ CREATE FUNCTION nested_error_raise() RETURNS text
+ 	AS
+ 'def fun1():
+ 	raise plpy.Error("boom")
+ 
+ def fun2():
+ 	fun1()
+ 
+ def fun3():
+ 	fun2()
+ 
+ fun3()
+ return "not reached"
+ '
+ 	LANGUAGE plpythonu;
+ SELECT nested_error_raise();
+ ERROR:  plpy.Error: boom
+ DETAIL:  Traceback (most recent call last):
+   PL/Python function "nested_error_raise", line 10, in <module>
+     fun3()
+   PL/Python function "nested_error_raise", line 8, in fun3
+     fun2()
+   PL/Python function "nested_error_raise", line 5, in fun2
+     fun1()
+   PL/Python function "nested_error_raise", line 2, in fun1
+     raise plpy.Error("boom")
+ CONTEXT:  PL/Python function 

Re: [HACKERS] Visual Studio 2010/Windows SDK 7.1 support

2011-02-09 Thread Magnus Hagander
On Sun, Feb 6, 2011 at 23:06, Brar Piening b...@gmx.de wrote:
 On Sun, 30 Jan 2011 21:26:22 +0100, Magnus Hagander mag...@hagander.net
 wrote:

 it's not something we should hold up the CF / release for.

 I agree.
 At least it should get some more testing besides mine.

 I've set up virtual machines with VS 2003, VS 2005 Express, VS 2008 Express
 (+ my PC with VS 2010) for testing purposes but I didn't test all possible
 build paths with respect to the external libraries to include.

Yeah, the external libraries are really the biggest thing.

 While I didn't change much of the existing VS 2005/8 code I currently can't
 guarantee that the VS 2010 build will work for every possible external
 library one could include (yet I didn't stumble into any failure while
 testing) and still I could have broken some VS 2005/8 build path too.
 The patch could also be extended to automatically support building libpq
 when VS 2003 is detected or support other desireable features that aren't
 really in the context of supporting VS 2010.

 Being somewhat short of time in the next weeks I'm at least willing to
 rebase the patch on request and do some more testing or fix issues someone
 else has detected before the next release (9.2?) goes beta.

Sounds good.


 If there's some pressure to support VS 2010 asap - please let me know and
 I'll see what I can do.

I don't think there is, really. It's a nice to have, but if it comes
in 9.2 instead of 9.1, I don't think that's a problem. 99.9% of all
Win32 users don't build from source in the first place, and I'm sure
Dave is happy not to have to deal with another version ;)

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-09 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Like ALTER THING SET SCHEMA, ALTER THING SET EXTENSION is implicitly
 assuming that there can be only one owning extension for an object.

Yes, I worked from the SET SCHEMA variant and mentally mapped SET
EXTENSION there; it looked like the same idea applied to another
property of the object.

 Furthermore, it's not really intended for *removal* of an object from an
 extension (a concept that doesn't even exist for SET SCHEMA).  We could
 take a page from COMMENT ON and use SET EXTENSION NULL for that, but
 that's surely more of a hack than anything else.

 In contrast, ALTER EXTENSION ADD doesn't presuppose that you couldn't
 add the object to multiple extensions; and it has a natural inverse,

Well I wouldn't want to get there.  I'm not seeing what use case we
would solve by having more than one extension install the same object, I
would rather have a common extension that the others depend on.

 ALTER EXTENSION DROP.  I am not necessarily suggesting that we will ever
 allow either of those things, but I do suggest that we should pick a
 syntax that doesn't look like it's being forced to conform if we ever
 want to do it.  The DROP case at least seems like it might be wanted
 in the relatively near future.

I didn't think of that case because I would think the upgrade script
will just DROP OBJECT instead.  But in some cases I can see extension
authors wanting to ALTER EXTENSION DROP OBJECT in their upgrade script
and provide a second-stage script or procedure to clean up the database
once upgraded.  Only when you don't need the object anymore can you drop
it entirely.

I'm not sure how contrived the use case is here, but I agree that being
prepared for it makes sense.

Adding more than one object in one command is not of great value, I
think, because you still have to lock each object individually, and
that's transaction bound.  Unlike ALTER TABLE … ADD COLUMN where it's a
huge benefit to be able to lock and update the table only once for a
number of columns (add and drops).

But at the same time once the work is done, adding some syntax
flexibility and a loop or two doesn't look too bad if you wanted to get
there.  Well no strong opinion as I'm not doing the work :)

As far as upgrade script for contrib extensions are concerned, we will
be able to produce them from SQL, right?  The trick is to install the
extension first, of course.

  CREATE EXTENSION foo;

  CREATE SCHEMA empty_place;
  SET search_path TO empty_place;

  SELECT 'ALTER EXTENSION ' || E.extname || ' ADD '
  || replace(pg_describe_object(classid, objid, 0),
 N.nspname, '@extschema@')
  || ';'
FROM pg_depend D
 JOIN pg_extension E ON D.refobjid = E.oid
AND D.refclassid = E.tableoid
 JOIN pg_namespace N ON E.extnamespace = N.oid
  WHERE deptype = 'e' AND E.extname = 'foo';

I think it would be a good idea to have that in the documentation to
help authors prepare their first upgrade script.  Well, to the extent
that a previous form of it is included in the docs I've put in the
upgrade patch :)

So replacing those scripts I've been working on to switch to the new
syntax would be a matter of running a shell script.  The time consuming
part is definitely the testing, but that too can be scripted.

  DROP EXTENSION foo;
  \i path/to/share/contrib/foo.sql
  create wrapper extension foo;
  alter extension foo upgrade;
  \dx foo

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries

2011-02-09 Thread Thom Brown
On 9 February 2011 02:11, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Feb 8, 2011 at 8:30 PM, Andrew Dunstan and...@dunslane.net wrote:
 Quite right, but the commitfest manager isn't meant to be a substitute for
 one. Bug fixes aren't subject to the same restrictions as feature changes.

 Another option would be to add this here:

 http://wiki.postgresql.org/wiki/PostgreSQL_9.1_Open_Items

I've removed it from the commitfest because it really doesn't belong
there, and I've added it to the open items list.

Thanks

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Move WAL warning

2011-02-09 Thread Magnus Hagander
On Thu, Feb 3, 2011 at 11:19, Magnus Hagander mag...@hagander.net wrote:
 On Wed, Feb 2, 2011 at 18:00, Magnus Hagander mag...@hagander.net wrote:
 On Wed, Feb 2, 2011 at 17:43, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 On 02.02.2011 16:36, Magnus Hagander wrote:

 When running pg_basebackup with -x to include all transaction log, the
 server will still throw a warning about xlog archiving if it's not
 enabled - that is completely irrelevant since pg_basebackup has
 included it already (and if it was gone, the base backup step itself
 will fail - actual error and not warning).

 This patch moves the warning from do_pg_base_backup to pg_base_backup,
 so it still shows when using the explicit function calls, but goes
 away when using pg_basebackup.

 For the sake of consistency, how about moving the "pg_stop_backup complete,
 all required WAL segments have been archived" notice too?

 Well, it goes out as a NOTICE, so by default it doesn't show.. But
 yeah, for code-consistency it makes sense. Like so, then.

 Thinking some more about it, I realized this is not going to be enough
 - we need to be able to turn off the waiting for WAL segment as well,
 in the case when you're streaming the log. Thus, it needs to be
 controllable from the backup client, and we can't just assume the
 default is ok.

 Attached is an updated patch that adds a NOWAIT option to BASE_BACKUP,
 that turns off the waiting. If it's set, it also doesn't warn about
 not being able to wait in the case when there is nothing to wait for,
 so this is a replacement for the previous patch.

Applied.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] git commitmessages url format change

2011-02-09 Thread Magnus Hagander
Hi!

I've changed the format of the URLs in the git commit messages so they
no longer contain a semicolon, since a number of people reported that
made them stop working for users of gmail (which is a fair amount of
users..) They'll now go to /pg/commitdiff/hash instead which will
redirect back to the full gitweb URL.

Thus, if you're a gmail user, you can now expect the commit messages
to work again.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] git commitmessages url format change

2011-02-09 Thread Pavel Stehule
2011/2/9 Magnus Hagander mag...@hagander.net:
 Hi!

 I've changed the format of the URLs in the git commit messages so they
 no longer contain a semicolon, since a number of people reported that
 made them stop working for users of gmail (which is a fair amount of
 users..) They'll now go to /pg/commitdiff/hash instead which will
 redirect back to the full gitweb URL.

 Thus, if you're a gmail user, you can now expect the commit messages
 to work again.


Thank you very much

see you early

Pavel

 --
  Magnus Hagander
  Me: http://www.hagander.net/
  Work: http://www.redpill-linpro.com/

 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Heikki Linnakangas

On 09.02.2011 00:04, Kevin Grittner wrote:

(1)  When HTABs are created, there is the max_size, which is what
the PredicateLockShmemSize function must use in its calculations,
and the init_size, which is what will initially be allocated (and
so, is probably what you see in the usage at the end of the
InitPredLocks function).  That's normally set to half the maximum.


Oh, I see.


(2)  The predicate lock and lock target initialization code was
initially copied and modified from the code for heavyweight locks.
The heavyweight lock code adds 10% to the calculated maximum size.
So I wound up doing that for PredicateLockTargetHash and
PredicateLockHash, but didn't do it for SerializableXidHash.
Should I eliminate this from the first two, add it to the third, or
leave it alone?


I'm inclined to eliminate it from the first two. Even in 
LockShmemSize(), it seems a bit weird to add a safety margin, the sizes 
of the lock and proclock hashes are just rough estimates anyway.



So if the space was all in HTABs, you might expect shmemsize to be
110% of the estimated maximum, and actual (at the end of the init
function) to be 50% of the estimated maximum.  So the shmemsize
would be (2.2 * actual) at that point.  The difference isn't that
extreme because the list-based pools now used for some structures
are allocated at full size without padding.

In addition to the omission of the RWConflictPool (which is a
biggie), the OldSerXidControlData estimate was only for a *pointer*
to it, not the structure itself.  The attached patch should correct
the shmemsize numbers.


The actual and estimated shmem sizes still didn't add up, I still saw 
actual usage much higher than estimated size, with max_connections=1000 
and max_predicate_locks_per_transaction=10. It turned out to be because:


* You missed that RWConflictPool is sized five times as large as 
SerializableXidHash, and


* The allocation for RWConflictPool elements was wrong, while the 
estimate was correct.


With these changes, the estimated and actual sizes match closely, so 
that actual hash table sizes are 50% of the estimated size as expected.


I fixed those bugs, but this doesn't help with the buildfarm members 
with limited shared memory yet.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Fujii Masao
On Wed, Feb 9, 2011 at 3:53 AM, Robert Haas robertmh...@gmail.com wrote:
 That having been said, there is at least one part of this patch which
 looks to be in pretty good shape and seems independently useful
 regardless of what happens to the rest of it, and that is the code
 that sends replies from the standby back to the primary.  This allows
 pg_stat_replication to display the write/flush/apply log positions on
 the standby next to the sent position on the primary, which as far as
 I am concerned is pure gold.  Simon had this set up to happen only
 when synchronous replication or XID feedback is in use, but I think
 people are going to want it even with plain old asynchronous
 replication, because it provides a FAR easier way to monitor standby
 lag than anything we have today.  I've extracted this portion of the
 patch, cleaned it up a bit, written docs, and attached it here.

What about also sending back the timestamp of the last applied
transaction? That's more user-friendly than the apply location
when we calculate the lag of replication, I think.
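
Something along these lines on the standby would then be enough for a lag
check (a sketch; the function name is only an assumption here, nothing
with that name exists yet):

  SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;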

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Blocking Issue

2011-02-09 Thread Sander, Ingo (NSN - DE/Munich)
Hi, 
I have create the following tables: 
1. rnc table
CREATE TABLE act_rnc(rnc_id integer NOT NULL PRIMARY KEY, rnc_data BYTEA);
2. rncgen table
CREATE TABLE act_rncgen(rnc_id integer NOT NULL PRIMARY KEY, rncsubObj_Cnt 
integer, rncgen_data BYTEA);
3. iuo table which has a foreign key reference to rnc table
CREATE TABLE act_iuo(iuo_id integer NOT NULL primary key, rnc_id integer NOT 
NULL, iuo_data BYTEA, FOREIGN KEY(rnc_id) references act_rnc(rnc_id) on delete 
cascade);

Now I open two transactions (separate sessions with psql). In the first 
transaction I issue the following SQL sequence: 
begin; 
update act_rnc set rnc_data='rnc_data' where rnc_id=1;

The transaction will be open. 

In the second transaction I issue the following SQL sequence: 
begin; 
insert into act_iuo values (1,1,'iuo_data');

-- now the second transaction is blocked. I work with PostgreSQL 9.0. 

Some outputs: 
select * from pg_locks;
   locktype    | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |       mode       | granted 
---------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+------------------+---------
 tuple         |    16385 |    16427 |    0 |     8 |            |               |         |       |          | 3/80               |  9230 | ShareLock        | t
 relation      |    16385 |    10985 |      |       |            |               |         |       |          | 4/247              | 16535 | AccessShareLock  | t
 virtualxid    |          |          |      |       | 4/247      |               |         |       |          | 4/247              | 16535 | ExclusiveLock    | t
 relation      |    16385 |    16443 |      |       |            |               |         |       |          | 3/80               |  9230 | RowExclusiveLock | t
 transactionid |          |          |      |       |            |           584 |         |       |          | 3/80               |  9230 | ExclusiveLock    | t
 virtualxid    |          |          |      |       | 3/80       |               |         |       |          | 3/80               |  9230 | ExclusiveLock    | t
 relation      |    16385 |    16433 |      |       |            |               |         |       |          | 3/80               |  9230 | AccessShareLock  | t
 relation      |    16385 |    16427 |      |       |            |               |         |       |          | 5/535              |  2814 | RowExclusiveLock | t
 virtualxid    |          |          |      |       | 5/535      |               |         |       |          | 5/535              |  2814 | ExclusiveLock    | t
 transactionid |          |          |      |       |            |           583 |         |       |          | 5/535              |  2814 | ExclusiveLock    | t
 relation      |    16385 |    16449 |      |       |            |               |         |       |          | 3/80               |  9230 | RowExclusiveLock | t
 relation      |    16385 |    16427 |      |       |            |               |         |       |          | 3/80               |  9230 | RowShareLock     | t
 transactionid |          |          |      |       |            |           583 |         |       |          | 3/80               |  9230 | ShareLock        | f
 relation      |    16385 |    16433 |      |       |            |               |         |       |          | 5/535              |  2814 | RowExclusiveLock | t
(14 rows)

select relname, pg_class.oid from pg_class;
 act_rnc_pkey| 16433
 pg_inherits_parent_index|  2187
 pg_inherits_relid_seqno_index   |  2680
 pg_toast_16435  | 16438
 pg_trigger_oid_index|  2702
 pg_toast_16435_index| 16440
 act_rncgen  | 16435
 act_rncgen_pkey | 16441
 pg_toast_16443  | 16446
 pg_toast_16443_index| 16448
 act_iuo_pkey| 16449
 pg_amop |  2602
 act_iuo | 16443
 pg_largeobject  |  2613
 act_rnc | 16427
 pg_toast_11361  | 11363
 pg_toast_11361_index| 11365
 pg_toast_11366_index| 11370

I assume that the access to act_rnc_pkey causes the blocking, but why? And 
how can I resolve the blocking (committing one transaction solves the problem, 
but should Postgres not recognize the blocking situation and release one 
transaction?). Is this an error in Postgres?
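
A query like this pairs the waiter with the holder in the pg_locks output
above (a sketch; it matches the ungranted ShareLock on transactionid 583):

  SELECT w.pid AS waiting_pid, h.pid AS holding_pid, w.transactionid
    FROM pg_locks w
    JOIN pg_locks h ON h.transactionid = w.transactionid AND h.granted
   WHERE NOT w.granted;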

Best Regards
Ingo Sander

Best Regards/mfG
Ingo Sander
=
Nokia Siemens Networks GmbH Co. KG
NWS EP IV Platf Technical Service DE

St.-Martin-Str. 76
D-81541 München
*Tel.:  

Re: [HACKERS] Blocking Issue

2011-02-09 Thread Heikki Linnakangas

On 09.02.2011 12:15, Sander, Ingo (NSN - DE/Munich) wrote:

Hi,
I have create the following tables:
1. rnc table
CREATE TABLE act_rnc(rnc_id integer NOT NULL PRIMARY KEY, rnc_data BYTEA);
2. rncgen table
CREATE TABLE act_rncgen(rnc_id integer NOT NULL PRIMARY KEY, rncsubObj_Cnt 
integer, rncgen_data BYTEA);
3. iuo table which has a foreign key reference to rnc table
CREATE TABLE act_iuo(iuo_id integer NOT NULL primary key, rnc_id integer NOT 
NULL, iuo_data BYTEA, FOREIGN KEY(rnc_id) references act_rnc(rnc_id) on delete 
cascade);

Now I open two transactions (separate sessions with psql). In the first 
transaction I issue the following SQL sequence:
begin;
update act_rnc set rnc_data='rnc_data' where rnc_id=1;

The transaction will be open.

In the second transaction I issue the following SQL sequence: 
begin;
insert into act_iuo values (1,1,'iuo_data');

--  now the second transaction is blocked. I work with PostgreSQL 9.0.
...

I assume that the access to act_rnc_pkey causes the blocking, but why? And 
how can I resolve the blocking (committing one transaction solves the problem, 
but should Postgres not recognize the blocking situation and release one 
transaction?). Is this an error in Postgres?


The foreign key causes the blocking. PostgreSQL doesn't make a 
distinction on which columns are updated, as far as locking is 
concerned. If the update was "update act_rnc set rnc_id=2 where 
rnc_id=1", the insert would have to block to see if the update commits 
or not - if it commits the insert would violate the foreign key and 
needs to be aborted, but if it aborts the insert can succeed. With your 
original example, the insert could go ahead in either case without 
violating the foreign key, since the update doesn't change rnc_id field, 
but PostgreSQL doesn't pay attention to that detail.
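
In other words, the INSERT's foreign-key trigger effectively runs something 
like this in the second transaction, and queues behind the uncommitted 
UPDATE of that row (a sketch of the mechanism, not the literal trigger 
code):

  SELECT 1 FROM act_rnc WHERE rnc_id = 1 FOR SHARE;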


There's actually a patch in the current commitfest, awaiting review, to 
address exactly that scenario. See 
https://commitfest.postgresql.org/action/patch_view?id=502 and 
http://archives.postgresql.org/message-id/1294953201-sup-2...@alvh.no-ip.org.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transaction-scope advisory locks

2011-02-09 Thread Itagaki Takahiro
On Thu, Feb 3, 2011 at 00:24, Marko Tiikkaja
marko.tiikk...@cs.helsinki.fi wrote:
 .. and here's the patch.  I'm not too confident with the code I added to
 storage/lmgr/lock.c, but it seems to be working.

Sorry for the delayed review.

The patch needs adjustment of OIDs for recent commits, but it still works
well. See the attached small fix.  The patch looks almost ready to commit
unless we want to fix the pg_locks issue below.

=== Features ===
Now unlock functions only release session-level locks and the behavior
is documented, so no confusion here. We don't have upgrade method
for advisory locks actually -- session and xact locks block each other,
but they are acquired and released independently.
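
To illustrate the two scopes side by side (a sketch; pg_advisory_xact_lock()
as the name of the transaction-level variant is an assumption based on my
reading of the patch):

  BEGIN;
  SELECT pg_advisory_lock(1);        -- session level: survives COMMIT
  SELECT pg_advisory_xact_lock(2);   -- transaction level: released at COMMIT
  COMMIT;
  SELECT pg_advisory_unlock(1);      -- session locks still need explicit release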

One issue might be in pg_locks, as you pointed out in the previous mail:
 if a session holds both a transaction level and a session level lock
 on the same resource, only one of them will appear in pg_locks.
Also, we cannot distinguish transaction-level locks from session-level
locks from pg_locks.

It was not an issue before because session locks are only used in
internal implementation. It looks as a transaction from users.
However, this feature reveals the status in public. We might need
to add some bits to shared lock state to show which lock is session-level.

=== Implementation ===
* pg_advisory_unlock_all() calls LockReleaseSession(), and it releases
not only advisory locks but also all session-level locks.
We use session-level locks in some places, but there is no chance
for user to send SQL commands during the lock. The behavior is safe
as of now, but it might break something in the future.
So I'd recommend to keep locktype checks in it.

* user_lockmethod.transactional was changed to 'true', so we don't have
any differences between it and default_lockmethod except trace_flag.
LockMethodData is now almost useless, but we could keep it for compatibility.

 Earlier there was some discussion about adding regression tests for advisory
 locks.  However, I don't see where they would fit in our current .sql files
 and adding a new one just for a few tests didn't seem right.  Anyone have an
 idea where they should go or should I just add a new one?

I think you can add advisory_lock.sql for the test.

-- 
Itagaki Takahiro


advisory4fix.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] create an extension of postgresql 9 with Visual C++ 2008 express edition

2011-02-09 Thread michel wildcat
Hello,
I am a computer student in Belgium. For my academic project I am working on
an extension of PostgreSQL 9 under Windows XP, by creating a DLL in Visual C++
2008 - I am new to both environments. To start and understand how it works,
I tried to compile the example complex.c which is in the /tutorial directory
of the PostgreSQL sources, but unfortunately I get lots of compilation
errors, although I added the various necessary includes. I would like to
know if there is a kind of tutorial that explains how to develop a DLL
for PostgreSQL using Visual C++ Express (the necessary configuration, the
libraries to include, etc.), based on the /tutorial/complex.c file for
example.
Thank you in advance.


Re: [HACKERS] exposing COPY API

2011-02-09 Thread Shigeru HANADA
On Tue, 8 Feb 2011 08:49:36 -0500
Robert Haas robertmh...@gmail.com wrote:
 On Tue, Feb 8, 2011 at 4:42 AM, Shigeru HANADA
 han...@metrosystems.co.jp wrote:
  I'll submit a revised file_fdw patch after removing the IsForeignTable()
  catalog lookup along the lines of Heikki's proposal.
 
 So I'm a bit confused.  I don't see the actual copy API change patch
 anywhere here.  Are we close to getting something committed there?

I'm sorry but I might have missed your point...

I replied here to answer to Itagaki-san's mention about typos in
file_fdw patch.

Or do you mean that file_fdw should not depend on the copy API change
patch?

Regards,
--
Shigeru Hanada



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Peter Eisentraut
On mån, 2011-02-07 at 12:55 -0500, Robert Haas wrote:
 On Mon, Feb 7, 2011 at 12:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Robert Haas robertmh...@gmail.com writes:
  ... Well, the current CommitFest ends in one week, ...
 
  Really?  I thought the idea for the last CF of a development cycle was
  that it kept going till we'd dealt with everything.  Arbitrarily
  rejecting stuff we haven't dealt with doesn't seem fair.
 
 Uh, we did that with 8.4 and it was a disaster.  The CommitFest lasted
 *five months*. We've been doing schedule-based CommitFests ever since
 and it's worked much better.

The previous three commit fests contained about 50 patches each and
lasted one month each.  The current commit fest contains about 100
patches, so it shouldn't be surprising that it will take about 2 months
to get through it.

Moreover, under the current process, it is apparent that reviewing is
the bottleneck.  More code gets written than gets reviewed.  By
insisting on the current schedule, we would just push the growing review
backlog ahead of ourselves.  The solution (at least short-term, while
maintaining the process) has to be to increase the resources (in
practice: time) dedicated to reviewing relative to coding.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Andrew Dunstan



On 02/09/2011 07:53 AM, Peter Eisentraut wrote:

On mån, 2011-02-07 at 12:55 -0500, Robert Haas wrote:

On Mon, Feb 7, 2011 at 12:43 PM, Tom Lanet...@sss.pgh.pa.us  wrote:

Robert Haasrobertmh...@gmail.com  writes:

... Well, the current CommitFest ends in one week, ...

Really?  I thought the idea for the last CF of a development cycle was
that it kept going till we'd dealt with everything.  Arbitrarily
rejecting stuff we haven't dealt with doesn't seem fair.

Uh, we did that with 8.4 and it was a disaster.  The CommitFest lasted
*five months*. We've been doing schedule-based CommitFests ever since
and it's worked much better.

The previous three commit fests contained about 50 patches each and
lasted one month each.  The current commit fest contains about 100
patches, so it shouldn't be surprising that it will take about 2 months
to get through it.

Moreover, under the current process, it is apparent that reviewing is
the bottleneck.  More code gets written than gets reviewed.  By
insisting on the current schedule, we would just push the growing review
backlog ahead of ourselves.  The solution (at least short-term, while
maintaining the process) has to be to increase the resources (in
practice: time) dedicated to reviewing relative to coding.



Personally I think it's not unreasonable to extend the final commitfest 
of the release some. It doesn't need to be a huge amount longer, 
certainly not five months, but a couple of weeks to a month might be fair.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 On 02/09/2011 07:53 AM, Peter Eisentraut wrote:
 The previous three commit fests contained about 50 patches each and
 lasted one month each.  The current commit fest contains about 100
 patches, so it shouldn't be surprising that it will take about 2 months
 to get through it.

 Personally I think it's not unreasonable to extend the final commitfest 
 of the release some. It doesn't need to be a huge amount longer, 
 certainly not five months, but a couple of weeks to a month might be fair.

Yeah.  IIRC, in our first cycle using the CF process, we expected the
last CF to take longer than others.  I am not sure where the idea came
from that we'd be able to finish this one in a month.

I do accept the fact that we mustn't let it drag on indefinitely.
But two months instead of one isn't indefinite, and it seems more
realistic given the amount of work to be done.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-09 Thread Tom Lane
Dimitri Fontaine dimi...@2ndquadrant.fr writes:
 As far as upgrade script for contrib extensions are concerned, we will
 be able to produce them from SQL, right?

Hm, interesting idea, but I'm afraid that pg_describe_object doesn't
produce exactly the syntax you need.

I had personally been thinking of generating the contrib upgrade scripts
via search-and-replace on the existing uninstall scripts.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-09 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 Hm, interesting idea, but I'm afraid that pg_describe_object doesn't
 produce exactly the syntax you need.

It's very close.  I've produced the previous set like that, and the only
problems I had were with operator class and family objects, and with
array types.  In both cases a very simple replace can be used, like
replacing int[] with _int, and for access method with using.
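
For instance, something like this shows the mismatch (a sketch; the exact
output wording is from memory):

  -- yields e.g. 'operator family integer_ops for access method btree',
  -- while ALTER EXTENSION ... ADD wants 'OPERATOR FAMILY integer_ops USING btree'
  SELECT pg_describe_object('pg_opfamily'::regclass, f.oid, 0)
    FROM pg_opfamily f
   LIMIT 1;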

So you just add a CASE in the SELECT I proposed.  Well, I didn't do it
because I was not sure that it would still be needed with the API you're
using.

 I had personally been thinking of generating the contrib upgrade scripts
 via search-and-replace on the existing uninstall scripts.

Maybe that would work too.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread David Fetter
On Tue, Feb 08, 2011 at 09:09:48PM -0500, Robert Haas wrote:
 If we don't allocate all the memory up front, does that allow memory
 to be dynamically shared between different hash tables in shared
 memory?  I'm thinking not, but...
 
 Frankly, I think this is an example of how our current shared memory
 model is a piece of garbage.

What other model(s) might work better?

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Kevin Grittner
Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote:
 
 (2)  The predicate lock and lock target initialization code was
 initially copied and modified from the code for heavyweight
 locks.  The heavyweight lock code adds 10% to the calculated
 maximum size.  So I wound up doing that for
 PredicateLockTargetHash and PredicateLockHash, but didn't do it
 for SerializableXidHash.  Should I eliminate this from the first
 two, add it to the third, or leave it alone?
 
 I'm inclined to eliminate it from the first two. Even in 
 LockShmemSize(), it seems a bit weird to add a safety margin, the
 sizes of the lock and proclock hashes are just rough estimates
 anyway.
 
I'm fine with that.  Trivial patch attached.
 
 * You missed that RWConflictPool is sized five times as large as 
 SerializableXidHash, and
 
 * The allocation for RWConflictPool elements was wrong, while the 
 estimate was correct.
 
 With these changes, the estimated and actual sizes match closely,
 so that actual hash table sizes are 50% of the estimated size as
 expected.
 
 I fixed those bugs
 
Thanks.  Sorry for missing them.
 
 but this doesn't help with the buildfarm members with limited
 shared memory yet.
 
Well, if dropping the 10% fudge factor on those two HTABs doesn't
bring it down far enough (which seems unlikely), what do we do?  We
could, as I said earlier, bring down the multiplier for the number
of transactions we track in SSI based on the maximum allowed
connections, but I would really want a GUC on it if we
do that.  We could bring down the default number of predicate locks
per transaction.  We could make the default configuration more
stingy about max_connections when memory is this tight.  Other
ideas?
 
I do think that anyone using SSI with a heavy workload will need
something like the current values to see decent performance, so it
would be good if there was some way to do this which would tend to
scale up as they increased something.  Wild idea: make the
multiplier equivalent to the bytes of shared memory divided by 100MB
clamped to a minimum of 2 and a maximum of 10?
 
-Kevin
*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***
*** 1173,1184  PredicateLockShmemSize(void)
size = add_size(size, hash_estimate_size(max_table_size,
                                         sizeof(PREDICATELOCK)));
  
-   /*
-* Since NPREDICATELOCKTARGETENTS is only an estimate, add 10% safety
-* margin.
-*/
-   size = add_size(size, size / 10);
- 
/* transaction list */
max_table_size = MaxBackends + max_prepared_xacts;
max_table_size *= 10;
--- 1173,1178 

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-09 Thread Alexey Klyukin

On Feb 9, 2011, at 3:44 AM, Alex Hunsaker wrote:

 
 So the merge while not exactly trivial was fairly simple. However it
 would be great if you could give it another look over.
 
 Find attached v7 changes include:
 - rebased against HEAD
 - fix potential use of uninitialized dims[cur_depth]
 - took out accidental (broken) hack to try and support composite types
 in ::encode_array_literal (added in v4 or something)
 - make_array_ref() now uses plperl_hash_from_datum() for composite
 types instead of its own hand rolled version
 - get_perl_array_ref() now grabs the 'array' directly instead of
 through the magic interface for simplicity
 - moved added static declarations to the bottom instead of being
 half on top and half on bottom
 pg_to_perl_arrays_v7.patch.gz

Thank you very much; the new patch applies cleanly and passes all tests on my
system.  The new get_perl_array_ref seems much clearer to me than the
previous magic call.

What was actually broken in encode_array_literal's support of composite types
(it converted perl hashes to literal composite-type constants, expanding
nested arrays along the way)?  I think it would be a useful extension of the
existing encode_array_literal.

/A

--
Alexey Klyukin
The PostgreSQL Company - Command Prompt, Inc.





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Markus Wanner
On 02/09/2011 04:16 PM, David Fetter wrote:
 On Tue, Feb 08, 2011 at 09:09:48PM -0500, Robert Haas wrote:
 Frankly, I think this is an example of how our current shared memory
 model is a piece of garbage.
 
 What other model(s) might work better?

Thread based, dynamically allocatable and resizeable shared memory, as
most other projects and developers use, for example.

My dynshmem work is a first attempt at addressing the allocation part of
that.  It would theoretically allow more dynamic use of the overall
fixed amount of shared memory available (instead of requiring every
subsystem to use a fixed fraction of the overall available shared
memory, as is required now).

It was dismissed from CF 2010-07 for good reasons: lacking evidence of
usable performance, possible patent issues with the allocator chosen, and
lots of work for questionable benefit (existing subsystems would have to
be reworked to use that allocator).

For anybody interested, please search the archives for 'dynshmem'.

Regards

Markus Wanner

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Kevin Grittner
Dan Ports d...@csail.mit.edu wrote:
 
 I think for SerializableXidHash we should probably just initially
 allocate it at its maximum size. Then it'll match the PredXact
 list which is allocated in full upfront, and there's no risk of
 being able to allocate a transaction but not register its xid. In
 fact, I believe there would be no way for starting a new
 serializable transaction to fail.
 
To be more precise, it would prevent an out of shared memory error
during an attempt to register an xid for an active serializable
transaction.  That seems like a good thing.  Patch to remove the
hint and initially allocate that HTAB at full size attached.
 
I didn't attempt to address the larger general issue of one HTAB
stealing shared memory from space calculated to belong to another,
and then holding on to it until the postmaster is shut down.
 
-Kevin
*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***
*** 1018,1024  InitPredicateLocks(void)
 * PredicateLockShmemSize!
 */
max_table_size = (MaxBackends + max_prepared_xacts);
-   init_table_size = max_table_size / 2;
  
/*
 * Allocate a list to hold information on transactions participating in
--- 1018,1023 
***
*** 1029,1035  InitPredicateLocks(void)
 * be summarized for storage in SLRU and the dummy transaction.
 */
max_table_size *= 10;
-   init_table_size *= 10;
  
    PredXact = ShmemInitStruct("PredXactList",
                               PredXactListDataSize,
--- 1028,1033 
***
*** 1092,1098  InitPredicateLocks(void)
      hash_flags = (HASH_ELEM | HASH_FUNCTION);
  
      SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
!                                         init_table_size,
                                          max_table_size,
                                          &info,
                                          hash_flags);
--- 1090,1096 ----
      hash_flags = (HASH_ELEM | HASH_FUNCTION);
  
      SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
!                                         max_table_size,
                                          max_table_size,
                                          &info,
                                          hash_flags);
***
*** 1595,1604  RegisterPredicateLockingXid(const TransactionId xid)
                                               &sxidtag,
                                               HASH_ENTER, &found);
      if (!sxid)
          ereport(ERROR,
                  (errcode(ERRCODE_OUT_OF_MEMORY),
!                  errmsg("out of shared memory"),
!                  errhint("You might need to increase max_predicate_locks_per_transaction.")));
  
      Assert(!found);
  
--- 1593,1602 ----
                                               &sxidtag,
                                               HASH_ENTER, &found);
      if (!sxid)
+         /* This should not be possible, based on allocation. */
          ereport(ERROR,
                  (errcode(ERRCODE_OUT_OF_MEMORY),
!                  errmsg("out of shared memory")));
  
      Assert(!found);
  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Remove more SGML tabs.

2011-02-09 Thread David Fetter
On Wed, Feb 09, 2011 at 01:17:06PM +, Bruce Momjian wrote:
 Remove more SGML tabs.

Perhaps we should see about putting something in .git/hooks/pre-commit
so people can focus on more substantive matters.

Is there some kind of cross-platform way to do this?  I'm thinking
that given the fact that our build system already requires Perl, there
should be, but I'm not quite sure how this would be accomplished.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 9:42 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Andrew Dunstan and...@dunslane.net writes:
 On 02/09/2011 07:53 AM, Peter Eisentraut wrote:
 The previous three commit fests contained about 50 patches each and
 lasted one month each.  The current commit fest contains about 100
 patches, so it shouldn't be surprising that it will take about 2 months
 to get through it.

 Personally I think it's not unreasonable to extend the final commitfest
 of the release some. It doesn't need to be a huge amount longer,
 certainly not five months, but a couple of weeks to a month might be fair.

 Yeah.  IIRC, in our first cycle using the CF process, we expected the
 last CF to take longer than others.  I am not sure where the idea came
 from that we'd be able to finish this one in a month.

It came from the fact that we did it last time.

 I do accept the fact that we mustn't let it drag on indefinitely.
 But two months instead of one isn't indefinite, and it seems more
 realistic given the amount of work to be done.

The work will expand to fill the time available.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] create an extension of postgresql 9 with Visual C++ 2008 express edition

2011-02-09 Thread Kevin Grittner
michel wildcat m.wildca...@gmail.com wrote:
 
 I am a computer student in Belgium; for my academic project I am
 working on an extension of postgresql 9 under Win XP, by creating
 a DLL in Visual C++ 2008 - I am new in both environments.  To start
 and understand how it works, I tried to compile the example
 complex.c which is in the /tutorial directory of the postgresql
 sources, but unfortunately I get lots of compilation errors
 although I made the various necessary includes.
 
This is a little too vague and confusing to be able to offer much
advice.  Some copy and paste of attempts and the exact errors would
help.  In general, you might want to review this page for ideas of
how to best ask a question to elicit a useful response:
 
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
 
 Please, I would like to know if there is a kind of tutorial
 that explains how to develop a DLL for Postgresql using Visual C++
 Express (the necessary config, the libraries to include, etc.),
 based on the /tutorial/complex.c file for example.
 
I don't know of anything, but since I don't develop for that
environment, I haven't really dug deep looking for it.  Be sure to
review what is in the Wiki, especially the links from the Windows
category:
 
http://wiki.postgresql.org/wiki/Category:Windows
 
as well as following the links from the developer tab of the main
PostgreSQL site:
 
http://www.postgresql.org/developer/
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Remove more SGML tabs.

2011-02-09 Thread Magnus Hagander
On Feb 9, 2011 5:01 PM, David Fetter da...@fetter.org wrote:

 On Wed, Feb 09, 2011 at 01:17:06PM +, Bruce Momjian wrote:
  Remove more SGML tabs.

 Perhaps we should see about putting something in .git/hooks/pre-commit
 so people can focus on more substantive matters.

 Is there some kind of cross-platform way to do this?  I'm thinking
 that given the fact that our build system already requires Perl, there
 should be, but I'm not quite sure how this would be accomplished.

If you can write a reliable perl script for this, we can run it like
pgindent. No need to complicate it with hooks.

/Magnus


Re: [HACKERS] Per-column collation, the finale

2011-02-09 Thread Peter Eisentraut
On tis, 2011-02-08 at 22:17 +, Thom Brown wrote:
 postgres=# create table meow (id serial, stuff text collate de_XX);
 NOTICE:  CREATE TABLE will create implicit sequence "meow_id_seq" for
 serial column "meow.id"
 ERROR:  collation "de_XX" for current database encoding "UTF8" does not exist
 LINE 1: create table meow (id serial, stuff text collate de_XX);
 
 I wouldn't expect to see that first notice.  Shouldn't that step come
 a bit later?

This isn't much different from writing

create table meow (id serial, stuff nonsense);

You'll still get the notice before it errors out on type-not-found.

 A bit of weirdness, I'm allowed to specify more than one collation on
 a single column ordering...

 Is this the same principle as casting, where they can be chained?
 Which one wins in this case?

Yeah, last one wins.
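
For example (a sketch, assuming both collations are installed):

    -- Both COLLATE clauses parse; the last (outermost) one takes
    -- effect, so this sorts under "C", not "de_DE":
    SELECT stuff FROM meow ORDER BY stuff COLLATE "de_DE" COLLATE "C";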

 Also, if a locale is installed after initdb, is it then impossible to
 get pg_collate to pick up that new locale?

Currently, you can insert it yourself into pg_collation.  I have a
CREATE COLLATION patch in the works.
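
Roughly along these lines (a sketch only; the syntax is not final):

    -- hypothetical until that patch lands: register a collation for
    -- an OS locale that was installed after initdb
    CREATE COLLATION "de_XX" (LOCALE = 'de_XX.utf8');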

 If a locale is somehow
 removed from the system, what happens on the database side when
 attempting to use a collated column?

Then you're hosed, but that has always been the case, with per-cluster
and per-database locales.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 7:53 AM, Peter Eisentraut pete...@gmx.net wrote:
 Moreover, under the current process, it is apparent that reviewing is
 the bottleneck.  More code gets written than gets reviewed.  By
 insisting on the current schedule, we would just push the growing review
 backlog ahead of ourselves.  The solution (at least short-term, while
 maintaining the process) has to be to increase the resources (in
 practice: time) dedicated to reviewing relative to coding.

Yep.  People who submit patches must also review patches if they want
their own stuff reviewed.

It sounds to me like what's being proposed is that I should spend
another month working on other people's patches, while they work on
their own patches.  I can't get excited about that.  The situation
with reviewing has gotten totally out of hand.  I review and commit
more patches as part of each CommitFest than anyone except Tom, and I
think there have been some CommitFests where I did more patches than
he did (though he still wins by a mile if you factor in patch
complexity).   But on the flip side, I can't always get a reviewer for
my own patches, or sometimes I get a perfunctory review that someone
spent ten minutes on.  Huh?

So I heartily approve of the suggestion that we need to devote more
energy to reviewing, if it means more reviewing by the people who are
not me.  And allow me to suggest that that energy get put in NOW,
rather than a month from now.  Most of the patches that still need
review are not that complicated.  At least half of them could probably
be meaningfully reviewed in an hour or two.  Then the author could
post an update tomorrow.  Then the reviewer could spend another 30
minutes and mark them ready for committer.  Next!

There are certainly some patches in this CommitFest that need more
attention than that, and that probably need the attention of a senior
community member.  Jeff's range types patch and Alvaro's key lock
patch are two of those.  And I would be willing to do that, except
that I'm already listed as a reviewer for FOURTEEN PATCHES this
CommitFest, plus I committed some others that someone else reviewed
and am also functioning as CommitFest manager.  The problem isn't so
much the amount of calendar time that's required to get through 100
patches as the many people either submit half-baked code and assume
that they or someone else will fix it later, or else they submit code
but don't do an amount of review work equal to the amount of review
work they generate.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 10:38 AM, Markus Wanner mar...@bluegap.ch wrote:
 On 02/09/2011 04:16 PM, David Fetter wrote:
 On Tue, Feb 08, 2011 at 09:09:48PM -0500, Robert Haas wrote:
 Frankly, I think this is an example of how our current shared memory
 model is a piece of garbage.

 What other model(s) might work better?

 Thread based, dynamically allocatable and resizeable shared memory, as
 most other projects and developers use, for example.

Or less invasively, a small sysv shm to prevent the double-postmaster
problem, and allocate the rest using POSIX shm.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 7:38 AM, Shigeru HANADA
han...@metrosystems.co.jp wrote:
 On Tue, 8 Feb 2011 08:49:36 -0500
 Robert Haas robertmh...@gmail.com wrote:
 On Tue, Feb 8, 2011 at 4:42 AM, Shigeru HANADA
 han...@metrosystems.co.jp wrote:
  I'll submit revised file_fdw patch after removing IsForeignTable()
  catalog lookup along Heikki's proposal.

 So I'm a bit confused.  I don't see the actual copy API change patch
 anywhere here.  Are we close to getting something committed there?

 I'm sorry but I might have missed your point...

 I replied here to answer to Itagaki-san's mention about typos in
 file_fdw patch.

 Or, would you mean that file_fdw should not depend on copy API change
 patch?

I mean that this thread is entitled "exposing copy API", and I'm
wondering when and if the patch to expose the COPY API is going to be
committed.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transaction-scope advisory locks

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 7:12 AM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
 One issue might be in pg_locks, as you pointed out in the previous mail:
 if a session holds both a transaction level and a session level lock
 on the same resource, only one of them will appear in pg_locks.
 Also, we cannot distinguish transaction-level locks from session-level
 locks from pg_locks.

 It was not an issue before because session locks are only used in
 internal implementation. It looks as a transaction from users.
 However, this feature reveals the status in public. We might need
 to add some bits to shared lock state to show which lock is session-level.

Presumably that would carry a small performance penalty, since
changing the status of the lock would require modifications to the
shared hash table, not just the backend-private one.

It may still be worth doing, but I'm inclined to think that it's a
separate patch that someone could submit for 9.2.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-09 Thread Andrew Dunstan



On 02/09/2011 12:26 PM, Robert Haas wrote:

On Wed, Feb 9, 2011 at 7:38 AM, Shigeru HANADA
han...@metrosystems.co.jp  wrote:

On Tue, 8 Feb 2011 08:49:36 -0500
Robert Haasrobertmh...@gmail.com  wrote:

On Tue, Feb 8, 2011 at 4:42 AM, Shigeru HANADA
han...@metrosystems.co.jp  wrote:

I'll submit revised file_fdw patch after removing IsForeignTable()
catalog lookup along Heikki's proposal.

So I'm a bit confused.  I don't see the actual copy API change patch
anywhere here.  Are we close to getting something committed there?

I'm sorry but I might have missed your point...

I replied here to answer to Itagaki-san's mention about typos in
file_fdw patch.

Or, would you mean that file_fdw should not depend on copy API change
patch?

I mean that this thread is entitled "exposing copy API", and I'm
wondering when and if the patch to expose the COPY API is going to be
committed.



Itagaki-san published a patch for this about 12 hours ago in the 
file_fdw thread that looks pretty committable to me.


This whole API thing is a breakout from file_fdw, because the original 
file_fdw submission copied huge chunks of copy.c instead of trying to 
leverage it.


cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread David E. Wheeler
On Feb 9, 2011, at 9:20 AM, Robert Haas wrote:

 There are certainly some patches in this CommitFest that need more
 attention than that, and that probably need the attention of a senior
 community member.  Jeff's range types patch and Alvaro's key lock
 patch are two of those.  And I would be willing to do that, except
 that I'm already listed as a reviewer for FOURTEEN PATCHES this
 CommitFest, plus I committed some others that someone else reviewed
 and am also functioning as CommitFest manager.  The problem isn't so
 much the amount of calendar time that's required to get through 100
 patches as the fact that many people either submit half-baked code and assume
 that they or someone else will fix it later, or else they submit code
 but don't do an amount of review work equal to the amount of review
 work they generate.

Frankly, I think you should surrender some of those 14 and cajole some other 
folks to take on more.

Best,

David
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-09 Thread David E. Wheeler
On Feb 8, 2011, at 6:48 PM, Tom Lane wrote:

 Like ALTER THING SET SCHEMA, ALTER THING SET EXTENSION is implicitly
 assuming that there can be only one owning extension for an object.
 Furthermore, it's not really intended for *removal* of an object from an
 extension (a concept that doesn't even exist for SET SCHEMA).  We could
 take a page from COMMENT ON and use SET EXTENSION NULL for that, but
 that's surely more of a hack than anything else.
 
 In contrast, ALTER EXTENSION ADD doesn't presuppose that you couldn't
 add the object to multiple extensions; and it has a natural inverse,
 ALTER EXTENSION DROP.  I am not necessarily suggesting that we will ever
 allow either of those things, but I do suggest that we should pick a
 syntax that doesn't look like it's being forced to conform if we ever
 want to do it.  The DROP case at least seems like it might be wanted
 in the relatively near future.
 
 So that looks to me like a fairly good argument for the ADD syntax.

It feels a lot more natural to me, frankly. I'd tend to think about what's 
grouped into an extension, and look for the documentation related to extensions 
for how to add an object to an extension. I don't think it would occur to me, 
on first pass, to look in the ALTER FUNCTION docs for how to add a function to 
an extension.

In my mind, I'm modifying the extension, not the function.

Best,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 1:09 PM, David E. Wheeler da...@kineticode.com wrote:
 Frankly, I think you should surrender some of those 14 and cajole some other 
 folks to take on more.

Happily...  only trouble is, I suck at cajoling.  Even my begging is
distinctly sub-par.

Please?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing COPY API

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 12:45 PM, Andrew Dunstan and...@dunslane.net wrote:
 Itagaki-san published a patch for this about 12 hours ago in the
 file_fdw thread that looks pretty committable to me.

OK, excellent.

 This whole API thing is a breakout from file_fdw, because the original
 file_fdw submission copied huge chunks of copy.c instead of trying to
 leverage it.

Yeah, I remembered that, I just got mixed up because the two patches
were on the same thread, and the one that is the topic of this thread
was posted elsewhere.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] new clang report

2011-02-09 Thread Peter Eisentraut
The latest clang svn tip (2.9-to-be, I guess) builds PostgreSQL out of
the box and most tests pass.  Specifically, it no longer chokes on
-D_GNU_SOURCE on Linux, which was the previously reported blocker.

Warnings:

Lots of these:
clang: warning: argument unused during compilation: '-mthreads'
clang: warning: argument unused during compilation: '-mt'

Possible fix, check both link and compile invocations for warnings in
configure:

diff --git i/config/acx_pthread.m4 w/config/acx_pthread.m4
index ceb161a..ee181f9 100644
--- i/config/acx_pthread.m4
+++ w/config/acx_pthread.m4
@@ -142,7 +142,7 @@ main (int argc, char **argv)
 }
 _ACEOF
 rm -f conftest.$ac_objext conftest$ac_exeext
-if test "`(eval $ac_link 2>&1 1>&5)`" = ""; then
+if test "`(eval $ac_link 2>&1 1>&5)`" = "" && test "`(eval 
$ac_compile 2>&1 1>&5)`" = ""; then
 # we continue with more flags because Linux needs -lpthread
 # for libpq builds on PostgreSQL.  The test above only
 # tests for building binaries, not shared libraries.


The usual flex warning:

In file included from gram.y:12460:
scan.c:16256:23: warning: unused variable 'yyg' [-Wunused-variable]
struct yyguts_t * yyg = (struct yyguts_t*)yyscanner; /* This var may be 
unused depending upon options. */


And then only these two:

fe-exec.c:2408:13: warning: comparison of unsigned enum expression < 0 is 
always false [-Wtautological-compare]
if (status < 0 || status >= sizeof pgresStatus / sizeof pgresStatus[0])
    ~~~~~~ ^ ~

pg_standby.c:347:22: warning: comparison of unsigned expression >= 0 is always 
true [-Wtautological-compare]
if (tli > 0 && log >= 0 && seg > 0)
               ~~~ ^ ~


Regression tests (world):

--- src/test/regress/expected/float8.out
+++ src/test/regress/results/float8.out
@@ -384,7 +384,15 @@
 SELECT '' AS bad, f.f1 * '1e200' from FLOAT8_TBL f;
 ERROR:  value out of range: overflow
 SELECT '' AS bad, f.f1 ^ '1e200' from FLOAT8_TBL f;
-ERROR:  value out of range: overflow
+ bad | ?column? 
+-----+----------
+     |        0
+     |      NaN
+     |      NaN
+     |      NaN
+     |      NaN
+(5 rows)
+
 SELECT 0 ^ 0 + 0 ^ 1 + 0 ^ 0.0 + 0 ^ 0.5;
  ?column? 
 ----------

PL/Python test crashes.  I was able to make it work either by using -O0
or by applying the following patch: 

diff --git i/src/pl/plpython/plpython.c w/src/pl/plpython/plpython.c
index fff7de7..8eaee36 100644
--- i/src/pl/plpython/plpython.c
+++ w/src/pl/plpython/plpython.c
@@ -1019,12 +1019,13 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, 
PLyProcedure *proc, HeapTuple *r
 
 /* function handler and friends */
 static Datum
-PLy_function_handler(FunctionCallInfo fcinfo, PLyProcedure *proc)
+PLy_function_handler(FunctionCallInfo fcinfo, PLyProcedure *proc2)
 {
Datum   rv;
PyObject   *volatile plargs = NULL;
PyObject   *volatile plrv = NULL;
ErrorContextCallback plerrcontext;
+   PLyProcedure *volatile proc = proc2;
 
PG_TRY();
{

Hmmm.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread David E. Wheeler
On Feb 9, 2011, at 10:29 AM, Robert Haas wrote:

 Frankly, I think you should surrender some of those 14 and cajole some other 
 folks to take on more.
 
 Happily...  only trouble is, I suck at cajoling.  Even my begging is
 distinctly sub-par.
 
 Please?

Try this:

“Listen up, bitches! I'm tired of Tom and me having to do all the work. All of 
you who submitted patches need to review some other patches! If you haven't 
submitted a review for someone else's patch by commitfest end, your patches 
will be marked returned.”

Then maybe cuff Jeff or Alvaro or someone, to show you mean business.

HTH,

David


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 On Wed, Feb 9, 2011 at 1:09 PM, David E. Wheeler da...@kineticode.com wrote:
  Frankly, I think you should surrender some of those 14 and cajole some 
  other folks to take on more.
 
 Happily...  only trouble is, I suck at cajoling.  Even my begging is
 distinctly sub-par.
 
 Please?

Erm, I've been through the commitfest app a couple of different times,
but have ignored things which are marked 'Needs Review' when there's a
reviewer listed...

If there are patches where you're marked as the reviewer but you don't
have time to review them or want help, take your name off as a reviewer
for them and/or speak up and explicitly ask for help.  I'm not going to
start reviewing something if I think someone else is already working on
it..

Thanks,

Stephen


signature.asc
Description: Digital signature


[HACKERS] patches that could use additional reviewers

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 1:35 PM, Stephen Frost sfr...@snowman.net wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
 On Wed, Feb 9, 2011 at 1:09 PM, David E. Wheeler da...@kineticode.com 
 wrote:
  Frankly, I think you should surrender some of those 14 and cajole some 
  other folks to take on more.

 Happily...  only trouble is, I suck at cajoling.  Even my begging is
 distinctly sub-par.

 Please?

 Erm, I've been through the commitfest app a couple of different times,
 but have ignored things which are marked 'Needs Review' when there's a
 reviewer listed...

 If there are patches where you're marked as the reviewer but you don't
 have time to review them or want help, take your name off as a reviewer
 for them and/or speak up and explicitly ask for help.  I'm not going to
 start reviewing something if I think someone else is already working on
 it..

Of the fourteen I signed up for, 10 are now marked Committed or
Returned with Feedback.  Of the remaining four, there are two that
could use more eyes:

MULTISET functions
Change pg_last_xlog_receive_location not to move backwards

A few other ones that could use more reviewers include:

range types
key locks
widen scale factor limit from pgbench

And your patch could probably use another reviewer too, if anyone else
is looking for stuff to help with:

log_csv_fields ; add current_role log option

And there are a few patches with no reviewer at all.

PL/Python invalidate composite argument functions
PL/Python tracebacks
contrib/btree_gist  (submitted very late)
SQL/MED - file_fdw

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] git commitmessages url format change

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 5:05 AM, Magnus Hagander mag...@hagander.net wrote:
 Hi!

 I've changed the format of the URLs in the git commit messages so they
 no longer contain a semicolon, since a number of people reported that
 made them stop working for users of gmail (which is a fair amount of
 users..) They'll now go to /pg/commitdiff/<hash> instead which will
 redirect back to the full gitweb URL.

 Thus, if you're a gmail user, you can now expect the commit messages
 to work again.

Thanks, although it would be even nicer if gmail would fix their bug...  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 1:32 PM, David E. Wheeler da...@kineticode.com wrote:
 On Feb 9, 2011, at 10:29 AM, Robert Haas wrote:
 Frankly, I think you should surrender some of those 14 and cajole some 
 other folks to take on more.

 Happily...  only trouble is, I suck at cajoling.  Even my begging is
 distinctly sub-par.

 Please?

 Try this:

 “Listen up, bitches! I'm tired of Tom and me having to do all the work. All 
 of you who submitted patches need to review some other patches! If you 
 haven't submitted a review for someone else's patch by commitfest end, your 
 patches will be marked returned.”

 Then maybe cuff Jeff or Alvaro or someone, to show you mean business.

That tends not to get a lot of community support, and it isn't my
intention anyway.  We actually do not need to impose a draconian rule;
we just need everyone to put in a little extra effort to get us over
the hump.

But speaking of that, I just so happen to notice you haven't signed up
to review any patches this CF.  How about grabbing one or two?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread David E. Wheeler
On Feb 9, 2011, at 10:56 AM, Robert Haas wrote:

 “Listen up, bitches! I'm tired of Tom and me having to do all the work. All 
 of you who submitted patches need to review some other patches! If you 
 haven't submitted a review for someone else's patch by commitfest end, your 
 patches will be marked returned.”
 
 Then maybe cuff Jeff or Alvaro or someone, to show you mean business.
 
 That tends not to get a lot of community support, and it isn't my
 intention anyway.  We actually do not need to impose a draconian rule;
 we just need everyone to put in a little extra effort to get us over
 the hump.

Agreed. Let me remove my tongue from my cheek.

 But speaking of that, I just so happen to notice you haven't signed up
 to review any patches this CF.  How about grabbing one or two?

ha ha! Alas, I'm completely overcommitted at this point. Been having a hard 
time making time for PGXN. I've been tracking the extension stuff closely, 
though, as you can imagine.

Looking at the patches without reviewers anyway, frankly none look like the 
sorts of things I have the expertise to test in any but the most superficial 
way. Are there more that should have the reviewer removed? If there were one I 
could give a couple of hours to and speak with some knowledge, I could fix up 
some time next week.

Best,

David
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patches that could use additional reviewers

2011-02-09 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 Of the fourteen I signed up for, 10 are now marked Committed or
 Returned with Feedback.  Of the remaining four, there are two that
 could use more eyes:
 
 MULTISET functions

I'll work on this one.

 Change pg_last_xlog_receive_location not to move backwards

I'll take a look at this one too, but I'm not that familiar with the
xlog code, etc, so I'm not sure if I'll be able to comment on
correctness...

 A few other ones that could use more reviewers include:
 
 range types
 key locks

If I can get through the others, I'll try and come back and look at
these.

 widen scale factor limit from pgbench

I was already starting to look at this one, actually. :)

 And your patch could probably use another reviewer too, if anyone else
 is looking for stuff to help with:
 
 log_csv_fields ; add current_role log option

Not sure if it counts if I review it. ;)

 And there are a few patches with no reviewer at all.
 
 PL/Python invalidate composite argument functions
 PL/Python tracebacks

I thought from the other threads that we had someone working on the
PL/Python patches..? :/

 contrib/btree_gist  (submitted very late)

Looks like this one might just be committable w/o additional review, but
if it's still hanging around, I might be able to help.

 SQL/MED - file_fdw

Ditto on this.

Alright, I've marked myself as a reviewer for the ones I'll look at in
the next couple days.  The others are up for grabs for others, any
takers on additional reviewers for them?

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-09 Thread Tim Bunce
On Tue, Feb 08, 2011 at 09:40:38AM -0500, Andrew Dunstan wrote:
 On 02/03/2011 01:20 PM, Andrew Dunstan wrote:
 
 Well, the question seems to be whether or not it's a reasonable
 price to pay. On the whole I'm inclined to think it is, especially
 when it can be avoided by updating your code, which will be a
 saving in fragility and complexity as well.
 
 do you still have concerns about this, or are you happy for us to
 move ahead on it?

[I'm not really paying close enough attention for you to put much weight
on my opinions, but...]

I can't see any major issues so I'm happy for you to move ahead.

Thanks!

Tim.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread A.M.

On Feb 9, 2011, at 12:25 PM, Robert Haas wrote:

 On Wed, Feb 9, 2011 at 10:38 AM, Markus Wanner mar...@bluegap.ch wrote:
 On 02/09/2011 04:16 PM, David Fetter wrote:
 On Tue, Feb 08, 2011 at 09:09:48PM -0500, Robert Haas wrote:
 Frankly, I think this is an example of how our current shared memory
 model is a piece of garbage.
 
 What other model(s) might work better?
 
 Thread based, dynamically allocatable and resizeable shared memory, as
 most other projects and developers use, for example.
 
 Or less invasively, a small sysv shm to prevent the double-postmaster
 problem, and allocate the rest using POSIX shm.

Such a patch was proposed and rejected:
http://thread.gmane.org/gmane.comp.db.postgresql.devel.general/94791
Cheers,
M

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] arrays as pl/perl input arguments [PATCH]

2011-02-09 Thread Alex Hunsaker
On Wed, Feb 9, 2011 at 08:24, Alexey Klyukin al...@commandprompt.com wrote:

 What was actually broken in encode_array_literal's support of composite types
 (it converted perl hashes to literal composite-type constants, expanding
 nested arrays along the way)?  I think it would be a useful extension of the
 existing encode_array_literal.

Yeah, it does not work because it did not take into account the order
of composite columns. It always put them alphabetically by column
name. To do it properly we would need to pass in a typid or a column
order or something. Ideally we could expose the new
plperl_array_to_datum() to plperl functions in some manner.


Here is a longer perhaps more concrete example:

Imagine you have a composite type with two 'columns':
= create type foo as (z int, a int);
= create or replace function foo_pl(foo[]) returns foo[] as $$
my $arg = shift;
$$ language plperl;
= select foo_pl('{"(1,2)", "(3,4)"}');

In the above, $arg looks something like (ignoring the
PostgreSQL::InServer::ARRAY object) [{'a'=>2, 'z'=>1}, {'a'=>4,
'z'=>3}]. When we call encode_array_literal() we need to put it back
into composite literal form, which is basically (ignoring the array)
"(column_z, column_a)". However, without type information we don't
know the order of the columns, and as the composite is represented as a
hash we get kind of stuck. The hack I did sorted the hash keys
alphabetically, which worked for the regression tests as they happened
to have their composite columns sorted alphabetically. But it would break
for this example, putting $arg->[0]{a} into z and $arg->[0]{z} into a.
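
To make the ordering concrete, a small sketch with the type above:

    -- For create type foo as (z int, a int), the literal '(1,2)'
    -- means z = 1 and a = 2; emitting the hash keys alphabetically
    -- would produce '(2,1)' and silently swap the columns.
    SELECT ('(1,2)'::foo).z, ('(1,2)'::foo).a;   -- returns 1, 2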

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patches that could use additional reviewers

2011-02-09 Thread Bernd Helmle



--On 9 February 2011 13:45:11 -0500 Robert Haas robertmh...@gmail.com 
wrote:



Of the fourteen I signed up for, 10 are now marked Committed or
Returned with Feedback.  Of the remaining four, there are two that
could use more eyes:


I'd happily jump in and look into one of those, but before the middle of next
week I really have no spare time to come up with something :(


--
Thanks

Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 2:01 PM, David E. Wheeler da...@kineticode.com wrote:
 ha ha! Alas, I'm completely overcommitted at this point. Been having a hard 
 time making time for PGXN. I've been tracking the extension stuff closely, 
 though, as you can imagine.

It's a common problem, and of course none of us are in a position to
dictate how other people spend their time.  But the issue on the table
is whether we want PostgreSQL 9.1 to be released in 2011.  If yes,
then without making any statements about what any particular person
has to or must do, we collectively need to step it up a notch or two.

 Looking at the patches without reviewers anyway, frankly none look like the 
 sorts of things I have the expertise to test in any but the most superficial 
 way. Are there more that should have the reviewer removed? If there were one 
 I could give a couple of hours to and speak with some knowledge, I could fix 
 up some time next week.

I just sent a note on some that seem like they could use more looking
at, but there may be other ones too.  Now is not the time to hold back
because you think someone else might be working on it.  Most of the
time, the fact that a patch has a reviewer means that they either
intended to or actually did review it at some point in time, but not
that they are necessarily working on it right this minute, and
certainly not that other input isn't welcome.  This is especially true
towards the end of the CommitFest or when the thread hasn't had
anything new posted to it for several days.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 2:16 PM, A.M. age...@themactionfaction.com wrote:
 On Feb 9, 2011, at 12:25 PM, Robert Haas wrote:
 On Wed, Feb 9, 2011 at 10:38 AM, Markus Wanner mar...@bluegap.ch wrote:
 On 02/09/2011 04:16 PM, David Fetter wrote:
 On Tue, Feb 08, 2011 at 09:09:48PM -0500, Robert Haas wrote:
 Frankly, I think this is an example of how our current shared memory
 model is a piece of garbage.

 What other model(s) might work better?

 Thread based, dynamically allocatable and resizeable shared memory, as
 most other projects and developers use, for example.

 Or less invasively, a small sysv shm to prevent the double-postmaster
 problem, and allocate the rest using POSIX shm.

 Such a patch was proposed and rejected:
 http://thread.gmane.org/gmane.comp.db.postgresql.devel.general/94791

I know.  We need to revisit that for 9.2 and un-reject it.  It's nice
that PostgreSQL can run on my thermostat, but it isn't nice that
that's the only place where it delivers the expected level of
performance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PERFORM] pgbench to the MAXINT

2011-02-09 Thread Stephen Frost
Greg,

* Greg Smith (g...@2ndquadrant.com) wrote:
 I took that complexity out and just put a hard line
 in there instead:  if scale >= 20000, you get bigints.  That's not
 very different from the real limit, and it made documenting when the
 switch happens easy to write and to remember.

Agreed completely on this.

 It turns out that even though I've been running an i386 Linux on
 here, it's actually a 64-bit CPU.  (I think the fact that it has a 32-bit
 install may be an artifact of Adobe Flash install issues, sadly.)  So
 this may not be as good a test case as I'd hoped.  

Actually, I would think it'd still be sufficient..  If you're under a
32bit kernel you're not going to be using the extended registers, etc,
that would be available under a 64bit kernel..  That said, the idea that
we should care about 32-bit systems these days, in a benchmarking tool,
is, well, silly, imv.

 1) A look into the expected range of the rand() function suggests
 the glibc implementation normally provides 30 bits of resolution, so
 about 1 billion numbers.  You'll have 1B rows in a pgbench database
 once the scale goes over 10,000.  So without a major overhaul of how
 random number generation is treated here, people can expect the
 distribution of rows touched by a test run to get less even once the
 database scale gets very large.  

Just wondering, did you consider just calling random() twice and
smashing the result together..?
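
(Roughly this, sketched here in SQL for clarity; pgbench would of
course do the equivalent bit-twiddling in C:)

    -- combine two ~30-bit random values into one ~60-bit value;
    -- 1073741824 = 2^30
    SELECT (floor(random() * 1073741824)::bigint << 30)
         | floor(random() * 1073741824)::bigint AS big_rand;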

 I added another warning paragraph
 to the end of the docs in this update to mention this.  Long-term, I
 suspect we may need to adopt a superior 64-bit RNG approach,
 something like a Mersenne Twister perhaps.  That's a bit more than
 can be chewed on during 9.1 development though.

I tend to agree that we should be able to improve the random number
generation in the future.  Additionally, imv, we should be able to say
pg_bench version X isn't comparable to version Y in the release notes
or something, or have separate version #s for it which make it clear
what can be compared to each other and what can't.  Painting ourselves
into a corner by saying we can't ever make pgbench generate results that
can't be compared to every other released version of pgbench just isn't
practical.

 2) I'd rate odds are good there's one or more corner-case bugs in
 \setrandom or \setshell I haven't found yet, just from the way that
 code was converted.  Those have some changes I haven't specifically
 tested exhaustively yet.  I don't see any issues when running the
most common two pgbench tests, but that doesn't mean every part of
that 32 -> 64 bit conversion was done correctly.

I'll take a look. :)

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Markus Wanner
On 02/09/2011 06:25 PM, Robert Haas wrote:
 On Wed, Feb 9, 2011 at 10:38 AM, Markus Wanner mar...@bluegap.ch wrote:
 Thread based, dynamically allocatable and resizeable shared memory, as
 most other projects and developers use, for example.

I didn't mean to say we should switch to that model.  It's just *the*
other model that works (whether or not it's better in general or for
Postgres is debatable).

 Or less invasively, a small sysv shm to prevent the double-postmaster
 problem, and allocate the rest using POSIX shm.

..which allows ftruncate() to resize, right?  That's the main benefit
over sysv shm which we currently use.

ISTM that addresses the resizing-of-the-overall-shared-memory question,
but doesn't that require dynamic allocation or some other kind of
book-keeping?  Or do you envision all subsystems to have to
re-initialize their new (grown or shrunken) chunk of it?

Regards

Markus Wanner

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSI patch version 14

2011-02-09 Thread Robert Haas
On Wed, Feb 9, 2011 at 2:51 PM, Markus Wanner mar...@bluegap.ch wrote:
 On 02/09/2011 06:25 PM, Robert Haas wrote:
 On Wed, Feb 9, 2011 at 10:38 AM, Markus Wanner mar...@bluegap.ch wrote:
 Thread based, dynamically allocatable and resizeable shared memory, as
 most other projects and developers use, for example.

 I didn't mean to say we should switch to that model.  It's just *the*
 other model that works (whether or not it's better in general or for
 Postgres is debatable).

 Or less invasively, a small sysv shm to prevent the double-postmaster
 problem, and allocate the rest using POSIX shm.

 ..which allows ftruncate() to resize, right?  That's the main benefit
 over sysv shm which we currently use.

 ISTM that addresses the resizing-of-the-overall-shared-memory question,
 but doesn't that require dynamic allocation or some other kind of
 book-keeping?  Or do you envision all subsystems to have to
 re-initialize their new (grown or shrunken) chunk of it?

Basically, I'd be happy if all we got out of it was freedom from the
oppressive system shared memory limits.   On a modern system, it's
hard to imagine that the default for shared_buffers should be less
than 256MB, but that blows out the default POSIX shared memory
allocation limits on every operating system I use, and some of those
need a reboot to fix it.  That's needlessly reducing performance and
raising the barrier of entry for new users.  I am waiting for the day
when I have to explain to the guy with a terabyte of memory that the
reason why his performance sucks so bad is because he's got a 16MB
buffer cache.  The percentage of memory we're allocating to
shared_buffers should not need to be expressed in scientific notation.

But once we get out from under that, I think there might well be some
advantage to have certain subsystems allocate their own segments,
and/or using ftruncate() for resizing.  I don't have a concrete
proposal in mind, though.  It's very much non-trivial to resize
shared_buffers, for example, even if you assume that the size of the
shm can easily be changed.  So I don't expect quick progress on this
front; but it would be nice to have those options available.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] query execution question

2011-02-09 Thread amit sehas
We are making some performance measurements, we are trying to determine query 
execution behavior. Lets say we have 4 tables T1, T2, T3 and T4

and the query has the form:

select * from T1, T2, T3, T4 where (T1.a = T2.b and T2.c = T3.d
and T3.e = T4.f)

where a,b,c,d,e,f are properties of the respective tables.

Lets say that the cost based optimizer determines that the order of the joins 
should be   T1.a=T2.b followed by T2.c = T3.d followed by T3.e = T4.f

the question we have is: during query execution, are the joins evaluated 
completely one by one in that order, or is the first join evaluated completely, 
generating an intermediate table which is then utilized to perform the next 
join...this means that for such a query we would need space for all the 
intermediate tables, which, if they are very large tables as they are in our 
case, can significantly alter the cost of the operations...
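
(For reference, the plan for such a query can be inspected directly; a
sketch, using the tables from the question:)

    EXPLAIN
    SELECT * FROM T1, T2, T3, T4
     WHERE T1.a = T2.b AND T2.c = T3.d AND T3.e = T4.f;

    -- The resulting plan tree is executed in pipelined, demand-pull
    -- fashion: each join node fetches rows from its children as
    -- needed, so full intermediate result tables are generally not
    -- materialized (sort and hash nodes are the main exceptions).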

thanks

-Ashish



  

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postponing some large patches to 9.2

2011-02-09 Thread Chris Browne
pg...@j-davis.com (Jeff Davis) writes:
 On Tue, 2011-02-08 at 15:10 -0500, Chris Browne wrote:
 It's more than a bit sad...  The RangeType change has the massive merit
 of enabling some substantial development changes, where we can get rid
 of whole classes of comparison clauses, and hopefully whole classes of
 range errors.  That was my favorite would-be feature for 9.1.

 I appreciate the support.

 If you take the feature for a quick spin before the next commitfest,
 that would be a big help. If I get it in the first commitfest of 9.2
 that may mean some follow-up features, like RANGE KEYs/FKs, and maybe
 even RANGE JOIN might have a chance for 9.2 as well. Or, maybe some
 other features might find it useful, like partitioning or audit logs.

I've found my wish item...  I wish that queries could expand ranges in
much the same fashion that BETWEEN expands into two query nodes.

That way, you can use a range to pick data from a large table, and not
revert to a Seq Scan+Filter, which is what I'm seeing for the following
sort of query:

   select * from some_data where '[2010-01-01,2010-02-01)'::daterange @> whensit;
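
That is, the hoped-for expansion is essentially the btree-friendly form
shown below:

    select * from some_data
     where whensit >= '2010-01-01' and whensit < '2010-02-01';
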
-- 
let name="cbbrowne" and tld="gmail.com" in String.concat "@" [name;tld];;
http://linuxfinances.info/info/lsf.html
Rules of the Evil Overlord  #162. If I steal something very important
to the hero, I will not put it on public display.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Range Types - efficiency

2011-02-09 Thread Chris Browne
One of the things I'd particularly like to use range types for is to
make it easier to construct range-related queries.  Classic example is
that of reports that work on date ranges.

I create a table that will have transaction data:

CREATE TABLE some_data (
id serial,
whensit date
-- And it'll have other attributes, but those don't matter here...
);
CREATE INDEX some_when ON some_data USING btree (whensit);

I then populate it with a bunch of date-based data...

rangetest@localhost-  select count(*), min(whensit), max(whensit) from 
some_data;
 count |    min     |    max     
-------+------------+------------
 37440 | 2007-01-01 | 2014-12-27
(1 row)

Here's the traditional way of doing a range-based query on this data:

rangetest@localhost-  explain analyze  select * from some_data where whensit 
>= '2010-01-01' and whensit < '2010-02-01';
  QUERY PLAN
---------------------------------------------------------------------------------
 Bitmap Heap Scan on some_data  (cost=12.30..184.23 rows=395 width=8) (actual 
time=0.064..0.150 rows=390 loops=1)
   Recheck Cond: ((whensit >= '2010-01-01'::date) AND (whensit < 
'2010-02-01'::date))
   ->  Bitmap Index Scan on some_when  (cost=0.00..12.21 rows=395 width=0) 
(actual time=0.054..0.054 rows=390 loops=1)
 Index Cond: ((whensit >= '2010-01-01'::date) AND (whensit < 
'2010-02-01'::date))
 Total runtime: 0.197 ms
(5 rows)

The RangeType-based equivalent is the following:

rangetest@localhost-  explain analyze select * from some_data where 
'[2010-01-01,2010-02-01)'::daterange @> whensit;
   QUERY PLAN
--------------------------------------------------------------------------------
 Seq Scan on some_data  (cost=0.00..634.00 rows=1 width=8) (actual 
time=1.045..111.739 rows=390 loops=1)
   Filter: ('[ 2010-01-01, 2010-02-01 )'::daterange @> whensit)
 Total runtime: 111.780 ms
(3 rows)

This, alas, reverts to a seq scan on the table, rather than restricting
itself to the tuples of interest.

I realize that, after a fashion, I'm using this backwards.  But when I'm
doing temporal stuff, that tends to be the pattern:

 - There is a set of temporal configuration, indicating criteria that
   are true for particular date ranges

 - There is then event data, which has but a single date, but which
   needs to be matched against the temporal configuration.

It sure would be nice to expand that filter into subqueries involving
the two criteria, in much the same fashion that is true today for
BETWEEN.  I imagine that would allow many queries with this kind of
pattern to make use of indexes, making them visibly thousands of times
faster.
-- 
I have  traveled the  length and breadth  of this country  and talked
with the best people, and can assure you that data processing is a fad
that won't  last out  the year.  --  Business books  editor, Prentice
Hall 1957

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pl/python explicit subtransactions

2011-02-09 Thread Peter Eisentraut
On tis, 2011-02-08 at 00:32 -0500, Steve Singer wrote:
 On 11-02-06 11:40 AM, Jan Urbański wrote:
 
  PFA an updated patch with documentation.
 
  Yeah, changed them.
 
 Those changes look fine.  The tests now pass.
 
 I've attached a new version of the patch that fixes a few typos/wording 
 issues I saw in the documentation.  I also changed the link to the 
 python reference manual section on context managers. I think it is 
 better to link to that versus the original PEP.
 
 The documentation could probably still use more word-smithing but that 
 can happen later.  I'm marking this as ready for a committer.

Is it necessarily a good idea that an explicit subtransaction disables
the implicit sub-subtransactions?  It might be conceivable that you'd
still want to do some try/catch within explicit subtransactions.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types - efficiency

2011-02-09 Thread Jeff Davis
On Wed, 2011-02-09 at 16:20 -0500, Chris Browne wrote: 
 rangetest@localhost-  explain analyze select * from some_data where 
 '[2010-01-01,2010-02-01)'::daterange @> whensit;
QUERY PLAN
 -
  Seq Scan on some_data  (cost=0.00..634.00 rows=1 width=8) (actual 
 time=1.045..111.739 rows=390 loops=1)
    Filter: ('[ 2010-01-01, 2010-02-01 )'::daterange @> whensit)
  Total runtime: 111.780 ms
 (3 rows)
 
 This, alas, reverts to a seq scan on the table, rather than restricting
 itself to the tuples of interest.
 
 I realize that, after a fashion, I'm using this backwards.  But when I'm
 doing temporal stuff, that tends to be the pattern:

Yes. The index is a btree index on a normal column, so range types can't
exactly help with that directly -- except maybe as a rewrite like you
say.

One thing you might try is a functional index on (range(whensit)) and
then do: where '...' @> range(whensit).

Does that work for you?

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types - efficiency

2011-02-09 Thread Chris Browne
pg...@j-davis.com (Jeff Davis) writes:
 On Wed, 2011-02-09 at 16:20 -0500, Chris Browne wrote: 
 rangetest@localhost-  explain analyze select * from some_data where 
  '[2010-01-01,2010-02-01)'::daterange @> whensit;
QUERY PLAN
 -
  Seq Scan on some_data  (cost=0.00..634.00 rows=1 width=8) (actual 
 time=1.045..111.739 rows=390 loops=1)
    Filter: ('[ 2010-01-01, 2010-02-01 )'::daterange @> whensit)
  Total runtime: 111.780 ms
 (3 rows)
 
 This, alas, reverts to a seq scan on the table, rather than restricting
 itself to the tuples of interest.
 
 I realize that, after a fashion, I'm using this backwards.  But when I'm
 doing temporal stuff, that tends to be the pattern:

 Yes. The index is a btree index on a normal column, so range types can't
 exactly help with that directly -- except maybe as a rewrite like you
 say.

 One thing you might try is a functional index on (range(whensit)) and
  then do: where '...' @> range(whensit).

 Does that work for you?

That doesn't appear to actually help:

rangetest@localhost-  create index i2 on some_data (range(whensit));
CREATE INDEX
rangetest@localhost-  explain analyze select * from some_data where 
'[2010-01-01,2010-02-01)'::daterange @> range(whensit);
 QUERY PLAN
-
 Seq Scan on some_data  (cost=0.00..727.60 rows=12480 width=8) (actual 
time=1.030..110.542 rows=390 loops=1)
   Filter: ('[ 2010-01-01, 2010-02-01 )'::daterange @> range(whensit))
 Total runtime: 110.585 ms
(3 rows)

In any case, I suggest that, a couple of steps down the road, it
would be desirable to have that query rewrite.  It seems like a reasonable
ToDo item to consider for the future, if not in the first deployment.

Maybe that's something to add in 9.2 CommitFest #3! :-)
-- 
There  isn't  any  reason  why  Linux  can't  be  implemented  as  an
enterprise  computing solution.   Find  out what  you've been  missing
while you've been rebooting Windows NT. - Infoworld

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] query execution question

2011-02-09 Thread Nicolas Barbier
2011/2/9 amit sehas cu...@yahoo.com:

 Let's say that the cost-based optimizer determines that the order of the
 joins should be T1.a = T2.b followed by T2.c = T3.d followed by T3.e = T4.f.

 the question we have is: during query execution, are the joins evaluated
 completely one by one in that order, or is the first join evaluated
 completely, generating an intermediate table which is then utilized
 to perform the next join... this means that for such a query we will need
 space for all the intermediate tables, which, if they are very large tables
 as they are in our case, can significantly alter the cost of the operations...

[ This is a question more appropriate for pgsql-performance. ]

The optimizer doesn't only determine the order (or tree actually) in
which to perform the joins, but also how to perform them: nested loop,
merge, or hash join. Depending on those physical join types, something
might need to be materialized (merge: the intermediate sort tapes;
hash: the full inner operand's contents) or not (nested loop).

Please see the EXPLAIN statement if you want to know how the query
would be executed.
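
For instance (a sketch; the table and column names are hypothetical,
mirroring the join conditions you described):

EXPLAIN
SELECT *
  FROM t1
  JOIN t2 ON t1.a = t2.b
  JOIN t3 ON t2.c = t3.d
  JOIN t4 ON t3.e = t4.f;

The plan tree shows the join order, the physical join types chosen, and
any Materialize, Sort, or Hash nodes in between.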

Nicolas

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] query execution question

2011-02-09 Thread Kevin Grittner
amit sehas cu...@yahoo.com wrote:
 
 We are making some performance measurements, trying to
 determine query execution behavior.
 
I hope you're not doing this on an un-tuned server or toy tables. 
There are a number of configuration parameters which should be tuned
for your particular server, which will affect the plans used on that
server.  Also, the plans the optimizer will choose for small tables
are generally quite different from those chosen for large tables.
 
 the question we have is: during query execution, are the joins
 evaluated completely one by one in that order, or is the first join
 evaluated completely, generating an intermediate table which
 is then utilized to perform the next join... this means that for
 such a query we will need space for all the intermediate tables,
 which, if they are very large tables as they are in our case, can
 significantly alter the cost of the operations...
 
The query planner looks at the estimated cost of various plans, and uses the
one with the lowest estimated cost.  The plan consists of a set of
steps, with each step pulling rows from one or more lower steps; so
the process is driven from the top down -- you try to pull a result
row, and if the top-level step needs something from a lower step it
tries to pull that, etc.  Some plans involve materializing a set
of rows, some pull through indexes or heap scans as rows are
requested from them.  Depending on the query, the indexes, the table
sizes, etc., a SELECT statement with joins like you describe might
use nested index joins and never have more than a few rows in RAM at
a time without ever writing anything to disk.
 
By the way, this is the wrong list for this question.  The -hackers
list is for discussion about work on developing the product. 
Questions like this about how it works are better posted to the
-general or -novice list.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Typed-tables patch broke pg_upgrade

2011-02-09 Thread Tom Lane
I find that pg_upgrade fails in HEAD when asked to do a 9.1-to-9.1
upgrade of the regression database.  It gets to this bit of the
restore script:

CREATE TABLE test_tbl2 OF public.test_type2;

-- For binary upgrade, recreate dropped column.
UPDATE pg_catalog.pg_attribute
SET attlen = -1, attalign = 'i', attbyval = false
WHERE attname = '........pg.dropped.2........'
  AND attrelid = 'test_tbl2'::pg_catalog.regclass;
ALTER TABLE ONLY test_tbl2 DROP COLUMN "........pg.dropped.2........";

and fails with 

ERROR:  cannot drop column from typed table

which probably is because test_type2 has a dropped column.

Somebody has failed to think through something, because if this state of
affairs was allowed to be created during the regression tests, why
should we not be able to restore it?

(pg_upgrade's ENUM support is broken too, but at least that one is a
one-line fix.)

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] new clang report

2011-02-09 Thread Greg Stark
On Wed, Feb 9, 2011 at 6:30 PM, Peter Eisentraut pete...@gmx.net wrote:
 The latest clang svn tip (2.9-to-be, I guess) builds PostgreSQL out of
 the box and most tests pass.  Specifically, it no longer chokes on
 -D_GNU_SOURCE on Linux, which was the previously reported blocker.

Odd, I tried the same thing just a couple days ago and reported two bugs:

Bug 9161 (Linux, unassigned, NEW): False uninitialized warning due to
control-dependency of flag

Bug 9152 (Linux, unassigned, NEW): File takes 1 minute to compile, much
longer than with gcc, or other similar files with llvm

The latter is much better on svn head than the version I reported it
on but it's still a problem. It took 16s to compile with svn head. Was
the first one recently fixed?


-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pl/python explicit subtransactions

2011-02-09 Thread Steve Singer

On 11-02-09 05:22 PM, Peter Eisentraut wrote:

On tis, 2011-02-08 at 00:32 -0500, Steve Singer wrote:

On 11-02-06 11:40 AM, Jan Urbański wrote:


PFA an updated patch with documentation.
Yeah, changed them.

Those changes look fine.  The tests now pass.

I've attached a new version of the patch that fixes a few typos/wording
issues I saw in the documentation.  I also changed the link to the
python reference manual section on context managers. I think it is
better to link to that versus the original PEP.

The documentation could probably still use more word-smithing but that
can happen later.  I'm marking this as ready for a committer.

Is it necessarily a good idea that an explicit subtransaction disables
the implicit sub-subtransactions?  It might be conceivable that you'd
still want to do some try/catch within explicit subtransactions.




I had tested nested subtransactions but not a normal try/catch within a 
subtransaction.  That sounds reasonable to allow.


Unfortunately it leads to:


test=# create table foo(a int4 primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"foo_pkey" for table "foo"

CREATE TABLE
test=# DO $$
test$# try:
test$#   with plpy.subtransaction():
test$# plpy.execute("insert into foo values(1)")
test$# try:
test$#   plpy.execute("insert into foo values(1)")
test$# except:
test$#   plpy.notice('inside exception')
test$# except plpy.SPIError:
test$#   f=0
test$# $$ language plpythonu;
TRAP: FailedAssertion(!(afterTriggers->query_depth ==
afterTriggers->depth_stack[my_level]), File: "trigger.c", Line: 3846)

NOTICE:  inside exception
CONTEXT:  PL/Python anonymous code block
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] another mvcc.sgml typo

2011-02-09 Thread Kevin Grittner
Trivial patch attached.
 
-Kevin

*** a/doc/src/sgml/mvcc.sgml
--- b/doc/src/sgml/mvcc.sgml
***************
*** 604,610 ****
  ERROR:  could not serialize access due to read/write dependencies among transactions
  Consistent use of Serializable transactions can simplify development.
  The guarantee that any set of concurrent serializable transactions will
  have the same effect as if they were run one at a time means that if
! you can demonstrate that a singe transaction, as written, will do the
  right thing when run by itself, you can have confidence that it will
  do the right thing in any mix of serializable transactions, even without
  any information about what those other transactions might do.  It is
--- 604,610 ----
  Consistent use of Serializable transactions can simplify development.
  The guarantee that any set of concurrent serializable transactions will
  have the same effect as if they were run one at a time means that if
! you can demonstrate that a single transaction, as written, will do the
  right thing when run by itself, you can have confidence that it will
  do the right thing in any mix of serializable transactions, even without
  any information about what those other transactions might do.  It is

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] another mvcc.sgml typo

2011-02-09 Thread Itagaki Takahiro
On Thu, Feb 10, 2011 at 09:30, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Trivial patch attached.

Applied. Thanks!

-- 
Itagaki Takahiro

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Extensions versus pg_upgrade

2011-02-09 Thread Bruce Momjian
Dimitri Fontaine wrote:
 Tom Lane t...@sss.pgh.pa.us writes:
  In any case that would ratchet the priority of ALTER EXTENSION UPGRADE
  back up to a must-have-for-9.1, since pg_upgrade would then leave you
  with a non-upgraded extension.
 
  Now what?
 
 What would be the problem with pg_upgrade acting the same as a
 dump & reload cycle as far as extensions are concerned?  After all those
 can be considered as part of the schema, not part of the data, and the
 system catalogs are upgraded by the tool.
 
 It would then only break user objects that depend on the extension's
 objects OIDs, but that would be the same if they instead recorded the
 OID of catalog entries, right?
 
 So a valid answer for me would be that when you pg_upgrade, the
 extensions are installed again from their scripts.  If you want to go
 further than that, you can insist on having the same version of the
 extension on both sides, but that would defeat the purpose of the tool
 somehow.  After all you asked for an upgrade?

The C comment in pg_upgrade.c explains the problem:

 *  We control all assignments of pg_type.oid because these oids are stored
 *  in user composite type values.

(Wow, I am glad I recorded all these details.)

The problem is that pg_dump --binary-upgrade knows to call
binary_upgrade.set_next_pg_type_oid() before CREATE TYPE (you can test
it yourself to see), and I am afraid we will need to do something like
that in the extension code, perhaps by supporting a --binary-upgrade
flag like we do for pg_dump.  That seems to be the cleanest approach.
A worse approach would be to somehow pass oids to pg_upgrade and have it
renumber things but that seems hopelessly error-prone.
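
For reference, the kind of preamble pg_dump --binary-upgrade already emits
for types looks roughly like this (a sketch; the OID value and the type
definition are made up):

SELECT binary_upgrade.set_next_pg_type_oid('16422'::pg_catalog.oid);
CREATE TYPE public.test_type2 AS (a integer);

An extension script run through a plain CREATE EXTENSION would bypass any
such preamble, which is why the extension code needs an equivalent hook.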

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Sync Rep for 2011CF1

2011-02-09 Thread Fujii Masao
On Wed, Feb 9, 2011 at 5:25 AM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Feb 8, 2011 at 2:34 PM, Magnus Hagander mag...@hagander.net wrote:
 I also agree with the general idea of trying to break it into smaller
 parts - even if they only provide small parts each on it's own. That
 also makes it easier to get an overview of exactly how much is left,
 to see where to focus.

 And on that note, here's the rest of the patch back, rebased over what
 I posted ~90 minutes ago.

Though I haven't read the patch enough yet, I have one review comment.

While walsender uses the non-blocking I/O function (i.e.,
pq_getbyte_if_available)
for the receive, it uses the blocking one (i.e., pq_flush, etc) for the send.
So, sync_rep_timeout_server would not work well when the walsender
gets blocked in sending WAL. This is one of the problems I struggled
with when I created the SyncRep patch before. I think that we need to
introduce the non-blocking send function for the replication timeout.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Move WAL warning

2011-02-09 Thread Fujii Masao
On Wed, Feb 9, 2011 at 7:02 PM, Magnus Hagander mag...@hagander.net wrote:
 On Thu, Feb 3, 2011 at 11:19, Magnus Hagander mag...@hagander.net wrote:
 On Wed, Feb 2, 2011 at 18:00, Magnus Hagander mag...@hagander.net wrote:
 On Wed, Feb 2, 2011 at 17:43, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 On 02.02.2011 16:36, Magnus Hagander wrote:

 When running pg_basebackup with -x to include all transaction log, the
 server will still throw a warning about xlog archiving if it's not
 enabled - that is completely irrelevant since pg_basebackup has
 included it already (and if it was gone, the base backup step itself
 will fail - actual error and not warning).

 This patch moves the warning from do_pg_base_backup to pg_base_backup,
 so it still shows when using the explicit function calls, but goes
 away when using pg_basebackup.

 For the sake of consistency, how about moving the "pg_stop_backup complete,
 all required WAL segments have been archived" notice too?

  Well, it goes out as a NOTICE, so by default it doesn't show... But
 yeah, for code-consistency it makes sense. Like so, then.

 Thinking some more about it, I realized this is not going to be enough
 - we need to be able to turn off the waiting for WAL segment as well,
 in the case when you're streaming the log. Thus, it needs to be
 controllable from the backup client, and we can't just assume the
 default is ok.

 Attached is an updated patch that adds a NOWAIT option to BASE_BACKUP,
 that turns off the waiting. If it's set, it also doesn't warn about
 not being able to wait in the case when there is nothing to wait for,
 so this is a replacement for the previous patch.

 Applied.

Back to your original complaint: when the -x option is specified, shouldn't
pg_basebackup use the NOWAIT option in the BASE_BACKUP command it sends to
the server?
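
That is, something like this at the protocol level (a sketch; the label is
made up, and I am assuming -x maps to the WAL option):

BASE_BACKUP LABEL 'mybackup' WAL NOWAIT;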

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Typed-tables patch broke pg_upgrade

2011-02-09 Thread Bruce Momjian
Tom Lane wrote:
 I find that pg_upgrade fails in HEAD when asked to do a 9.1-to-9.1
 upgrade of the regression database.  It gets to this bit of the
 restore script:
 
 CREATE TABLE test_tbl2 OF public.test_type2;
 
 -- For binary upgrade, recreate dropped column.
 UPDATE pg_catalog.pg_attribute
 SET attlen = -1, attalign = 'i', attbyval = false
 WHERE attname = '........pg.dropped.2........'
   AND attrelid = 'test_tbl2'::pg_catalog.regclass;
 ALTER TABLE ONLY test_tbl2 DROP COLUMN "........pg.dropped.2........";
 
 and fails with 
 
 ERROR:  cannot drop column from typed table
 
 which probably is because test_type2 has a dropped column.
 
 Somebody has failed to think through something, because if this state of
 affairs was allowed to be created during the regression tests, why
 should we not be able to restore it?

I am not aware of this code changing in 9.1.  Was this test in 9.0? 
Does this problem happen for 9.0?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Typed-tables patch broke pg_upgrade

2011-02-09 Thread Peter Eisentraut
On ons, 2011-02-09 at 18:43 -0500, Tom Lane wrote:
 I find that pg_upgrade fails in HEAD when asked to do a 9.1-to-9.1
 upgrade of the regression database.  It gets to this bit of the
 restore script:
 
 CREATE TABLE test_tbl2 OF public.test_type2;
 
 -- For binary upgrade, recreate dropped column.
 UPDATE pg_catalog.pg_attribute
 SET attlen = -1, attalign = 'i', attbyval = false
 WHERE attname = '........pg.dropped.2........'
   AND attrelid = 'test_tbl2'::pg_catalog.regclass;
 ALTER TABLE ONLY test_tbl2 DROP COLUMN "........pg.dropped.2........";
 
 and fails with 
 
 ERROR:  cannot drop column from typed table
 
 which probably is because test_type2 has a dropped column.

It should call

ALTER TYPE test_type2 DROP ATTRIBUTE xyz CASCADE;

instead.  That will propagate to the table.

I'm not sure, though, whether a composite type preserves the dropped
attribute for re-dropping in this case.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Typed-tables patch broke pg_upgrade

2011-02-09 Thread Peter Eisentraut
On ons, 2011-02-09 at 23:16 -0500, Bruce Momjian wrote:
 I am not aware of this code changing in 9.1.  Was this test in 9.0? 
 Does this problem happen for 9.0?

No, because you can't drop anything from a typed table in 9.0.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] DROP SCHEMA xxx CASCADE: ERROR: could not open relation with OID yyy

2011-02-09 Thread Tom Lane
strk s...@keybit.net writes:
 I've finally completed the debugging phase and have
 a minimal self-contained testcase showing the problem.
 It has to do with INITIALLY DEFERRED constraints.

I looked into this and find that the issue is you're trying to drop a
table that has unfired AFTER TRIGGER events pending.  When they finally
fire, they can't find the table anymore.
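
A minimal sketch of that pattern (table names hypothetical):

BEGIN;
CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (a_id int REFERENCES a (id) DEFERRABLE INITIALLY DEFERRED);
INSERT INTO b VALUES (1);   -- queues a deferred AFTER TRIGGER event
DROP TABLE b, a;            -- the tables vanish, the event is still queued
COMMIT;                     -- the event fires and cannot open the relation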

I'm inclined to think that we should disallow that; or even more to the
point, that it'd be a good thing to apply CheckTableNotInUse() when
about to drop a table.  If we disallow such cases for ALTER TABLE, then
a fortiori we should do so for DROP TABLE.

Aside from disallowing unfired trigger events, CheckTableNotInUse would
disallow the table being actively relation_open'd by any operation.
This seems like a real good thing anyway (imagine, eg, DROP TABLE
executed from a trigger for that table).

It's possible that we could handle the unfired-trigger problem by
marking the relevant events AFTER_TRIGGER_DONE, but I'm unconvinced that
it's worth spending effort on.  The relation_open part of it seems
essential even so; you could likely crash the backend with that.

Comments?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: DROP SCHEMA xxx CASCADE: ERROR: could not open relation with OID yyy

2011-02-09 Thread Noah Misch
On Thu, Feb 10, 2011 at 12:03:49AM -0500, Tom Lane wrote:
 strk s...@keybit.net writes:
  I've finally completed the debugging phase and have
  a minimal self-contained testcase showing the problem.
  It has to do with INITIALLY DEFERRED constraints.
 
 I looked into this and find that the issue is you're trying to drop a
 table that has unfired AFTER TRIGGER events pending.  When they finally
 fire, they can't find the table anymore.
 
 I'm inclined to think that we should disallow that; or even more to the
 point, that it'd be a good thing to apply CheckTableNotInUse() when
 about to drop a table.  If we disallow such cases for ALTER TABLE, then
 a fortiori we should do so for DROP TABLE.
 
 Aside from disallowing unfired trigger events, CheckTableNotInUse would
 disallow the table being actively relation_open'd by any operation.
 This seems like a real good thing anyway (imagine, eg, DROP TABLE
 executed from a trigger for that table).

+1.  We even do it for TRUNCATE, so surely it's proper for DROP.

 It's possible that we could handle the unfired-trigger problem by
 marking the relevant events AFTER_TRIGGER_DONE, but I'm unconvinced that
 it's worth spending effort on.

Seems rare enough not to worry much about, particularly considering the SET
CONSTRAINTS escape hatch.
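
For the archives, the escape hatch applied to a sketch like the one
upthread (table names hypothetical) would be:

BEGIN;
CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (a_id int REFERENCES a (id) DEFERRABLE INITIALLY DEFERRED);
INSERT INTO b VALUES (1);
SET CONSTRAINTS ALL IMMEDIATE;  -- the FK check fires here, while the table
                                -- still exists, raising a normal constraint
                                -- violation instead of the OID failure
DROP TABLE b, a;
COMMIT;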

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [COMMITTERS] pgsql: Update docs on building for Windows to accomodate current realit

2011-02-09 Thread Gurjeet Singh
This commit refers to www.mingw64.org which does not exist.

Also, clicking on the gitweb link below (from GMail) opens the browser
window with an address where ';' is replaced with %3B, which leads to "404
- no such project". Is GMail broken, or can we have gitweb treat %3B as a ';'?

Regards,

On Mon, Jan 31, 2011 at 1:48 PM, Andrew Dunstan and...@dunslane.net wrote:

 Update docs on building for Windows to accomodate current reality.

 Document how to build 64 bit Windows binaries using the MinGW64 tool set.
 Remove recommendation against using Mingw as a build platform.
 Be more specific about when Cygwin is useful and when it's not,  in
 particular note its usefulness for running psql, and
 add a note about building on Cygwin in non-C locales.

 Per recent discussions.

 Branch
 --
 master

 Details
 ---

 http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=51be78b09a83b8d533e4a9f81cf9a7f2edde6654

 Modified Files
 --
 doc/src/sgml/install-windows.sgml |   21 -
 doc/src/sgml/installation.sgml|   20 +++-
 2 files changed, 35 insertions(+), 6 deletions(-)


 --
 Sent via pgsql-committers mailing list (pgsql-committ...@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-committers




-- 
gurjeet.singh
@ EnterpriseDB - The Enterprise Postgres Company
http://www.EnterpriseDB.com

singh.gurjeet@{ gmail | yahoo }.com
Twitter/Skype: singh_gurjeet

Mail sent from my BlackLaptop device


Re: [HACKERS] Varchar and binary protocol

2011-02-09 Thread Noah Misch
On Sat, Feb 05, 2011 at 10:59:45PM +0100, Radosław Smogura wrote:
 I do performance tests against the original JDBC driver and my version in
 binary and in text mode. I saw strange results when I was reading varchar
 values. Here is some output from a simple benchmark:
 
 Plain strings speed   Execution: 8316582, local: 2116608, all: 10433190
 Binary strings speed  Execution: 9354613, local: 2755949, all: 12110562
 Text NG strings speed Execution: 8346902, local: 2704242, all: 11051144
 
 Plain is the standard JDBC driver, Binary is my version with binary
 transfer, Text is my version with normal transfer. The first column,
 Execution, is time spent on query execution; this includes sending and
 receiving the proto messages, storing them, etc., with no conversion to
 output format. Values are in nanoseconds.
 
 In the new version I added some functionality, but the routines that read
 parts in the Execution block are almost the same for binary and text.
 
 But as you see the binary version is 10-20% slower than the original and
 than my text version, and if I increase the number of records read this
 proportion does not change. I did many checks, even against a driver that
 skips the proto message content; the end result was the same 10-20%
 slower.

Comparing COPY tbl(varchar_col) TO '/dev/null' to COPY tbl(varchar_col) TO
'/dev/null' WITH BINARY gives a better sense of the situation.  Your data could
have reflected a backend performance problem, but it could just as well have
arisen from your client-side changes.  (This thread also really belongs on
pgsql-performance.  See http://wiki.postgresql.org/wiki/SlowQueryQuestions)

I can reproduce a 20% slowdown using the test case I mentioned above.  I didn't
investigate much further.
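
For anyone who wants to repeat it, the comparison was simply the following,
timed with \timing in psql (table and column names hypothetical):

COPY tbl (varchar_col) TO '/dev/null';
COPY tbl (varchar_col) TO '/dev/null' WITH BINARY;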

Thanks,
nm

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Range Types - efficiency

2011-02-09 Thread Jeff Davis
On Wed, 2011-02-09 at 18:07 -0500, Chris Browne wrote:
 rangetest@localhost-  create index i2 on some_data (range(whensit));
 CREATE INDEX

If you make this a GiST index, it should work.
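
That is, something like this (a sketch, assuming the patch's range()
function and daterange type):

CREATE INDEX i2 ON some_data USING gist (range(whensit));
EXPLAIN ANALYZE
SELECT * FROM some_data
 WHERE '[2010-01-01,2010-02-01)'::daterange @> range(whensit);

A btree opclass has no support for the containment operator, which is why
the plain expression index was never considered for this query.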

The rewrites so that it can use a btree are an interesting idea though.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Varchar and binary protocol

2011-02-09 Thread Merlin Moncure
On Sat, Feb 5, 2011 at 4:59 PM, Radosław Smogura
rsmog...@softperience.eu wrote:
 Hi,

 I do performance tests against the original JDBC driver and my version in
 binary and in text mode. I saw strange results when I was reading varchar
 values. Here is some output from a simple benchmark:

 Plain strings speed   Execution: 8316582, local: 2116608, all: 10433190
 Binary strings speed  Execution: 9354613, local: 2755949, all: 12110562
 Text NG strings speed Execution: 8346902, local: 2704242, all: 11051144

 Plain is the standard JDBC driver, Binary is my version with binary
 transfer, Text is my version with normal transfer. The first column,
 Execution, is time spent on query execution; this includes sending and
 receiving the proto messages, storing them, etc., with no conversion to
 output format. Values are in nanoseconds.

 In the new version I added some functionality, but the routines that read
 parts in the Execution block are almost the same for binary and text.

 But as you see the binary version is 10-20% slower than the original and
 than my text version, and if I increase the number of records read this
 proportion does not change. I did many checks, even against a driver that
 skips the proto message content; the end result was the same 10-20% slower.

Since there is basically zero difference in how *varchar* is handled
in the database for the text or binary protocols (AFAIK, they use the
same code), this is almost certainly an issue with the JDBC driver, or
your benchmark application.

merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Varchar and binary protocol

2011-02-09 Thread Radosław Smogura
Actually, the difference is here:
http://archives.postgresql.org/pgsql-hackers/2011-02/msg00415.php

Merlin Moncure mmonc...@gmail.com Thursday 10 February 2011 08:48:26
 On Sat, Feb 5, 2011 at 4:59 PM, Radosław Smogura
 
 rsmog...@softperience.eu wrote:
  Hi,
  
  I do performance tests against the original JDBC driver and my version in
  binary and in text mode. I saw strange results when I was reading
  varchar values. Here is some output from a simple benchmark:
  
  Plain strings speed   Execution: 8316582, local: 2116608, all: 10433190
  Binary strings speed  Execution: 9354613, local: 2755949, all: 12110562
  Text NG strings speed Execution: 8346902, local: 2704242, all: 11051144
  
  Plain is the standard JDBC driver, Binary is my version with binary
  transfer, Text is my version with normal transfer. The first column,
  Execution, is time spent on query execution; this includes sending and
  receiving the proto messages, storing them, etc., with no conversion
  to output format. Values are in nanoseconds.
  
  In the new version I added some functionality, but the routines that
  read parts in the Execution block are almost the same for binary and
  text.
  
  But as you see the binary version is 10-20% slower than the original
  and than my text version, and if I increase the number of records read
  this proportion does not change. I did many checks, even against a
  driver that skips the proto message content; the end result was the
  same 10-20% slower.
 
 Since there is basically zero difference in how *varchar* is handled
 in the database for the text or binary protocols (AFAIK, they use the
 same code), this is almost certainly an issue with the JDBC driver, or
 your benchmark application.
 
 merlin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers