Re: [HACKERS] SQLDA fix for ECPG

2011-11-14 Thread Boszormenyi Zoltan
On 2011-11-13 17:27, Tom Lane wrote:
 Boszormenyi Zoltan z...@cybertec.at writes:
 I had a report about ECPG code crashing which involved
 a query using a date field. Attached is a one liner fix to make
 the date type's offset computed consistently across
 sqlda_common_total_size(), sqlda_compat_total_size() and
 sqlda_native_total_size().
 Is this really the only issue there?  I notice discrepancies among those
 three routines for some other types too, notably ECPGt_timestamp and
 ECPGt_interval.

   regards, tom lane

Yes, you are right. For timestamp and interval, the safe alignment is int64.
Patch is attached.

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/

--- postgresql-9.1.1/src/interfaces/ecpg/ecpglib/sqlda.c.orig	2011-11-14 08:59:15.118711180 +0100
+++ postgresql-9.1.1/src/interfaces/ecpg/ecpglib/sqlda.c	2011-11-14 09:02:53.787803059 +0100
@@ -127,10 +127,10 @@ sqlda_common_total_size(const PGresult *
 			ecpg_sqlda_align_add_size(offset, sizeof(date), sizeof(date), offset, next_offset);
 			break;
 		case ECPGt_timestamp:
-			ecpg_sqlda_align_add_size(offset, sizeof(int), sizeof(timestamp), offset, next_offset);
+			ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(timestamp), offset, next_offset);
 			break;
 		case ECPGt_interval:
-			ecpg_sqlda_align_add_size(offset, sizeof(int), sizeof(interval), offset, next_offset);
+			ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(interval), offset, next_offset);
 			break;
 		case ECPGt_char:
 		case ECPGt_unsigned_char:
@@ -359,7 +359,7 @@ ecpg_set_compat_sqlda(int lineno, struct
 			sqlda->sqlvar[i].sqllen = sizeof(date);
 			break;
 		case ECPGt_timestamp:
-			ecpg_sqlda_align_add_size(offset, sizeof(timestamp), sizeof(timestamp), offset, next_offset);
+			ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(timestamp), offset, next_offset);
 			sqlda->sqlvar[i].sqldata = (char *) sqlda + offset;
 			sqlda->sqlvar[i].sqllen = sizeof(timestamp);
 			break;
@@ -545,7 +545,7 @@ ecpg_set_native_sqlda(int lineno, struct
 			sqlda->sqlvar[i].sqllen = sizeof(date);
 			break;
 		case ECPGt_timestamp:
-			ecpg_sqlda_align_add_size(offset, sizeof(timestamp), sizeof(timestamp), offset, next_offset);
+			ecpg_sqlda_align_add_size(offset, sizeof(int64), sizeof(timestamp), offset, next_offset);
 			sqlda->sqlvar[i].sqldata = (char *) sqlda + offset;
 			sqlda->sqlvar[i].sqllen = sizeof(timestamp);
 			break;



Re: [HACKERS] Syntax for partitioning

2011-11-14 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 to match the desired granularity of data removal.  I don't really see
 any way that the database can be expected to know what that is, unless
 it's told in advance.  So AFAICS you really have to have a declarative
 way of telling it how to do the partitioning --- it's not going to be
 able to infer that automatically.

Yes, I'm taking that back. Declarative is not the same thing as explicit
partitioning, though; that index-like physical map is declarative too,
e.g.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Syntax for partitioning

2011-11-14 Thread Dimitri Fontaine
Martijn van Oosterhout klep...@svana.org writes:
 While I agree that explicit partitioning is somewhat of a hack, it's a
 really useful hack.  But for me the most important use of partitioning
 is dropping a billion rows efficiently and getting the disk space
 back.  And the biggest problem is always that dropping blocks of a
 table requires fixing all the indexes.

The problem with partitions that are in fact tables is that the indexes are
separate and you can't enforce uniqueness globally across the partition set.

Even with that physical map idea (segment-based partitioning, but
allowing finer control than segments), you could still maintain any
number of partial indexes, yet still use a single primary key, e.g.

 However, in the very special case where the drop boundaries explicitly
 match the dataset, you can simply drop all the indexes.

That's the idea with partial indexes too, right?

 Now, if someone cames up with an efficient way to drop a huge number of
 rows quickly, then I admit one of the major issues is fixed.  But
 recovering the disk space is much harder.  Yes, recent versions of
 Linux come with ways to punch holes in existing files, but that doesn't
 make it quick or efficient.

If you happen to drop a part of the data that fits in one or more
segments (and with a decent fillfactor you need less than 1GB to get
there), then you can unlink() whole files at a time.  That would be the
goal here.
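
For comparison, a minimal sketch (hypothetical table and constraint, not
from this thread's proposals) of how today's inheritance-based partitioning
gets that effect, since dropping a whole child table simply unlinks its files:

CREATE TABLE measurements (logdate date, value numeric);
CREATE TABLE measurements_2011_10
    (CHECK (logdate >= DATE '2011-10-01' AND logdate < DATE '2011-11-01'))
    INHERITS (measurements);

-- dropping the whole month reclaims its disk space immediately
DROP TABLE measurements_2011_10;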

 I hope so, but I'm not sure I'd like partitioning support to wait on
 someone hitting on the right idea.

I would think that's exactly what's been happening to us for several
years already.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Detach/attach database

2011-11-14 Thread Thom Brown
On 13 November 2011 15:26, Simon Riggs si...@2ndquadrant.com wrote:
 On Sun, Nov 13, 2011 at 1:13 PM, Thom Brown t...@linux.com wrote:

 I don't know if this has been discussed before, but would it be
 feasible to introduce the ability to detach and attach databases? (if
you're thinking "stop right there" skip to the end)  What I had in
 mind would be to do something like the following:

 That would be better done at the tablespace level, and then the
 feature becomes transportable tablespaces. Which seems like a good
 and useful idea to me.

I've been trying to think why the tablespace equivalent would be
better but can't see it.  The reason for detaching a database would be
that you want to do something with an entire related set of data.  A
tablespace can contain just indexes, or a few tables from several
databases.

 You may now be able to infer where this notion came from, when someone
 asked if you can clone databases without kicking users off.  However,
 this isn't a schema-only copy, but naturally contains data as well.

 The OP wanted to do this without freezing activity on the database,
 which is not easy...

 OTOH we can do a backup of just a single database and then filter
 recovery at database level to produce just a single copy of another
 database on its own server, if anyone wanted that.

Filtering recovery sounds very tricky to me.  And it's the global
objects part which makes things extra difficult.  But the whole idea I
was proposing sounds riddled with holes anyway, so I'm glad I can at least
put it from my mind.

So can I humbly request we completely re-architect the whole of
PostgreSQL to fit this feature?  Thanks.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



[HACKERS] Group Commit

2011-11-14 Thread Simon Riggs
Enclosed patch implements Group Commit and also powersave mode for WALWriter.

XLogFlush() waits for WALWriter to run XLogBackgroundFlush(), which
flushes WAL and then wakes waiters. Uses same concepts and similar
code to sync rep.

The purpose is to provide consistent WAL writes, even when WALInsertLock is
contended. Currently there is no "off" option; the thinking is that the
overhead of doing this is relatively low and so it can be always on - exactly
as it is for sync rep.

WALWriter now has variable wakeups, so wal_writer_delay is removed.
Commit_delay and Commit_siblings are now superfluous and are also removed.

Works, but needs discussion in some areas, docs and possibly tuning
first, so this is more of a quicky than a slow, comfortable patch.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


group_commit.v2.patch
Description: Binary data



Re: [HACKERS] (PATCH) Adding CORRESPONDING to Set Operations

2011-11-14 Thread Thom Brown
On 25 October 2011 18:49, Kerem Kat kerem...@gmail.com wrote:
 On Mon, Oct 24, 2011 at 20:52, Erik Rijkers e...@xs4all.nl wrote:
 On Wed, October 19, 2011 15:01, Kerem Kat wrote:
 Adding CORRESPONDING to Set Operations
 Initial patch, filename: corresponding_clause_v2.patch

 I had a quick look at the behaviour of this patch.

 Btw, the examples in your email were typoed (one select is missing):

 SELECT 1 a, 2 b, 3 c UNION CORRESPONDING 4 b, 5 d, 6 c, 7 f;
 should be:
  SELECT 1 a, 2 b, 3 c UNION CORRESPONDING select 4 b, 5 d, 6 c, 7 f;

 and

 SELECT 1 a, 2 b, 3 c UNION CORRESPONDING BY(b) 4 b, 5 d, 6 c, 7 f;
 should be:
  SELECT 1 a, 2 b, 3 c UNION CORRESPONDING BY(b) select 4 b, 5 d, 6 c, 7 f;


 Yes you are correct, mea culpa.




 But there is also a small bug, I think: the order in the CORRESPONDING BY 
 list should be followed,
 according to the standard (foundation, p. 408):

 2) If corresponding column list is specified, then let SL be a select 
 list of those column
 names explicitly appearing in the corresponding column list in the order 
 that these
 column names appear in the corresponding column list. Every column 
 name in the
 corresponding column list shall be a column name of both T1 and T2.

 That would make this wrong, I think:

 SELECT 1 a, 2 b, 3 c UNION CORRESPONDING BY(c,b) select 5 d, 6 c, 7 f, 4 b ;

  b | c
 ---+---
  2 | 3
  4 | 6
 (2 rows)

 i.e., I think it should show columns in the order c, b (and not b, c); the 
 order of the
 CORRESPONDING BY phrase.

 (but maybe I'm misreading the text of the standard; I find it often 
 difficult to follow)


 It wasn't a misread, I checked the draft, in my version same
 explanation is at p.410.
 I have corrected the ordering of the targetlists of subqueries. And
 added 12 regression
 tests for column list ordering. Can you confirm that the order has
 changed for you?



 Thanks,


 Erik Rijkers



 Regards,

 Kerem KAT

This explain plan doesn't look right to me:

test=# explain select a,b,c from one intersect corresponding by (a,c)
select a,b,c from two;
                             QUERY PLAN
------------------------------------------------------------------------
 HashSetOp Intersect  (cost=0.00..117.00 rows=200 width=8)
   ->  Append  (cost=0.00..97.60 rows=3880 width=8)
         ->  Subquery Scan on "*SELECT* 3"  (cost=0.00..48.80 rows=1940 width=8)
               ->  Seq Scan on one  (cost=0.00..29.40 rows=1940 width=8)
         ->  Subquery Scan on "*SELECT* 4"  (cost=0.00..48.80 rows=1940 width=8)
               ->  Seq Scan on two  (cost=0.00..29.40 rows=1940 width=8)
(6 rows)

If I do the same thing without the corresponding...:

test=# explain select a,b,c from one intersect select a,b,c from two;
                             QUERY PLAN
------------------------------------------------------------------------
 HashSetOp Intersect  (cost=0.00..126.70 rows=200 width=12)
   ->  Append  (cost=0.00..97.60 rows=3880 width=12)
         ->  Subquery Scan on "*SELECT* 1"  (cost=0.00..48.80 rows=1940 width=12)
               ->  Seq Scan on one  (cost=0.00..29.40 rows=1940 width=12)
         ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..48.80 rows=1940 width=12)
               ->  Seq Scan on two  (cost=0.00..29.40 rows=1940 width=12)
(6 rows)

So it looks like it's now seeing the two tables as the 3rd and 4th
tables, even though there are only 2 tables in total.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] (PATCH) Adding CORRESPONDING to Set Operations

2011-11-14 Thread Kerem Kat
 This explain plan doesn't look right to me:

 test=# explain select a,b,c from one intersect corresponding by (a,c)
 select a,b,c from two;
                                    QUERY PLAN
 ------------------------------------------------------------------------
  HashSetOp Intersect  (cost=0.00..117.00 rows=200 width=8)
    ->  Append  (cost=0.00..97.60 rows=3880 width=8)
          ->  Subquery Scan on "*SELECT* 3"  (cost=0.00..48.80 rows=1940 width=8)
                ->  Seq Scan on one  (cost=0.00..29.40 rows=1940 width=8)
          ->  Subquery Scan on "*SELECT* 4"  (cost=0.00..48.80 rows=1940 width=8)
                ->  Seq Scan on two  (cost=0.00..29.40 rows=1940 width=8)
 (6 rows)

In the current implementation,

select a,b,c from one intersect corresponding by (a,c) select a,b,c from two;

is translated to equivalent

select a, c from (select a,b,c from one)
intersect
select a, c from (select a,b,c from two);

Methinks that's the reason for this explain output.
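
(For illustration only, not part of the patch: under that translation scheme,
the CORRESPONDING BY (c,b) example discussed earlier would presumably expand
to plain SQL along these lines, runnable even on an unpatched server, and
showing the BY-list column order c, b:)

select c, b from (select 1 a, 2 b, 3 c) s1
union
select c, b from (select 5 d, 6 c, 7 f, 4 b) s2;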

Corresponding is currently implemented in the parse/analyze phase. If
it were to be implemented in the planning phase, explain output would
likely be as you expect it to be.


 If I do the same thing without the corresponding...:

 test=# explain select a,b,c from one intersect select a,b,c from two;
                                    QUERY PLAN
 ------------------------------------------------------------------------
  HashSetOp Intersect  (cost=0.00..126.70 rows=200 width=12)
    ->  Append  (cost=0.00..97.60 rows=3880 width=12)
          ->  Subquery Scan on "*SELECT* 1"  (cost=0.00..48.80 rows=1940 width=12)
                ->  Seq Scan on one  (cost=0.00..29.40 rows=1940 width=12)
          ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..48.80 rows=1940 width=12)
                ->  Seq Scan on two  (cost=0.00..29.40 rows=1940 width=12)
 (6 rows)

 So it looks like it's now seeing the two tables as the 3rd and 4th
 tables, even though there are only 2 tables in total.

 --
 Thom Brown
 Twitter: @darkixion
 IRC (freenode): dark_ixion
 Registered Linux user: #516935

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



Regards,

Kerem KAT



Re: [HACKERS] FDW system columns

2011-11-14 Thread Shigeru Hanada
(2011/11/14 11:25), Robert Haas wrote:
 My vote is to nuke 'em all.  :-)

+1.

IIRC, the main purpose of supporting tableoid for foreign tables was to be
the basis of foreign table inheritance, which was not included in 9.1, and
we have not supported it yet.  The other system columns are essentially
garbage, but they survived 9.1 development because (maybe) it seemed a
little odd to have only some of the system columns at that time.

So, IMHO removing all system columns from foreign tables seems
reasonable, as long as it doesn't seriously break any external tool (perhaps
there would be few tools which assume that foreign tables have system
columns).

If there seems to be a consensus on removing system columns from foreign
tables, I'd like to work on this issue.  Attached is a halfway patch,
and ISTM there is no problem so far.

Regards,
-- 
Shigeru Hanada
diff --git a/contrib/file_fdw/input/file_fdw.source 
b/contrib/file_fdw/input/file_fdw.source
index 8e3d553..8ddeb17 100644
*** a/contrib/file_fdw/input/file_fdw.source
--- b/contrib/file_fdw/input/file_fdw.source
*** EXECUTE st(100);
*** 111,119 
  EXECUTE st(100);
  DEALLOCATE st;
  
- -- tableoid
- SELECT tableoid::regclass, b FROM agg_csv;
- 
  -- updates aren't supported
  INSERT INTO agg_csv VALUES(1,2.0);
  UPDATE agg_csv SET a = 1;
--- 111,116 
diff --git a/contrib/file_fdw/output/file_fdw.source 
b/contrib/file_fdw/output/file_fdw.source
index 84f0750..adf03c5 100644
*** a/contrib/file_fdw/output/file_fdw.source
--- b/contrib/file_fdw/output/file_fdw.source
*** EXECUTE st(100);
*** 174,188 
  (1 row)
  
  DEALLOCATE st;
- -- tableoid
- SELECT tableoid::regclass, b FROM agg_csv;
-  tableoid |b
- --+-
-  agg_csv  |  99.097
-  agg_csv  | 0.09561
-  agg_csv  |  324.78
- (3 rows)
- 
  -- updates aren't supported
  INSERT INTO agg_csv VALUES(1,2.0);
  ERROR:  cannot change foreign table "agg_csv"
--- 174,179 
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index e11d896..33f91d8 100644
*** a/src/backend/catalog/heap.c
--- b/src/backend/catalog/heap.c
*** CheckAttributeNamesTypes(TupleDesc tupde
*** 399,408 
/*
 * first check for collision with system attribute names
 *
!* Skip this for a view or type relation, since those don't have system
!* attributes.
 */
!   if (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)
{
for (i = 0; i < natts; i++)
{
--- 399,409 
/*
 * first check for collision with system attribute names
 *
!* Skip this for a view or type relation or foreign table, since those
!* don't have system attributes.
 */
!   if (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE &&
!   relkind != RELKIND_FOREIGN_TABLE)
{
for (i = 0; i < natts; i++)
{
*** AddNewAttributeTuples(Oid new_rel_oid,
*** 695,704 
  
/*
 * Next we add the system attributes.  Skip OID if rel has no OIDs. Skip
!* all for a view or type relation.  We don't bother with making datatype
!* dependencies here, since presumably all these types are pinned.
 */
!   if (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE)
{
for (i = 0; i < (int) lengthof(SysAtt); i++)
{
--- 696,707 
  
/*
 * Next we add the system attributes.  Skip OID if rel has no OIDs. Skip
!* all for a view or type relation or foreign table.  We don't bother with
!* making datatype dependencies here, since presumably all these types are
!* pinned.
 */
!   if (relkind != RELKIND_VIEW && relkind != RELKIND_COMPOSITE_TYPE &&
!   relkind != RELKIND_FOREIGN_TABLE)
{
for (i = 0; i < (int) lengthof(SysAtt); i++)
{
diff --git a/src/backend/executor/nodeForeignscan.c 
b/src/backend/executor/nodeForeignscan.c
index 841ae69..f7b8393 100644
*** a/src/backend/executor/nodeForeignscan.c
--- b/src/backend/executor/nodeForeignscan.c
*** ForeignNext(ForeignScanState *node)
*** 51,67 
MemoryContextSwitchTo(oldcontext);
  
/*
!* If any system columns are requested, we have to force the tuple into
!* physical-tuple form to avoid "cannot extract system attribute from
!* virtual tuple" errors later.  We also insert a valid value for
!* tableoid, which is the only actually-useful system column.
 */
-   if (plan->fsSystemCol && !TupIsNull(slot))
-   {
-   HeapTuple   tup = ExecMaterializeSlot(slot);
- 
-   tup->t_tableOid = RelationGetRelid(node->ss.ss_currentRelation);
-   }
  
return slot;
  }
--- 51,62 
MemoryContextSwitchTo(oldcontext);
  
/*
!* XXX If we support system columns and any of them are 

Re: [HACKERS] FDW system columns

2011-11-14 Thread Thom Brown
2011/11/14 Shigeru Hanada shigeru.han...@gmail.com

 (2011/11/14 11:25), Robert Haas wrote:
  My vote is to nuke 'em all.  :-)

 +1.

 IIRC, main purpose of supporting tableoid for foreign tables was to be
 basis of foreign table inheritance, which was not included in 9.1, and
 we have not supported it yet.  Other system columns are essentially
 garbage, but they survived at 9.1 development because (maybe) it seemed
 little odd to have system columns partially at that time.

 So, IMHO removing all system columns from foreign tables seems
 reasonable, unless it doesn't break any external tool seriously (Perhaps
 there would be few tools which assume that foreign tables have system
 columns).

 If there seems to be a consensus on removing system column from foreign
 tables, I'd like to work on this issue.  Attached is a halfway patch,
 and ISTM there is no problem so far.


I can say that at least PgAdmin doesn't use these columns.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] Cause of intermittent rangetypes regression test failures

2011-11-14 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Sun, 2011-11-13 at 15:38 -0500, Tom Lane wrote:
 I think this demonstrates that the current definition of range_before is
 broken.  It is not reasonable for it to throw an error on a perfectly
 valid input ... at least, not unless you'd like to mark it VOLATILE so
 that the planner will not risk calling it.
 
 What shall we have it do instead?

 We could have it return NULL, I suppose. I was worried that that would
 lead to confusion between NULL and the empty range, but it might be
 better than marking it VOLATILE.

It needs to return FALSE, actually.  After further reading I realized
that you have that behavior hard-wired into the range GiST routines,
and it's silly to make the stand-alone versions of the function act
differently.

This doesn't seem terribly unreasonable: we just have to document
that the empty range is neither before nor after any other range.
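
For illustration (assuming the << spelling of range_before from the
in-progress range types work; hypothetical queries, not output from a
patched build):

    SELECT int4range(1,10) << 'empty'::int4range;   -- would yield false
    SELECT 'empty'::int4range << int4range(1,10);   -- would yield false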

regards, tom lane



Re: [HACKERS] (PATCH) Adding CORRESPONDING to Set Operations

2011-11-14 Thread Thom Brown
On 14 November 2011 11:29, Kerem Kat kerem...@gmail.com wrote:

  This explain plan doesn't look right to me:
 
  test=# explain select a,b,c from one intersect corresponding by (a,c)
  select a,b,c from two;
                                    QUERY PLAN
  ------------------------------------------------------------------------
   HashSetOp Intersect  (cost=0.00..117.00 rows=200 width=8)
     ->  Append  (cost=0.00..97.60 rows=3880 width=8)
           ->  Subquery Scan on "*SELECT* 3"  (cost=0.00..48.80 rows=1940 width=8)
                 ->  Seq Scan on one  (cost=0.00..29.40 rows=1940 width=8)
           ->  Subquery Scan on "*SELECT* 4"  (cost=0.00..48.80 rows=1940 width=8)
                 ->  Seq Scan on two  (cost=0.00..29.40 rows=1940 width=8)
   (6 rows)

 In the current implementation,

 select a,b,c from one intersect corresponding by (a,c) select a,b,c from
 two;

 is translated to equivalent

 select a, c from (select a,b,c from one)
 intersect
 select a, c from (select a,b,c from two);

 Methinks that's the reason for this explain output.

 Corresponding is currently implemented in the parse/analyze phase. If
 it were to be implemented in the planning phase, explain output would
 likely be as you expect it to be.


I'm certainly no expert on what the right way to represent the plan is, but
I'm still uncomfortable with its current representation.   And having just
tested the translated equivalent, I still don't get the same explain plan:

test=# explain select a, c from (select a,b,c from one) a
intersect
select a, c from (select a,b,c from two) b;
                             QUERY PLAN
------------------------------------------------------------------------
 HashSetOp Intersect  (cost=0.00..117.00 rows=200 width=8)
   ->  Append  (cost=0.00..97.60 rows=3880 width=8)
         ->  Subquery Scan on "*SELECT* 1"  (cost=0.00..48.80 rows=1940 width=8)
               ->  Seq Scan on one  (cost=0.00..29.40 rows=1940 width=8)
         ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..48.80 rows=1940 width=8)
               ->  Seq Scan on two  (cost=0.00..29.40 rows=1940 width=8)
(6 rows)

Also you probably want to update src/backend/catalog/sql_features.txt so
that F301 is marked as YES for supporting the standard. :)

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


[HACKERS] star join optimization

2011-11-14 Thread Rudyar

Hello,

Does the hybrid hash join algorithm implemented in the current version of
PostgreSQL have any kind of optimization
for star join queries for the data warehouse model?

Regards.




Re: [HACKERS] (PATCH) Adding CORRESPONDING to Set Operations

2011-11-14 Thread Tom Lane
Kerem Kat kerem...@gmail.com writes:
 Corresponding is currently implemented in the parse/analyze phase. If
 it were to be implemented in the planning phase, explain output would
 likely be as you expect it to be.

It's already been pointed out to you that doing this at parse time is
unacceptable, because of the implications for reverse-listing of rules
(views).

regards, tom lane



Re: [HACKERS] Detach/attach database

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 4:55 AM, Thom Brown t...@linux.com wrote:
 So can I humbly request we completely re-architect the whole of
 PostgreSQL to fit this feature?  Thanks.

Heh.

I have to admit I've thought about this from time to time, and it
would be pretty cool.  I was initially thinking that it wouldn't be
that difficult to do this on a per-database level, because if you
slurp up a whole database then by definition you're also including the
system catalogs, which means that you have the pg_class, pg_attribute,
and pg_type entries that are necessary to interpret the table
contents.  If you do anything more fine-grained (per-tablespace,
per-table, or whatever) then things get much more complex, but at the
database level you only need to worry about interactions with other
globals: tablespace and role definitions.  And we could probably write
code to grovel through the system catalogs for a newly mounted
database and do search and replace on the appropriate columns, to map
from the old OIDs to the new ones.  It wouldn't be simple, but I think
it could be done.

But Tom's point about XIDs and LSNs seems like it kind of puts a
bullet through the heart of the whole idea.  Now, before you can move
the database (or table, or whatever) between clusters, you've got to
rewrite all the data files to freeze XIDs and, I don't know, zero out
LSNs, or something.  And if you're going to rewrite all the data, then
you've pretty much lost all the benefit of doing this in the first
place. In fact, it might end up being *slower* than a dump and
restore; even an uncompressed dump will be smaller than the on-disk
footprint of the original database, and many dumps compress quite
well.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] (PATCH) Adding CORRESPONDING to Set Operations

2011-11-14 Thread Kerem Kat
On Mon, Nov 14, 2011 at 15:32, Tom Lane t...@sss.pgh.pa.us wrote:
 Kerem Kat kerem...@gmail.com writes:
 Corresponding is currently implemented in the parse/analyze phase. If
 it were to be implemented in the planning phase, explain output would
 likely be as you expect it to be.

 It's already been pointed out to you that doing this at parse time is
 unacceptable, because of the implications for reverse-listing of rules
 (views).

                        regards, tom lane


I am well aware of that thank you.

Regards,

Kerem KAT



Re: [HACKERS] Working with git repo tagged versions

2011-11-14 Thread Peter Eisentraut
On fre, 2011-11-11 at 15:53 -0500, Bruce Momjian wrote:
 Basically, git checkout assumes a tag, unless you -b for a branch. 

No, git checkout assumes a branch, and if it doesn't find a branch, it
looks for a commit by the given name, and a tag is one way of naming a
commit.  The -b option creates a new branch.




Re: [HACKERS] [REVIEW] Patch for cursor calling with named parameters

2011-11-14 Thread Yeb Havinga

On 2011-10-15 07:41, Tom Lane wrote:

Yeb Havinga yebhavi...@gmail.com writes:

Hello Royce,
Thanks again for testing.

I looked this patch over but concluded that it's not ready to apply,
mainly because there are too many weird behaviors around error
reporting.


Thanks again for the review and comments. Attached is v3 of the patch 
that addresses all of the points made by Tom. In the regression test I 
added a section under "--- START ADDITIONAL TESTS" that might speed up testing.



On the documentation front, the patch includes a hunk that changes the
description of DECLARE to claim that the argument names are optional,
something I see no support for in the code.  It also fails to document
that this patch affects the behavior of cursor FOR loops as well as OPEN,
since both of those places use read_cursor_args().


The DECLARE section hunk was removed. The cursor FOR loop section was changed
to include a reference to named parameters; however, I was unsure about
OPEN, as I was under the impression that it was already altered.
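
As a minimal self-contained sketch (hypothetical table, function, and
parameter names, not taken from the patch's regression tests) of the
named-notation behaviour the patch enables:

CREATE TABLE items (id integer, val text);

CREATE FUNCTION fetch_val(p_id integer) RETURNS text AS $$
DECLARE
    curs CURSOR (key integer) FOR SELECT val FROM items WHERE id = key;
    result text;
BEGIN
    OPEN curs(key := p_id);   -- named notation; OPEN curs(p_id) is positional
    FETCH curs INTO result;
    CLOSE curs;
    RETURN result;
END;
$$ LANGUAGE plpgsql;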


regards,
Yeb Havinga

diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
new file mode 100644
index f33cef5..6a77b75
*** a/doc/src/sgml/plpgsql.sgml
--- b/doc/src/sgml/plpgsql.sgml
*** OPEN curs1 FOR EXECUTE 'SELECT * FROM '
*** 2823,2833 ****
     </para>
    </sect3>
  
! <sect3>
     <title>Opening a Bound Cursor</title>
  
  <synopsis>
! OPEN <replaceable>bound_cursorvar</replaceable> <optional> ( <replaceable>argument_values</replaceable> ) </optional>;
  </synopsis>
  
    <para>
--- 2823,2833 ----
     </para>
    </sect3>
  
! <sect3 id="plpgsql-open-bound-cursor">
     <title>Opening a Bound Cursor</title>
  
  <synopsis>
!  OPEN <replaceable>bound_cursorvar</replaceable> <optional> ( <optional> <replaceable>argname</replaceable> := </optional> <replaceable>argument_value</replaceable> <optional>, ...</optional> ) </optional>;
  </synopsis>
  
    <para>
*** OPEN <replaceable>bound_cursorvar</repla
*** 2847,2856 ****
--- 2847,2868 ----
    </para>
  
    <para>
+   Cursors that have named parameters may be opened using either
+   <firstterm>named</firstterm> or <firstterm>positional</firstterm>
+   notation. In contrast with calling functions, described in <xref
+   linkend="sql-syntax-calling-funcs">, it is not allowed to mix
+   positional and named notation. In positional notation, all arguments
+   are specified in order. In named notation, each argument's name is
+   specified using <literal>:=</literal> to separate it from the
+   argument expression.
+  </para>
+ 
+  <para>
    Examples (these use the cursor declaration examples above):
  <programlisting>
  OPEN curs2;
  OPEN curs3(42);
+ OPEN curs3(key := 42);
  </programlisting>
    </para>
  
*** COMMIT;
*** 3169,3175 ****
  
  <synopsis>
  <optional> &lt;&lt;<replaceable>label</replaceable>&gt;&gt; </optional>
! FOR <replaceable>recordvar</replaceable> IN <replaceable>bound_cursorvar</replaceable> <optional> ( <replaceable>argument_values</replaceable> ) </optional> LOOP
      <replaceable>statements</replaceable>
  END LOOP <optional> <replaceable>label</replaceable> </optional>;
  </synopsis>
--- 3181,3187 ----
  
  <synopsis>
  <optional> &lt;&lt;<replaceable>label</replaceable>&gt;&gt; </optional>
! FOR <replaceable>recordvar</replaceable> IN <replaceable>bound_cursorvar</replaceable> <optional> ( <optional> <replaceable>argname</replaceable> := </optional> <replaceable>argument_value</replaceable> <optional>, ...</optional> ) </optional> LOOP
      <replaceable>statements</replaceable>
  END LOOP <optional> <replaceable>label</replaceable> </optional>;
  </synopsis>
*** END LOOP <optional> <replaceable>label</
*** 3187,3192 ****
--- 3199,3209 ----
     Each row returned by the cursor is successively assigned to this
     record variable and the loop body is executed.
    </para>
+ 
+ <para>
+  See <xref linkend="plpgsql-open-bound-cursor"> on calling cursors with
+  named notation.
+ </para>
   </sect2>
  
  </sect1>
diff --git a/src/pl/plpgsql/src/gram.y b/src/pl/plpgsql/src/gram.y
new file mode 100644
index 8c4c2f7..5271ab5
*** a/src/pl/plpgsql/src/gram.y
--- b/src/pl/plpgsql/src/gram.y
*** static	PLpgSQL_expr	*read_sql_construct(
*** 67,72 
--- 67,73 
  			const char *sqlstart,
  			bool isexpression,
  			bool valid_sql,
+ 			bool trim,
  			int *startloc,
  			int *endtoken);
  static	PLpgSQL_expr	*read_sql_expression(int until,
*** for_control		: for_variable K_IN
*** 1313,1318 
--- 1314,1320 
  	   "SELECT ",
  	   true,
  	   false,
+ 	   true,
  	   expr1loc,
  	   tok);
  
*** stmt_raise		: K_RAISE
*** 1692,1698 
  	expr = read_sql_construct(',', ';', K_USING,
  			  ", or ; or USING",
  			  "SELECT ",
! 			  true, true,
  			  NULL, tok);
  	new->params 

Re: [HACKERS] Detach/attach database

2011-11-14 Thread Thom Brown
On 14 November 2011 13:32, Robert Haas robertmh...@gmail.com wrote:

 But Tom's point about XIDs and LSNs seems like it kind of puts a
 bullet through the heart of the whole idea.


What about having database-level XIDs rather than cluster-level?  Is that
remotely feasible?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] Detach/attach database

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 10:05 AM, Thom Brown t...@linux.com wrote:
 On 14 November 2011 13:32, Robert Haas robertmh...@gmail.com wrote:

 But Tom's point about XIDs and LSNs seems like it kind of puts a
 bullet through the heart of the whole idea.

 What about having database-level XIDs rather than cluster-level?  Is that
 remotely feasible?

Maybe.  You'd need a separate set for shared catalogs, too.  It
seems like a heck of a lot of work, though, especially since (IME,
anyway) most people really only run one database per cluster.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Detach/attach database

2011-11-14 Thread Thom Brown
On 14 November 2011 15:07, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Nov 14, 2011 at 10:05 AM, Thom Brown t...@linux.com wrote:
  On 14 November 2011 13:32, Robert Haas robertmh...@gmail.com wrote:
 
  But Tom's point about XIDs and LSNs seems like it kind of puts a
  bullet through the heart of the whole idea.
 
  What about having database-level XIDs rather than cluster-level?  Is that
  remotely feasible?

 Maybe.  You'd need a separate set for shared catalogs, too.  It
 seems like a heck of a lot of work, though, especially since (IME,
 anyway) most people really only run one database per cluster.


Thought it would be a lot of work.  Well one benefit I could potentially
see is paving the way for per-database replication.  But I'll let this
dream go as it's clearly not something to realistically pursue.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] star join optimization

2011-11-14 Thread Rudyar

On 14/11/11 12:20, Robert Haas wrote:

On Mon, Nov 14, 2011 at 8:25 AM, Rudyar rudyar.cor...@gmail.com wrote:

the hybrid hash join algorithm implemented in the current version of
PostgreSQL has any kind of optimization
for star join queries for Data Warehouse model?

Not really.  As much as possible, we try to make the query optimizer a
general-purpose tool that can handle any query you happen to throw at
it, rather than putting in special-purpose hacks to cater to specific
types of queries.  I'm not aware of anything in particular that we
could do to better optimize the star-join case than what we do for any
other query.

Now, one thing that was discussed a year or two ago was the
possibility of considering join algorithms that can handle more than
two tables at a time.  Currently, we don't do that, so a four-way join
will be implemented either by joining two tables, then the other two
tables, and then the results of those; or more commonly by joining two
tables, joining the results to a third table, and then joining those
results to the final table.  Due to the pipelined nature of our
executor, this works pretty well, but it's possible that there are
better algorithms out there.


Thanks Robert,

I'm new to the PostgreSQL source code and I'm working on my thesis
project about optimizations to the HHJ algorithm.
I think it would be very useful for the optimizer to recognize a star join and
apply these optimizations.
For example, SQL Server and Oracle implement star join query
optimizations for OLAP queries in a DW.


How can I contribute my thesis project to the PostgreSQL source code?

Regards.

--

Rudyar Cortés.
Estudiante de Ingeniería Civil Informática
Universidad Técnica Federico Santa María.




Re: [HACKERS] star join optimization

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 10:36 AM, Rudyar rudyar.cor...@gmail.com wrote:
 On 14/11/11 12:20, Robert Haas wrote:

 On Mon, Nov 14, 2011 at 8:25 AM, Rudyar rudyar.cor...@gmail.com wrote:

 the hybrid hash join algorithm implemented in the current version of
 PostgreSQL has any kind of optimization
 for star join queries for Data Warehouse model?

 Not really.  As much as possible, we try to make the query optimizer a
 general-purpose tool that can handle any query you happen to throw at
 it, rather than putting in special-purpose hacks to cater to specific
 types of queries.  I'm not aware of anything in particular that we
 could do to better optimize the star-join case than what we do for any
 other query.

 Now, one thing that was discussed a year or two ago was the
 possibility of considering join algorithms that can handle more than
 two tables at a time.  Currently, we don't do that, so a four-way join
 will be implemented either by joining two tables, then the other two
 tables, and then the results of those; or more commonly by joining two
 tables, joining the results to a third table, and then joining those
 results to the final table.  Due to the pipelined nature of our
 executor, this works pretty well, but it's possible that there are
 better algorithms out there.

 Thanks Robert,

 I'm a new programmer in postgreSQL source code and I working in my tesis
 project about that optimizations to HHJ algorithm.
 I think so is very useful that optimizer recognize one star join and apply
 this optimizations..
 For example, SQL Server and Oracle databases implements star join query
 optimizations for OLAP queries in DW.

 How can contribute with my tesis project to postreSQL source code?

A good example might be to show us some of the specific cases that you
think can be improved.  Perhaps with a script to set up the test data,
and EXPLAIN ANALYZE output from the queries involved, and a
description of where you see an opportunity for improvement.
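
For illustration (a hypothetical star schema, purely to sketch the kind of
submission being asked for here, not results from this thread), that might
look roughly like:

CREATE TABLE dim_date (date_id integer PRIMARY KEY, year integer, month integer);
CREATE TABLE dim_product (product_id integer PRIMARY KEY, category text);
CREATE TABLE fact_sales (date_id integer, product_id integer, amount numeric);

EXPLAIN ANALYZE
SELECT d.year, p.category, sum(f.amount)
FROM fact_sales f
JOIN dim_date d ON d.date_id = f.date_id
JOIN dim_product p ON p.product_id = f.product_id
WHERE d.year = 2011 AND p.category = 'books'
GROUP BY d.year, p.category;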

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] star join optimization

2011-11-14 Thread Rudyar

On 14/11/11 12:37, Robert Haas wrote:

On Mon, Nov 14, 2011 at 10:36 AM, Rudyar rudyar.cor...@gmail.com wrote:

On 14/11/11 12:20, Robert Haas wrote:

On Mon, Nov 14, 2011 at 8:25 AM, Rudyar rudyar.cor...@gmail.com wrote:

the hybrid hash join algorithm implemented in the current version of
PostgreSQL has any kind of optimization
for star join queries for Data Warehouse model?

Not really.  As much as possible, we try to make the query optimizer a
general-purpose tool that can handle any query you happen to throw at
it, rather than putting in special-purpose hacks to cater to specific
types of queries.  I'm not aware of anything in particular that we
could do to better optimize the star-join case than what we do for any
other query.

Now, one thing that was discussed a year or two ago was the
possibility of considering join algorithms that can handle more than
two tables at a time.  Currently, we don't do that, so a four-way join
will be implemented either by joining two tables, then the other two
tables, and then the results of those; or more commonly by joining two
tables, joining the results to a third table, and then joining those
results to the final table.  Due to the pipelined nature of our
executor, this works pretty well, but it's possible that there are
better algorithms out there.


Thanks Robert,

I'm a new programmer in postgreSQL source code and I working in my tesis
project about that optimizations to HHJ algorithm.
I think so is very useful that optimizer recognize one star join and apply
this optimizations..
For example, SQL Server and Oracle databases implements star join query
optimizations for OLAP queries in DW.

How can contribute with my tesis project to postreSQL source code?

A good example might be to show us some of the specific cases that you
think can be improved.  Perhaps with a script to set up the test data,
and EXPLAIN ANALYZE output from the queries involved, and a
description of where you see an opportunity for improvement.


Ok, I'm working on that project. I will send you my results
and a comparison with SQL Server's HHJ optimization in one or two months.

Regards

--
Rudyar Cortés.
Estudiante de Ingeniería Civil Informática
Universidad Técnica Federico Santa María.




Re: [HACKERS] [Feature Request] \dx show options

2011-11-14 Thread Emanuel Calvo
2011/11/10 Robert Haas robertmh...@gmail.com:
 On Wed, Nov 9, 2011 at 12:31 PM, Emanuel Calvo postgres@gmail.com wrote:
 \dew+ lists the actual options supplied to a foreign data wrapper already.

 Checked, but the options doesn't appear (the column exists, but is empty).

 Well, that just means that you didn't specify any options when you ran
 CREATE FOREIGN DATA WRAPPER.

 rhaas=# create foreign data wrapper dummy options (foo 'bar');
 CREATE FOREIGN DATA WRAPPER
 rhaas=# \dew+
                              List of foreign-data wrappers
  Name  | Owner | Handler | Validator | Access privileges | FDW Options | Description
 -------+-------+---------+-----------+-------------------+-------------+-------------
  dummy | rhaas | -       | -         |                   | (foo 'bar') |
 (1 row)

 I'm not sure we're talking about the same thing, though.


No. I thought 'options' were the parameters when you create a fdw (example:
host, port, file, etc).



-- 
--
              Emanuel Calvo
              Helpame.com



Re: [HACKERS] star join optimization

2011-11-14 Thread Greg Smith

On 11/14/2011 10:44 AM, Rudyar wrote:

Ok, I'm working in that project. I will send you my results
and comparision with SQL server HHJ optimization in one or two months.


Please be careful not to share here details of how features like this 
are built in any commercial databases you evaluate.  Some of those 
implementations use patented design approaches that should be avoided in 
an open source project.  Oracle, Microsoft, and DB2 are all aggressive 
about patenting the innovative parts of their database server code.


In addition to not wanting to accidentally incorporate such a design, 
it's better for the PostgreSQL project to not be aware of what patents 
in this area exist too.  We don't even want a survey of patents in this 
area published here because there are increased penalties for willful 
patent infringement.  See http://en.wikipedia.org/wiki/Treble_damages 
for example.


What this project likes best are innovative approaches from recent 
academic research that haven't been incorporated in any commercial 
products yet.  A good example is how the Serializable Snapshot Isolation 
technique developed by Cahill and others was added to PostgreSQL 9.1:  
http://wiki.postgresql.org/wiki/Serializable  There was less concern 
over accidentally duplicating a patented approach because that technique 
wasn't in any of the commercial databases yet.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] star join optimization

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 8:25 AM, Rudyar rudyar.cor...@gmail.com wrote:
 the hybrid hash join algorithm implemented in the current version of
 PostgreSQL has any kind of optimization
 for star join queries for Data Warehouse model?

Not really.  As much as possible, we try to make the query optimizer a
general-purpose tool that can handle any query you happen to throw at
it, rather than putting in special-purpose hacks to cater to specific
types of queries.  I'm not aware of anything in particular that we
could do to better optimize the star-join case than what we do for any
other query.

Now, one thing that was discussed a year or two ago was the
possibility of considering join algorithms that can handle more than
two tables at a time.  Currently, we don't do that, so a four-way join
will be implemented either by joining two tables, then the other two
tables, and then the results of those; or more commonly by joining two
tables, joining the results to a third table, and then joining those
results to the final table.  Due to the pipelined nature of our
executor, this works pretty well, but it's possible that there are
better algorithms out there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [Feature Request] \dx show options

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 10:45 AM, Emanuel Calvo postgres@gmail.com wrote:
 2011/11/10 Robert Haas robertmh...@gmail.com:
 On Wed, Nov 9, 2011 at 12:31 PM, Emanuel Calvo postgres@gmail.com 
 wrote:
 \dew+ lists the actual options supplied to a foreign data wrapper already.

 Checked, but the options doesn't appear (the column exists, but is empty).

 Well, that just means that you didn't specify any options when you ran
 CREATE FOREIGN DATA WRAPPER.

 rhaas=# create foreign data wrapper dummy options (foo 'bar');
 CREATE FOREIGN DATA WRAPPER
 rhaas=# \dew+
                            List of foreign-data wrappers
  Name  | Owner | Handler | Validator | Access privileges | FDW Options
 | Description
 ---+---+-+---+---+-+-
  dummy | rhaas | -       | -         |                   | (foo 'bar') |
 (1 row)

 I'm not sure we're talking about the same thing, though.


 No. I thought 'options' were the parameters when you create a fdw (example:
 host, port, file, etc).

Each FDW can make its own decisions about which options it wants to
support - the core server support doesn't know anything about how the
data will be used.  You can set options on the FDW level, the server
level, the foreign table level, and maybe a few other places.
Normally I would expect things like host and port to be set on the
server level, rather than the foreign data wrapper level.
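
For illustration (hypothetical wrapper and server names, not from this
thread), setting connection-style options at the server level would look
something like:

CREATE FOREIGN DATA WRAPPER dummy;
CREATE SERVER dummy_server FOREIGN DATA WRAPPER dummy
    OPTIONS (host 'example.com', port '5432');
-- \des+ then lists foreign servers together with their per-server options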

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] star join optimization

2011-11-14 Thread Rudyar

On 14/11/11 13:09, Greg Smith wrote:

On 11/14/2011 10:44 AM, Rudyar wrote:

Ok, I'm working in that project. I will send you my results
and comparision with SQL server HHJ optimization in one or two months.


Please be careful not to share here details of how features like this 
are built in any commercial databases you evaluate.  Some of those 
implementations use patented design approaches that should be avoided 
in an open source project.  Oracle, Microsoft, and DB2 are all 
aggressive about patenting the innovative parts of their database 
server code.


In addition to not wanting to accidentally incorporate such a design, 
it's better for the PostgreSQL project to not be aware of what patents 
in this area exist too.  We don't even want a survey of patents in 
this area published here because there are increased penalties for 
willful patent infringement.  See 
http://en.wikipedia.org/wiki/Treble_damages for example.


What this project likes best are innovative approaches from recent 
academic research that haven't been incorporated in any commercial 
products yet.  A good example is how the Serializable Snapshot 
Isolation technique developed by Cahill and others was added to 
PostgreSQL 9.1:  http://wiki.postgresql.org/wiki/Serializable  There 
was less concern over accidentally duplicating a patented approach 
because that technique wasn't in any of the commercial databases yet.



Greg,

Ok. I will consider your recommendations.

Best Regards.

--
Rudyar Cortés.
Estudiante de Ingeniería Civil Informática
Universidad Técnica Federico Santa María.




Re: [HACKERS] So where are we on the open commitfest?

2011-11-14 Thread Nikhil Sontakke
  * Non-inheritable check constraints
 


So, this patch got shifted to the next commitfest...

Regards,
Nikhils


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2011-11-14 Thread Nikhil Sontakke
 If all you need to do is lock a schema, you can just call
 LockDatabaseObject(NamespaceRelationId, namespace_oid, 0,
 AccessShareLock); there's no need to fake up an objectaddress just to
 take a lock.  But I think that's not really all you need to do,
 because somebody could drop the namespace between the time that you
 decide what OID to lock and the time you acquire the lock.  So I think
 you need something like what we did in RangeVarGetRelid().  See
 attached patch.


Thanks Robert. But currently there are very few callers of
RangeVarGetAndCheckCreationNamespace() function. For the sake of
completeness we will have to introduce a call to this function while
creating all other objects too.

Regards,
Nikhils


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



Re: [HACKERS] [Feature Request] \dx show options

2011-11-14 Thread Emanuel Calvo
2011/11/14 Robert Haas robertmh...@gmail.com:
 On Mon, Nov 14, 2011 at 10:45 AM, Emanuel Calvo postgres@gmail.com 
 wrote:
 2011/11/10 Robert Haas robertmh...@gmail.com:
 On Wed, Nov 9, 2011 at 12:31 PM, Emanuel Calvo postgres@gmail.com 
 wrote:
 \dew+ lists the actual options supplied to a foreign data wrapper already.

 Checked, but the options doesn't appear (the column exists, but is empty).

 Well, that just means that you didn't specify any options when you ran
 CREATE FOREIGN DATA WRAPPER.

 rhaas=# create foreign data wrapper dummy options (foo 'bar');
 CREATE FOREIGN DATA WRAPPER
 rhaas=# \dew+
                            List of foreign-data wrappers
  Name  | Owner | Handler | Validator | Access privileges | FDW Options
 | Description
 ---+---+-+---+---+-+-
  dummy | rhaas | -       | -         |                   | (foo 'bar') |
 (1 row)

 I'm not sure we're talking about the same thing, though.


 No. I thought 'options' were the parameters when you create a fdw (example:
 host, port, file, etc).

 Each FDW can make its own decisions about which options it wants to
 support - the core server support doesn't know anything about how the
 data will be used.  You can set options on the FDW level, the server
 level, the foreign table level, and maybe a few other places.
 Normally I would expect things like host and port to be set on the
 server level, rather than the foreign data wrapper level.


Gotcha. Thanks Robert! I must report to the fdw creator.


-- 
--
              Emanuel Calvo
              Helpame.com



Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 11:48 AM, Nikhil Sontakke
nikhil.sonta...@enterprisedb.com wrote:
 If all you need to do is lock a schema, you can just call
 LockDatabaseObject(NamespaceRelationId, namespace_oid, 0,
 AccessShareLock); there's no need to fake up an objectaddress just to
 take a lock.  But I think that's not really all you need to do,
 because somebody could drop the namespace between the time that you
 decide what OID to lock and the time you acquire the lock.  So I think
 you need something like what we did in RangeVarGetRelid().  See
 attached patch.

 Thanks Robert. But currently there are very few callers of
 RangeVarGetAndCheckCreationNamespace() function. For the sake of
 completeness we will have to introduce a call to this function while
 creating all other objects too.

Well, RangeVarGetAndCheckCreationNamespace is only (and can only) be
used for relations.  To get similar protection for other object types,
we'd need to add a similar logic elsewhere.  I haven't looked at where
it would need to go.

In fact, I think that the technique demonstrated here (which was
pioneered by Noah Misch) is actually quite general, and there are
probably a lot of places where we need to be doing it but currently
are not.  So it's probably going to take a while to get this
completely nailed down, but we can keep chipping away at it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Regression tests fail once XID counter exceeds 2 billion

2011-11-14 Thread Robert Haas
On Sun, Nov 13, 2011 at 6:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 While investigating bug #6291 I was somewhat surprised to discover
 $SUBJECT.  The cause turns out to be this kluge in alter_table.sql:

        select virtualtransaction
        from pg_locks
        where transactionid = txid_current()::integer

 which of course starts to fail with "integer out of range" as soon as
 txid_current() gets past 2^31.  Right now, since there is no cast
 between xid and any integer type, and no comparison operator except the
 dubious xideqint4 one, the only way we could fix this is something
 like

        where transactionid::text = (txid_current() % (2^32))::text

 which is surely pretty ugly.  Is it worth doing something less ugly?
 I'm not sure if there are any other use-cases for this type of
 comparison, but if there are, seems like it would be sensible to invent
 a function along the lines of

        txid_from_xid(xid) returns bigint

 that plasters on the appropriate epoch value for an
 assumed-to-be-current-or-recent xid, and returns something that squares
 with the txid_snapshot functions.  Then the test could be coded without
 kluges as

        where txid_from_xid(transactionid) = txid_current()

 Thoughts?

Well, the mod-2^32 arithmetic doesn't bother me, but if you're feeling
motivated to invent txid_from_xid() I think that would be fine, too.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2011-11-14 Thread Nikhil Sontakke
 So it's probably going to take a while to get this
 completely nailed down, but we can keep chipping away at it.


Agreed. So are you planning to commit this change? Or we want some more
objects to be fixed? Last I looked at this, we will need locking to be done
while creating tables, views, types, sequences, functions, indexes,
extensions, constraints, operators stuff, ts stuff, rules, domains, etc.
that can go into schemas.

So it might even make sense to write a schema-specific function based on your
patch template to cater in general to schema locking during object creation.

Regards,
Nikhils


[HACKERS] CommitFest 2011-11 starting soon

2011-11-14 Thread Greg Smith
Patch submission for the 2011-11 CommitFest will close tomorrow, 
November 15, at 11:59PM PST.  New patches ready for review should be 
submitted to this mailing list and added to the CommitFest application 
at https://commitfest.postgresql.org/  See 
http://wiki.postgresql.org/wiki/Development_information for guidelines 
on patch development, submission, and review, and for how the 
CommitFest process works.


Even before the last-minute rush there are already 25 submissions that 
are looking for reviewers.  If you can apply a patch and you can use the 
new feature, you're qualified to start reviewing it.  And we've produced 
some guides to the review process--see 
http://www.pgcon.org/2011/schedule/events/368.en.html as one example.


If you're interested in helping with review, but are looking for 
suggestions on what patch to select, you should join the Round Robin 
Reviewers list to get assigned one.  More information about that at 
http://wiki.postgresql.org/wiki/RRReviewers


This is the 3rd of the 4 CommitFests for PostgreSQL 9.2.  The hope is 
that any major features aimed at 9.2 will have been submitted for 
initial review by this one, so there's still time to get feedback and 
re-submit before the final CommitFest for 9.2, 2012-01.


There are also 4 uncontroversial submissions in this CommitFest that are 
marked Ready for Committer:


-Non-inheritable check constraints
-plperl verify utf8 strings
-Perl xsubpp from cpan
-Add Support for building with Visual Studio 2010

Committers who would like to get an early start have some options already.

--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] psql history vs. dearmor (pgcrypto)

2011-11-14 Thread Robert Haas
2011/11/13 Tomas Vondra t...@fuzzy.cz:
 but recalling it from the query buffer results in

  ERROR:  Corrupt ascii-armor

 I've noticed this on 9.1 but 9.2devel behaves exactly the same. I'm
 using 64-bit Linux with UTF8, nothing special.

It looks like the problem is that the original has a blank line after
the line that says Version: GnuPG v2.0.17 (GNU/Linux), but when you
recall it from the query buffer, that extra blank line gets elided.

The attached patch fixes it for me.  I'm a little worried it might
cause a problem in some case I'm not thinking about, but I can't think
of any such case right this minute.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


save-empty-lines.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cause of intermittent rangetypes regression test failures

2011-11-14 Thread Jeff Davis
On Mon, 2011-11-14 at 08:11 -0500, Tom Lane wrote:
 It needs to return FALSE, actually.  After further reading I realized
 that you have that behavior hard-wired into the range GiST routines,
 and it's silly to make the stand-alone versions of the function act
 differently.

Good point. That makes sense to me.

Regards,
Jeff Davis


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Syntax for partitioning

2011-11-14 Thread Martijn van Oosterhout
On Mon, Nov 14, 2011 at 10:33:36AM +0100, Dimitri Fontaine wrote:
 Martijn van Oosterhout klep...@svana.org writes:
  While I agree that explicit partitioning is somewhat of a hack, it's a
  really useful hack.  But for me the most important use of partitioning
  is dropping a billion rows efficiently and getting the disk space
  back.  And the biggest problem is always that dropping blocks of a
  table requires fixing all the indexes.
 
 The problem with partitions that are in fact tables is that the indexes are
 separate and you can't enforce uniqueness globally across the partition set.
 
 Even with that physical map idea (segment based partitioning, but
 allowing a finer control than segments), you could still maintain any
 number of partial indexes, but still use a single primary key e.g.

Ah, well, if you can come up with a way to get the advantages of
partitioning while still being able to enforce primary keys over
partitions, that would be A Really Cool Idea.

That said, I still don't see how you can enforce a unique index over
multiple segments on something other than the partition key while
still allowing quick dropping of segments.  If you can fix that you can
make it work for the current inheritance-style partitioning.

 If you happen to drop a part of the data that fits in one or more
 segments (and with a decent fillfactor you need less than 1GB to get
 there), then you can unlink() whole files at a time.  That would be the
 goal here.

I feel uncomfortable with the "happen to". You can add the magic too,
but for scripting purposes I'd feel better if it could be done via DDL
also. That way typos don't end up being 5-day queries all of a sudden.

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer


signature.asc
Description: Digital signature


Re: [HACKERS] Cause of intermittent rangetypes regression test failures

2011-11-14 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Mon, 2011-11-14 at 08:11 -0500, Tom Lane wrote:
 It needs to return FALSE, actually.  After further reading I realized
 that you have that behavior hard-wired into the range GiST routines,
 and it's silly to make the stand-alone versions of the function act
 differently.

 Good point. That makes sense to me.

While thinking about this ... would it be sensible for range_lower and
range_upper to return NULL instead of throwing an exception for empty or
infinite ranges?  As with these comparison functions, throwing an error
seems like a fairly unpleasant definition to work with in practice.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 12:48 PM, Nikhil Sontakke nikkh...@gmail.com wrote:
 So it's probably going to take a while to get this
 completely nailed down, but we can keep chipping away at it.

 Agreed. So are you planning to commit this change? Or we want some more
 objects to be fixed? Last I looked at this, we will need locking to be done
 while creating tables, views, types, sequences, functions, indexes,
 extensions, constraints, operators stuff, ts stuff, rules, domains, etc.
 that can go into schemas.

reads the code

Well, it looks to me like there are three different places that we
need to nail down: RangeVarGetAndCheckCreationNamespace() is used for
relations (except that a few places call RangeVarGetCreationNamespace
directly, which means my previous patch probably needs some tweaking
before commit), QualifiedNameGetCreationNamespace() is used for pretty
much all other schema-qualified objects, and LookupCreationNamespace()
is used for ALTER BLAH SET SCHEMA (which I think has a problem when
you rename an object into a schema that is concurrently being
dropped).

I'm fairly unhappy with the idea of modifying a function that is
described as doing a get or lookup to have the side effect of
locking something.  So probably some renaming or refactoring is in
order here.  It seems like we're duplicating almost identical logic in
an awful lot of places in namespace.c.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] So where are we on the open commitfest?

2011-11-14 Thread Greg Stark
On Mon, Nov 14, 2011 at 4:23 PM, Nikhil Sontakke nikkh...@gmail.com wrote:
  * Non-inheritable check constraints
 

 So, this patch got shifted to the next commitfest...

I'm sorry, I had intended to get to it for the last two weekends. I'm
not going to wait until the commitfest to look at it.


What I want to test is that it behaves sanely when you add children to
and remove them from the inheritance graph. Other than that I expect it should
be pretty non-controversial and useful.

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cause of intermittent rangetypes regression test failures

2011-11-14 Thread Erik Rijkers
On Mon, November 14, 2011 19:43, Tom Lane wrote:
 Jeff Davis pg...@j-davis.com writes:
 On Mon, 2011-11-14 at 08:11 -0500, Tom Lane wrote:

 While thinking about this ... would it be sensible for range_lower and
 range_upper to return NULL instead of throwing an exception for empty or
 infinite ranges?  As with these comparison functions, throwing an error
 seems like a fairly unpleasant definition to work with in practice.


+1

much better, IMHO.


Erik Rijkers



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Kevin Grittner
Florian Pflug f...@phlo.org wrote:
 
 If we're concerned about helping the compiler produce better code,
 I think we should try to make our code safe under strict aliasing
 rules. AFAIK, that generally helps much more than
 const-correctness. (Dunno how feasible that is, though)
 
To get a preliminary feel for how much this might help, I set my
workstation with an i7-2600 and 16GB RAM to run Robert Haas's
pgbench concurrency tests against PostgreSQL built with (default)
-O2 and no strict aliasing versus -O3 and strict aliasing.  I
ignored the ten warnings about punning under strict aliasing.  Both
builds were with asserts disabled.  No other changes from Friday's
HEAD.  All runs were at the REPEATABLE READ isolation level.  I
scheduled it for a window of time where the box wasn't running any
scheduled maintenance.
 
The results were interesting.  While the small overlap between
samples from the two builds at most levels means that this was
somewhat unlikely to be just sampling noise, there could have been
alignment issues that account for some of the differences.  In
short, the strict aliasing build always beat the other with 4
clients or fewer (on this 4 core machine), but always lost with more
than 4 clients.
 
1 client:   +0.8%
2 clients:  +2.0%
4 clients:  +3.2%
8 clients:  -0.9%
16 clients: -0.5%
32 clients: -0.9%
 
I wouldn't want to make too much out of this without repeating the
tests and trying different hardware, but I'm wondering whether the
abrupt difference at the number of cores makes sense to anybody. 
Also, is there something I should do to deal with the warnings
before this would be considered a meaningful test?
 
Raw numbers:
 
no-strict-aliasing.1  tps = 7140.253910
no-strict-aliasing.1  tps = 7291.465297
no-strict-aliasing.1  tps = 7219.054359
no-strict-aliasing.2  tps = 16592.613779
no-strict-aliasing.2  tps = 15418.602945
no-strict-aliasing.2  tps = 16826.200551
no-strict-aliasing.4  tps = 48145.69
no-strict-aliasing.4  tps = 47141.611960
no-strict-aliasing.4  tps = 47263.175254
no-strict-aliasing.8  tps = 93466.397174
no-strict-aliasing.8  tps = 93757.111493
no-strict-aliasing.8  tps = 93422.349453
no-strict-aliasing.16  tps = 88758.623319
no-strict-aliasing.16  tps = 88976.546555
no-strict-aliasing.16  tps = 88521.025343
no-strict-aliasing.32  tps = 87799.019143
no-strict-aliasing.32  tps = 88006.881881
no-strict-aliasing.32  tps = 88295.826711

strict-aliasing.1  tps = 7067.461710
strict-aliasing.1  tps = 7415.244823
strict-aliasing.1  tps = 7277.643321
strict-aliasing.2  tps = 14576.820162
strict-aliasing.2  tps = 16928.746994
strict-aliasing.2  tps = 19958.285834
strict-aliasing.4  tps = 48780.830247
strict-aliasing.4  tps = 49067.751657
strict-aliasing.4  tps = 48303.413578
strict-aliasing.8  tps = 93155.601896
strict-aliasing.8  tps = 92279.973490
strict-aliasing.8  tps = 92629.332125
strict-aliasing.16  tps = 88328.799197
strict-aliasing.16  tps = 88283.503270
strict-aliasing.16  tps = 88463.673815
strict-aliasing.32  tps = 87148.701204
strict-aliasing.32  tps = 87398.233624
strict-aliasing.32  tps = 87201.021722
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2011-11-14 Thread Alvaro Herrera

Excerpts from Robert Haas's message of lun nov 14 15:56:43 -0300 2011:

 Well, it looks to me like there are three different places that we
 need to nail down: RangeVarGetAndCheckCreationNamespace() is used for
 relations (except that a few places call RangeVarGetCreationNamespace
 directly, which means my previous patch probably needs some tweaking
 before commit), QualifiedNameGetCreationNamespace() is used for pretty
 much all other schema-qualified objects, and LookupCreationNamespace()
 is used for ALTER BLAH SET SCHEMA (which I think has a problem when
 you rename an object into a schema that is concurrently being
 dropped).
 
 I'm fairly unhappy with the idea of modifying a function that is
 described as doing a get or lookup to have the side effect of
 locking something.  So probably some renaming or refactoring is in
 order here.  It seems like we're duplicating almost identical logic in
 an awful lot of places in namespace.c.

So RangeVarGetCheckAndLockCreationNamespace(), uh?  Pity you can't
stick a comma in there.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: CHECK FUNCTION statement

2011-11-14 Thread Thom Brown
On 6 October 2011 12:52, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 I am sending a version with regress tests and basic documentation

Hi Pavel,

I think this sentence needs rewriting:

checkfunction is the name of a previously registered function that
will be called when a new function in the language is created, to
check the function by statemnt CHECK FUNCTION or CHECK TRIGGER.

to something like:

checkfunction is the name of an existing function that will be called
whenever a CHECK FUNCTION or CHECK TRIGGER is requested on a function
written in the language.

And shouldn't this apply to ALTER LANGUAGE too?

And there seem to be copy/paste symptoms in
doc/src/sgml/ref/check_function.sgml where it shows the definition of
CREATE FUNCTION and CREATE TRIGGER instead of CHECK FUNCTION and CHECK
TRIGGER.

In src/include/nodes/parsenodes.h there's the error message "there are
no plan for query:".  This should probably read "there is no plan for
query:".  This appears more than once.

And "cannot to identify real type for record type variable" doesn't
sound right.  Firstly, "to" shouldn't be in there, and referring to a
"real type" is ambiguous as there is a data type called real.  This
appears at least twice.

In src/pl/plpgsql/src/pl_exec.c:

"cannot to determine a result of dynamic SQL" should be "cannot
determine result of dynamic SQL".

Also, I recommend rebasing this patch as it doesn't apply cleanly.  In
particular, the following fail:

src/pl/plpgsql/src/pl_funcs.c
src/test/regress/expected/plpgsql.out
src/test/regress/sql/plpgsql.sql

I haven't tried actually testing the patch itself, but I will probably
give it a go if a rebased version appears. :)

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 2:26 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of lun nov 14 15:56:43 -0300 2011:

 Well, it looks to me like there are three different places that we
 need to nail down: RangeVarGetAndCheckCreationNamespace() is used for
 relations (except that a few places call RangeVarGetCreationNamespace
 directly, which means my previous patch probably needs some tweaking
 before commit), QualifiedNameGetCreationNamespace() is used for pretty
 much all other schema-qualified objects, and LookupCreationNamespace()
 is used for ALTER BLAH SET SCHEMA (which I think has a problem when
 you rename an object into a schema that is concurrently being
 dropped).

 I'm fairly unhappy with the idea of modifying a function that is
 described as doing a get or lookup to have the side effect of
 locking something.  So probably some renaming or refactoring is in
 order here.  It seems like we're duplicating almost identical logic in
 an awful lot of places in namespace.c.

 So RangeVarGetCheckAndLockCreationNamespace(), uh?  Pity you can't
 stick a comma in there.

Yeah, really.  :-)

Actually, I think that one could probably stay as-is.  Check implies
that there's something else going on besides just a lookup, and we
can't go nuts with it.  I'm more concerned about
QualifiedNameGetCreationNamespace() and LookupCreationNamespace().

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 The results were interesting.  While the small overlap between
 samples from the two builds at most levels means that this was
 somewhat unlikely to be just sampling noise, there could have been
 alignment issues that account for some of the differences.  In
 short, the strict aliasing build always beat the other with 4
 clients or fewer (on this 4 core machine), but always lost with more
 than 4 clients.

That is *weird*.

 Also, is there something I should do to deal with the warnings
 before this would be considered a meaningful test?

Dunno ... where were the warnings exactly?  Also, did you run the
regression tests (particularly the parallel version) against the
build?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 The results were interesting.  While the small overlap between
 samples from the two builds at most levels means that this was
 somewhat unlikely to be just sampling noise, there could have
 been alignment issues that account for some of the differences. 
 In short, the strict aliasing build always beat the other with 4
 clients or fewer (on this 4 core machine), but always lost with
 more than 4 clients.
 
 That is *weird*.
 
Yeah, my only theories are that it was an unlucky set of samples
(which seems a little thin looking at the numbers) or that some of
the optimizations in -O3 are about improving pipelining at what
would otherwise be an increase in cycles, but that context switching
breaks up the pipelining enough that it's a net loss at high
concurrency.  That doesn't seem quite as thin as the other
explanation, but it's not very satisfying without some sort of
confirmation.
 
 Also, is there something I should do to deal with the warnings
 before this would be considered a meaningful test?
 
 Dunno ... where were the warnings exactly?
 
All 10 were like this:
 
  warning: dereferencing type-punned pointer will break
strict-aliasing rules
 
The warning is about reading a union using a different type than was
last stored there.  It seems like there might sometimes be legitimate
reasons to do that, and that if it's broken with strict aliasing
it might be broken without it.  But strict aliasing is new territory
for me.
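 
For reference, the construct gcc is complaining about boils down to
something like this stand-alone example (made up for illustration; it is
not code from our tree):

#include <stdio.h>

int
main(void)
{
	float		f = 1.0f;

	/*
	 * Reading the float through an incompatible pointer type is what
	 * draws "dereferencing type-punned pointer will break
	 * strict-aliasing rules" once optimization and strict aliasing are
	 * enabled.
	 */
	unsigned int bits = *(unsigned int *) &f;

	printf("%08x\n", bits);
	return 0;
}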
 
 Also, did you run the regression tests (particularly the parallel
 version) against the build?
 
Yes.  The normal parallel `make check-world`, the `make
installcheck-world` against an install with
default_transaction_isolation = 'serializable' and
max_prepared_transactions = 10, and `make -C src/test/isolation
installcheck`.  All ran without problem.
 
I'm inclined to try -O3 and -fstrict-aliasing separately, with more
iterations; but I want to fix anything that's wrong with the
aliasing first.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:
 
 Dunno ... where were the warnings exactly?
 
Ah, you asked where, not what.  I don't think I saved that, and
I had to reboot for a new kernel, so I don't have the buffer sitting
around.  I'll do a new build and let you know shortly.
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Group Commit

2011-11-14 Thread Josh Berkus

 Purpose is to provide consistent WAL writes, even when WALInsertLock
 contended. Currently no off option, thinking is that the overhead of
 doing this is relatively low and so it can be always on - exactly as
 it is for sync rep.

Hmmm, have you had a chance to do any performance tests?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Kevin Grittner
Kevin Grittner kevin.gritt...@wicourts.gov wrote: 
 Tom Lane t...@sss.pgh.pa.us wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
 
 Also, is there something I should do to deal with the warnings
 before this would be considered a meaningful test?
 
 Dunno ... where were the warnings exactly?
  
 All 10 were like this:
  
   warning: dereferencing type-punned pointer will break
 strict-aliasing rules
 
From HEAD checkout of a few minutes ago I now see only 9:
 
parse_type.c: In function *typenameTypeMod*:
parse_type.c:313:4
parse_type.c:318:4
parse_type.c:319:7

guc.c: In function *flatten_set_variable_args*:
guc.c:6036:3
guc.c:6087:7

plpython.c: In function *PLy_plan_status*:
plpython.c:3213:3

btree_utils_var.c: In function *gbt_var_node_truncate*:
btree_utils_var.c:213:2

trgm_gist.c: In function *gtrgm_consistent*:
trgm_gist.c:262:5
trgm_gist.c:262:5
 
-Kevin

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: CHECK FUNCTION statement

2011-11-14 Thread Pavel Stehule
Hello

2011/11/14 Thom Brown t...@linux.com:
 On 6 October 2011 12:52, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 I am sending a version with regress tests and basic documentation

 Hi Pavel,

 I think this sentence needs rewriting:

 checkfunction is the name of a previously registered function that
 will be called when a new function in the language is created, to
 check the function by statemnt CHECK FUNCTION or CHECK TRIGGER.

 to something like:

 checkfunction is the name of an existing function that will be called
 whenever a CHECK FUNCTION or CHECK TRIGGER is requested on a function
 written in the language.

 And shouldn't this apply to ALTER LANGUAGE too?

 And there seem to be copy/paste symptoms in
 doc/src/sgml/ref/check_function.sgml where it shows the definition of
 CREATE FUNCTION and CREATE TRIGGER instead of CHECK FUNCTION and CHECK
 TRIGGER.

 In src/include/nodes/parsenodes.h there's the error message there are
 no plan for query:.  This should probably read there is no plan for
 query:.  This appears more than once.

 And cannot to identify real type for record type variable doesn't
 sound right.  Firstly to shouldn't be in there, and referring to a
 real type is ambiguous as there is a data type called real.  This
 appears at least twice.

I am not a native speaker, so please fix the documentation as you like.


 In src/pl/plpgsql/src/pl_exec.c:

 cannot to determine a result of dynamic SQL should be cannot
 determine result of dynamic SQL.

 Also, I recommend rebasing this patch as it doesn't apply cleanly.  In
 particular, the following fail:

 src/pl/plpgsql/src/pl_funcs.c
 src/test/regress/expected/plpgsql.out
 src/test/regress/sql/plpgsql.sql

 I haven't tried actually testing the patch itsel, but I will probably
 give it a go if a rebased version appears. :)

There will be more work; I found one area that was not checked - expr targets.

this new code is on github https://github.com/okbob/plpgsql_lint

this week I plan to redesign this contrib module into the CHECK FUNCTION
implementation for 9.2.

Regards

Pavel


 --
 Thom Brown
 Twitter: @darkixion
 IRC (freenode): dark_ixion
 Registered Linux user: #516935

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why do we need two snapshots per query?

2011-11-14 Thread Robert Haas
On Sun, Nov 13, 2011 at 9:40 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Nov 13, 2011 at 8:57 PM, Robert Haas robertmh...@gmail.com wrote:
 In the -M extended case, we take a snapshot from exec_parse_message(),
 and the same two in the exec_bind_message() call that are taken in the
 -M prepared case.  So reducing the prepared case from two snapshots to
 one will reduce the extended case from three snapshots to two, thus
 saving one snapshot per query regardless of how it's executed.

 And here are the revised patches.  Apply refactor-portal-start
 (unchanged) first and then just-one-snapshot-v2.

Some pgbench -S numbers (SELECT-only test) from Nate Boley's 32-core
box.   I benchmarked commit f1585362856d4da17113ba2e4ba46cf83cba0cf2,
patched and unpatched.  I set shared_buffers = 8GB,
maintenance_work_mem = 1GB, synchronous_commit = off,
checkpoint_segments = 300, checkpoint_timeout = 15min,
checkpoint_completion_target = 0.9, wal_writer_delay = 20ms.  All
numbers are median of five-minute runs.  Lines beginning with m are
unpatched master; lines beginning with s are patched; the number
immediately following is the client count.

== with -M simple ==
m01 tps = 4347.393421 (including connections establishing)
s01 tps = 4336.883587 (including connections establishing)
m08 tps = 33510.055702 (including connections establishing)
s08 tps = 33826.161862 (including connections establishing)
m32 tps = 203457.891154 (including connections establishing)
s32 tps = 218206.065239 (including connections establishing)
m80 tps = 200494.623552 (including connections establishing)
s80 tps = 219344.961016 (including connections establishing)

== with -M extended ==
m01 tps = 3567.409671 (including connections establishing)
s01 tps = 3678.526702 (including connections establishing)
m08 tps = 27754.682736 (including connections establishing)
s08 tps = 28474.566418 (including connections establishing)
m32 tps = 177439.118199 (including connections establishing)
s32 tps = 187307.500501 (including connections establishing)
m80 tps = 173765.388249 (including connections establishing)
s80 tps = 184047.873286 (including connections establishing)

== with -M prepared ==
m01 tps = 7310.682085 (including connections establishing)
s01 tps = 7229.791967 (including connections establishing)
m08 tps = 54397.250840 (including connections establishing)
s08 tps = 55045.651468 (including connections establishing)
m32 tps = 303142.385619 (including connections establishing)
s32 tps = 313493.928436 (including connections establishing)
m80 tps = 304652.195974 (including connections establishing)
s80 tps = 311330.343510 (including connections establishing)

Of course, the fact that this gives good benchmark numbers doesn't
make it correct.  But the fact that it gives good benchmark numbers
seems - to me anyway - like a good reason to think carefully before
rejecting this approach out of hand.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch: CHECK FUNCTION statement

2011-11-14 Thread Thom Brown
On 14 November 2011 20:54, Pavel Stehule pavel.steh...@gmail.com wrote:
 Hello

 2011/11/14 Thom Brown t...@linux.com:
 On 6 October 2011 12:52, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 I am sending a version with regress tests and basic documentation

 Hi Pavel,

 I think this sentence needs rewriting:

 checkfunction is the name of a previously registered function that
 will be called when a new function in the language is created, to
 check the function by statemnt CHECK FUNCTION or CHECK TRIGGER.

 to something like:

 checkfunction is the name of an existing function that will be called
 whenever a CHECK FUNCTION or CHECK TRIGGER is requested on a function
 written in the language.

 And shouldn't this apply to ALTER LANGUAGE too?

 And there seem to be copy/paste symptoms in
 doc/src/sgml/ref/check_function.sgml where it shows the definition of
 CREATE FUNCTION and CREATE TRIGGER instead of CHECK FUNCTION and CHECK
 TRIGGER.

 In src/include/nodes/parsenodes.h there's the error message there are
 no plan for query:.  This should probably read there is no plan for
 query:.  This appears more than once.

 And cannot to identify real type for record type variable doesn't
 sound right.  Firstly to shouldn't be in there, and referring to a
 real type is ambiguous as there is a data type called real.  This
 appears at least twice.

 I am not native speaker, so please, fix documentation as you like.

Well I wasn't entirely confident my interpretations were correct.  I'd
prefer to have a rebased patch I can fully apply first, and then I can
provide a corrective patch as I'd like to test it too.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Tom Lane t...@sss.pgh.pa.us wrote:
 Dunno ... where were the warnings exactly?
 
 From HEAD checkout of a few minutes ago I now see only 9:

Hmm ... well, none of those look likely to be in performance-sensitive
areas.  But I wonder just how good the trouble-detection code is these
days.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Alvaro Herrera

Excerpts from Kevin Grittner's message of lun nov 14 17:30:50 -0300 2011:
 Tom Lane t...@sss.pgh.pa.us wrote:
  Kevin Grittner kevin.gritt...@wicourts.gov writes:

  Also, is there something I should do to deal with the warnings
  before this would be considered a meaningful test?
  
  Dunno ... where were the warnings exactly?
  
 All 10 were like this:
  
   warning: dereferencing type-punned pointer will break
 strict-aliasing rules

Uhm, shouldn't we expect there to be one warning for each use of a Node
using some specific node pointer type as well as something generic such
as inside a ListCell etc?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Group Commit

2011-11-14 Thread Greg Smith

On 11/14/2011 03:43 PM, Josh Berkus wrote:
   

Purpose is to provide consistent WAL writes, even when WALInsertLock
contended. Currently no off option, thinking is that the overhead of
doing this is relatively low and so it can be always on - exactly as
it is for sync rep.
 

Hmmm, have you had a chance to do any performance tests?
   


I was planning to run some later this week, but someone else is welcome 
to take a shot at it.  The inspiration for this change was the 
performance scaling tests I did for sync rep last month.  Don't recall 
if I shared those with this list yet; I've attached the fun graph.  Over 
a slow international link with 100ms ping times, I was only getting the 
expected 10 TPS doing sync rep with a single client.  But as more 
clients were added, so that a chunk of them were acknowledged in each 
commit reply, the total throughput among all of them scaled near 
linearly.  With 300 clients, that managed to hit a crazy 2000 TPS.


The best scenario to show this patch working would be a laptop drive 
spinning at a slow speed (5400 or 4200 RPM) so that individual local 
commits are slow.  That won't be 100ms slow, but close to 10ms is easy 
to see.  When adding clients to a system with a slow local commit, what 
I've observed is that the scaling levels off between 750 and 1000 TPS, 
no matter how many clients are involved.  The hope is that this 
alternate implementation will give the higher scaling in the face of 
slow commits that is seen on sync rep.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

attachment: clients-3.png
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why do we need two snapshots per query?

2011-11-14 Thread Greg Smith

On 11/14/2011 04:04 PM, Robert Haas wrote:

Some pgbench -S numbers (SELECT-only test) from Nate Boley's 32-core
box


It seems like Nate Boley's system should be credited in the 9.2 
release notes.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why do we need two snapshots per query?

2011-11-14 Thread Robert Haas
On Nov 14, 2011, at 4:31 PM, Greg Smith g...@2ndquadrant.com wrote:
 On 11/14/2011 04:04 PM, Robert Haas wrote:
 Some pgbench -S numbers (SELECT-only test) from Nate Boley's 32-core
 box
 
 It seems like Nate Boley's system should be be credited in the 9.2 release 
 notes.

+1.  Having access to that box has been extremely helpful; it would be nice to 
have equally convenient access to a few more.

...Robert
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Thom Brown
On 14 November 2011 09:08, Greg Smith g...@2ndquadrant.com wrote:
 I've revived the corpse of the patch submitted in May, now that it's a much
 less strange time of the development cycle to consider it.
  http://archives.postgresql.org/message-id/4df048bd.8040...@2ndquadrant.com
 was the first attempt to move some extensions from contrib/ to a new
 src/extension/ directory.  I have fixed the main complaints from the last
 submit attempt, that I accidentally grabbed some old makesfiles and CVS
 junk.  The new attempt is attached, and is easiest to follow with a diff
 view that understands moved files, like github's:
  https://github.com/greg2ndQuadrant/postgres/compare/master...core-extensions

 You can also check out the docs changes done so far at
 http://www.highperfpostgres.com/docs/html/extensions.html  I reorganized the
 docs to break out what I decided to tentatively name Core Extensions into
 their own chapter.  They're no longer mixed in with the rest of the contrib
 modules, and I introduce them a bit differently.  I'm not completely happy
 with the wording there yet.  The use of both modules and extensions is
 probably worth eliminating, and maybe that continues on to doing that
 against the language I swiped from the contrib intro too.  There's also a
 lot of shared text at the end there, common wording from that and the
 contrib page about how to install and migrate these extensions.  Not sure
 how to refactor it out into another section cleanly though.

I'm all for removing all mention of modules.  It's ambiguous and
used inconsistently.

In my previous post in this area
(http://archives.postgresql.org/pgsql-hackers/2011-10/msg00781.php) I
suggested that bundling tools, libraries and extensions together in
the same category is confusing.  So those are still a problem for me.
And auto_explain appears in your new Core Extensions section, but
it's not an extension in the terminology PostgreSQL uses, so that's
also potentially confusing.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Measuring relation free space

2011-11-14 Thread Jaime Casanova
On Wed, Nov 9, 2011 at 7:58 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 Excerpts from Jaime Casanova's message of mar nov 08 18:12:25 -0300 2011:
 On Sun, Nov 6, 2011 at 5:38 AM, Magnus Hagander mag...@hagander.net wrote:
 
  Looks pretty useful.

 thanks for the review, attached is a new version of it

 Note that AFAIK you shouldn't update the 1.0 extension script ... you
 have to create a 1.1 version (or whatever), update the default version
 in the control file, and create an 1.0--1.1 script to upgrade from the
 original version to 1.1.


Good point... fixed that...
A question I have is: are we supposed to leave the old script (1.0) around?

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
diff --git a/contrib/pageinspect/Makefile b/contrib/pageinspect/Makefile
new file mode 100644
index 13ba6d3..63fab95
*** a/contrib/pageinspect/Makefile
--- b/contrib/pageinspect/Makefile
*** MODULE_big	= pageinspect
*** 4,10 
  OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
  
  EXTENSION = pageinspect
! DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
  
  ifdef USE_PGXS
  PG_CONFIG = pg_config
--- 4,12 
  OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
  
  EXTENSION = pageinspect
! DATA = pageinspect--1.0.sql pageinspect--1.1.sql \
!pageinspect--1.0--1.1.sql \
!pageinspect--unpackaged--1.0.sql
  
  ifdef USE_PGXS
  PG_CONFIG = pg_config
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
new file mode 100644
index dbb2158..8be21ed
*** a/contrib/pageinspect/btreefuncs.c
--- b/contrib/pageinspect/btreefuncs.c
***
*** 34,39 
--- 34,40 
  #include utils/builtins.h
  #include utils/rel.h
  
+ #include btreefuncs.h
  
  extern Datum bt_metap(PG_FUNCTION_ARGS);
  extern Datum bt_page_items(PG_FUNCTION_ARGS);
*** GetBTPageStatistics(BlockNumber blkno, B
*** 155,160 
--- 156,204 
  		stat->avg_item_size = 0;
  }
  
+ /*
+  * GetBTRelationFreeSpace
+  *
+  * Get the free space for a btree index.
+  * This is a helper function for relation_free_space()
+  *
+  */
+ float4
+ GetBTRelationFreeSpace(Relation rel)
+ {
+ 	BTPageStat stat;
+ 
+ 	Buffer		buffer;
+ 	BlockNumber blkno;
+ 	BlockNumber totalBlcksInRelation = RelationGetNumberOfBlocks(rel);
+ 	Size 		free_space = 0;
+ 	double		free_percent = 0;
+ 
+ BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);
+ 	
+ 	/* Skip page 0 because it is a metapage */
+ 	for (blkno = 1; blkno < totalBlcksInRelation; blkno++)
+ 	{
+ 		buffer = ReadBufferExtended(rel, MAIN_FORKNUM, blkno, RBM_NORMAL, bstrategy);
+ 		/* 
+ 		 * get the statistics of the indexes and use that info
+ 		 * to determine free space on the page
+ 		 */
+ 		GetBTPageStatistics(blkno, buffer, stat);
+ 		if (stat.type == 'd')
+ 			free_space += stat.page_size;
+ 		else
+ 			free_space += stat.free_size;		
+ 
+ 		ReleaseBuffer(buffer);
+ 	}
+ 
+ 	if (totalBlcksInRelation > 1)
+ 		free_percent = ((float4) free_space) / ((totalBlcksInRelation - 1) * BLCKSZ);
+ 	return free_percent;
+ }
+ 
+ 
  /* ---
   * bt_page()
   *
diff --git a/contrib/pageinspect/btreefuncs.h b/contrib/pageinspect/btreefuncs.h
new file mode 100644
index ...549f878
*** a/contrib/pageinspect/btreefuncs.h
--- b/contrib/pageinspect/btreefuncs.h
***
*** 0 
--- 1,5 
+ /*
+  * contrib/pageinspect/btreefuncs.h
+  */
+ 
+ float4 GetBTRelationFreeSpace(Relation);
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
new file mode 100644
index fa50655..e7436fb
*** a/contrib/pageinspect/heapfuncs.c
--- b/contrib/pageinspect/heapfuncs.c
***
*** 28,33 
--- 28,36 
  #include funcapi.h
  #include utils/builtins.h
  #include miscadmin.h
+ #include storage/bufmgr.h
+ 
+ #include heapfuncs.h
  
  Datum		heap_page_items(PG_FUNCTION_ARGS);
  
*** bits_to_text(bits8 *bits, int len)
*** 55,60 
--- 58,96 
  }
  
  
+ /*
+  * GetHeapRelationFreeSpace()
+  *
+  * Get the free space for a heap relation.
+  * This is a helper function for relation_free_space()
+  */
+ float4
+ GetHeapRelationFreeSpace(Relation rel)
+ {
+ 	Buffer  buffer;
+ 	Page		page;
+ 	BlockNumber blkno;
+ 	BlockNumber totalBlcksInRelation = RelationGetNumberOfBlocks(rel);
+ 	Size		free_space = 0;
+ 	double		free_percent = 0;
+ 
+ 	BufferAccessStrategy bstrategy = GetAccessStrategy(BAS_BULKREAD);
+ 
+ 	for (blkno = 0; blkno < totalBlcksInRelation; blkno++)
+ 	{
+ 		buffer = ReadBufferExtended(rel, MAIN_FORKNUM, blkno, RBM_NORMAL, bstrategy);
+ 		page   = BufferGetPage(buffer);
+ 		free_space += PageGetHeapFreeSpace(page);
+ 
+ 		ReleaseBuffer(buffer);
+ 	}
+ 
+ 	if (totalBlcksInRelation > 0)
+ 		free_percent = ((float4) free_space) / (totalBlcksInRelation * BLCKSZ);
+ 	return free_percent;

Re: [HACKERS] proposal: psql concise mode

2011-11-14 Thread Ross Reedstrom
On Mon, Nov 07, 2011 at 11:01:39PM -0500, Josh Kupershmidt wrote:
 On Mon, Nov 7, 2011 at 10:04 PM, Robert Haas robertmh...@gmail.com wrote:
 
   I can also see myself turning it on and then going
  - oh, wait, is that column not there, or did it just disappear because
  I'm in concise mode?
 
 Yeah, that would be a bit of a nuisance in some cases.

Well, that specific problem could be fixed with some format signalling,
such as changing the vertical divider, or perhaps leaving it doubled:

Given your test case:

 test=# \d+ foo
                           Table public.foo
  Column |  Type   | Modifiers | Storage | Stats target | Description
 --------+---------+-----------+---------+--------------+-------------
  a      | integer |           | plain   |              |
  b      | integer |           | plain   |              |
 Has OIDs: no
 
Concise output might look like (bikeshed argument: splat indicates
columns squashed out):
 
 test=# \d+ foo
       Table public.foo
  Column |  Type   # Storage #
 --------+---------+---------+
  a      | integer # plain   #
  b      | integer # plain   #
 Has OIDs: no

or:

  Column |  Type   || Storage |
 --------+---------++---------+
  a      | integer || plain   |
  b      | integer || plain   |

or even:
 
  Column |  Type   || Storage ||
 --------+---------++---------++
  a      | integer || plain   ||
  b      | integer || plain   ||

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
Connexions  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2011-11-14 Thread Ross Reedstrom
On Wed, Sep 28, 2011 at 11:47:51AM -0700, David Fetter wrote:
 On Wed, Sep 28, 2011 at 02:25:44PM -0400, Gurjeet Singh wrote:
  On Wed, Sep 28, 2011 at 1:51 PM, Kevin Grittner kevin.gritt...@wicourts.gov
   wrote:
  
   Alvaro Herrera alvhe...@commandprompt.com wrote:
  
See ON_ERROR_ROLLBACK
http://www.postgresql.org/docs/9.0/static/app-psql.html
  
   I had missed that.  Dang, this database product is rich with nice
   features!  :-)
  
  
  +1
  
  I would like it to be on/interactive by default, though.
 
 You can have it by putting it in your .psqlrc.
 
 If we were just starting out, I'd be all for changing the defaults,
 but we're not.  We'd break things unnecessarily if we changed this
 default.
 

This discussion died out with a plea for better documentation, and perhaps some
form of discoverability. I've scanned ahead and see no further discussion.
However, I'm wondering, what use-cases would be broken by setting the default
to 'interactive'? Running a non-interactive script by piping it to psql?
Reading the code, I see that case is covered: the definition of 'interactive'
requires that both stdin and stdout are a tty and that the source of commands is stdin.
Seems this functionality appeared in version 8.1.  Was there discussion re:
making it the default at that time?  I'm all for backward compatibility, but I'm
having trouble seeing what would break.

I see that Peter blogged about this from a different angle over a year ago
(http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html)
which drew a comment from Tom Lane that perhaps we need a better/different tool
for running scripts. That would argue the defaults for psql proper should favor
safe interactive use (autocommit off, anyone?) Peter mentioned the traditional
method unix shells use to handle this: different config files are read for
interactive vs. non-interactive startup. Seems we have that, just for the one
setting ON_ERROR_ROLLBACK.

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
Connexions  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Dimitri Fontaine
Thom Brown t...@linux.com writes:
 I'm all for removing all mention of modules.  It's ambiguous and
 used inconsistently.

The module is the shared library object.  It should be possible to use
that consistently.  And I have some plans on my TODO list about them
anyway, so making them disappear from the manual would not serve my
later plans :)

 And auto_explain appears in your new Core Extensions section, but
 it's not an extension in the terminology PostgreSQL uses, so that's
 also potentially confusing.

This is a related problem, we should have a terminology for contrib
tools such as pg_standby or pg_archivecleanup, for modules like the one
you talk about, that provide new features but nothing visible from SQL,
and extensions, that are all about SQL --- and if I can work on my plans
will get even more about SQL in a near future.

It's too late for me today to contribute nice ideas here though.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Detach/attach database

2011-11-14 Thread Bruce Momjian
Robert Haas wrote:
 But Tom's point about XIDs and LSNs seems like it kind of puts a
 bullet through the heart of the whole idea.  Now, before you can move
 the database (or table, or whatever) between clusters, you've got to
 rewrite all the data files to freeze XIDs and, I don't know, zero out
 LSNs, or something.  And if you're going to rewrite all the data, then
 you've pretty much lost all the benefit of doing this in the first
 place. In fact, it might end up being *slower* than a dump and
 restore; even an uncompressed dump will be smaller than the on-disk
 footprint of the original database, and many dumps compress quite
 well.

These are the same limitations pg_upgrade has, except it freezes the
system tables of the new cluster (very small) and then moves the clog
files from the old cluster to the new cluster to match the user files. 
No way to really merge two different cluster clog files.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Martijn van Oosterhout
On Mon, Nov 14, 2011 at 06:25:19PM -0300, Alvaro Herrera wrote:
  All 10 were like this:
   
warning: dereferencing type-punned pointer will break
  strict-aliasing rules
 
 Uhm, shouldn't we expect there to be one warning for each use of a Node
 using some specific node pointer type as well as something generic such
 as inside a ListCell etc?

Maybe they're safe? But in any case, given the use of Node, it may be an
idea to mark it with __attribute__((__may_alias__)); that should clear up
most of the problems in that area.

http://ohse.de/uwe/articles/gcc-attributes.html#type-may_alias
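
For illustration only (not a worked-out proposal), the attribute goes on
the type, and accesses made through that type are then exempt from the
strict-aliasing assumptions:

typedef int __attribute__((__may_alias__)) aliasing_int;

int
read_as_int(void *p)
{
	/*
	 * OK (and warning-free) even if *p was last written through an
	 * unrelated type, because an aliasing_int access is allowed to
	 * alias anything, much like a char access.
	 */
	return *(aliasing_int *) p;
}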

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer


signature.asc
Description: Digital signature


Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2011-11-14 Thread Ross Reedstrom
On Mon, Nov 14, 2011 at 02:45:04PM -0800, Will Leinweber wrote:
 My coworker Dan suggested that some people copy and paste scripts. However
 I feel that that is an orthogonal problem and if there is a very high rate
 of input psql should detect that and turn interactive off. And I
 still strongly feel that on_error_rollback=interactive should be the
 default.

Hmm, I think that falls under the "don't do that, then" use case. I've been
known to cp the occasional script - I guess the concern here would be not
seeing failed steps that scrolled off the terminal. (I set my scrollback to
basically infinity and actually use it, but then I'm strange that way :-) )

Trying to autodetect 'high rate of input' seems ... problematic. The code as is
does handle detecting interactivity at startup, and for the current command
- switching mid-stream ... catching repeated auto-rollbacks might be a
  possibility, then switching the transaction into 'failed' state. That should
catch most of the possible cases where an early set of steps failed, but
scrolled off, so there's no visible error at the end of paste.
 
 Until then, I've included this as a PSA at the start of any postgres talks
 I've given, because it's simply not widely known.

Good man. (That's a Postgres Service Announcement, then?)

Ross
-- 
Ross Reedstrom, Ph.D. reeds...@rice.edu
Systems Engineer  Admin, Research Scientistphone: 713-348-6166
Connexions  http://cnx.orgfax: 713-348-3665
Rice University MS-375, Houston, TX 77005
GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Andres Freund
On Monday, November 14, 2011 10:25:19 PM Alvaro Herrera wrote:
 Excerpts from Kevin Grittner's message of lun nov 14 17:30:50 -0300 2011:
  Tom Lane t...@sss.pgh.pa.us wrote:
   Kevin Grittner kevin.gritt...@wicourts.gov writes:
   Also, is there something I should do to deal with the warnings
   before this would be considered a meaningful test?
   
   Dunno ... where were the warnings exactly?
  
  All 10 were like this:
warning: dereferencing type-punned pointer will break

  strict-aliasing rules
 
 Uhm, shouldn't we expect there to be one warning for each use of a Node
 using some specific node pointer type as well as something generic such
 as inside a ListCell etc?
The case with Node's being accessed by SomethingNode is legal to my knowledge 
as the individual memory locations are accessed by variables of the same type.
That follows from the rules an aggregate or union type that includes one of 
the aforementioned types among its members (including, recursively, a member 
of a subaggregate or contained union) and a type compatible with the 
effective type of the object.

And the ListCell case is ok as well unless there is a wrong cast in code using 
the ListCell somewhere.

E.g. it's AFAICS safe to do something like:

void do_something_int(int);
void do_something_short(short);

int bla;
void* foo = &bla;
...
do_something_int(*(int*)foo);

but

do_something_short(*(short*)foo);
is illegal.

The compiler obviously can't be expected to prove all misuse of the void* 
pointers in e.g. ListCell's though...

Andres

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] strict aliasing (was: const correctness)

2011-11-14 Thread Andres Freund
On Monday, November 14, 2011 10:22:52 PM Tom Lane wrote:
 Kevin Grittner kevin.gritt...@wicourts.gov writes:
  Tom Lane t...@sss.pgh.pa.us wrote:
  Dunno ... where were the warnings exactly?
  
  From HEAD checkout of a few minutes ago I now see only 9:
 Hmm ... well, none of those look likely to be in performance-sensitive
 areas.  But I wonder just how good the trouble-detection code is these
 days.
No idea about how good it is but you can make the detection code more 
aggressive by -Wstrict-aliasing=1 (which will produce more false positives).

I don't think gcc will ever be able to catch all possible misusages. E.g. the List 
API is a case where it's basically impossible to catch everything (as gcc won't 
be able to figure out what the ListCell.data.ptr_value pointed to originally 
in the general case).

Andres

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: psql concise mode

2011-11-14 Thread Josh Kupershmidt
On Mon, Nov 14, 2011 at 5:16 PM, Ross Reedstrom reeds...@rice.edu wrote:
 Concise output might look like (bikeshed argument: splat indicates
 columns squashed out):

  test=# \d+ foo
                          Table public.foo
  Column |  Type   # Storage #
 --------+---------+---------+
  a      | integer # plain   #
  b      | integer # plain   #
  Has OIDs: no

 or:

  Column |  Type   || Storage |
 --------+---------++---------+
  a      | integer || plain   |
  b      | integer || plain   |

 or even:

  Column |  Type   || Storage ||
 --------+---------++---------++
  a      | integer || plain   ||
  b      | integer || plain   ||

Yeah, that's an idea. And/or the table footer could list the omitted columns.

Josh

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] psql + libedit command history truncation (was: psql history vs. dearmor (pgcrypto))

2011-11-14 Thread Josh Kupershmidt
On Mon, Nov 14, 2011 at 1:01 PM, Robert Haas robertmh...@gmail.com wrote:
 It looks like the problem is that the original has a blank line after
 the line that says Version: GnuPG v2.0.17 (GNU/Linux), but when you
 recall it from the query buffer, that extra blank line gets elided.

 The attached patch fixes it for me.  I'm a little worried it might
 cause a problem in some case I'm not thinking about, but I can't think
 of any such case right this minute.

(FYI, the patch does seem to fix the problem Tomas was complaining about.)

But it reminded me of another issue. With OS X 10.6.8, and otool -L
reporting that psql depends on libedit version 2.11.0, the up-arrow
recall of Tomas' query gets truncated around here:
  5I0/NTm+fFkB0McY9E2fAA [rest of the line missing]

i.e. it's keeping roughly 1021 characters. I was about to just chalk
that up to some limit in libedit's readline() implementation, but I
can see in my ~/.psql_history file that the entire query is logged.
Plus \e recalls the full query correctly. And if I up-arrow to recall
the query, then do anything to modify that recalled query (such as
typing a few characters at the end, then moving back or forth through
the history), then subsequent recalls of the query work fine.

So I'm not sure if this is a bug purely in libedit, or if there's
something amiss in psql. I saw a possibly-related complaint about
psql+libedit on Debian[1]. Anyone have a better guess about what's
going on?
Josh

[1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=603922

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why do we need two snapshots per query?

2011-11-14 Thread Tatsuo Ishii
 On the other hand, if our goal in life is to promote the extended
 query protocol over the simple query protocol at all costs, then I
 agree that we shouldn't optimize the simple query protocol in any way.
  Perhaps we should even post a big notice on it that says this
 facility is deprecated and will be removed in a future version of
 PostgreSQL.  But why should that be our goal?  Presumably our goal is
 to put forward the best technology, not to artificially pump up one
 alternative at the expense of some other one.  If the simple protocol
 is faster in certain use cases than the extended protocol, then let
 people use it.  I wouldn't have noticed this optimization opportunity
 in the first place but for the fact that psql seems to use the simple
 protocol - why does it do that, if the extended protocol is
 universally better?  I suspect that, as with many other things where
 we support multiple alternatives, the best alternative depends on the
 situation, and we should let users pick depending on their use case.

+1. I don't see any justification for refusing to enhance the simple-protocol
case just because the extended protocol performs relatively poorly.

 At any rate, if you're concerned about the relative efficiency of the
 simple query protocol versus the extended protocol, it seems that the
 horse has already left the barn.  I just did a quick 32-client pgbench
 -S test on a 32-core box.  This is just a thirty-second run, but
 that's enough to make the point: if you're not using prepared queries,
 using the extended query protocol incurs a significant penalty - more
 than 15% on this test:
 
 [simple] tps = 246808.409932 (including connections establishing)
 [extended] tps = 205609.438247 (including connections establishing)
 [prepared] tps = 338150.881389 (including connections establishing)

Quite impressive result.
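
For readers less familiar with what the three pgbench -M modes exercise on the
client side, a minimal libpq sketch follows (not code from the thread; the
connection string, query, and statement name are placeholders):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    const char *params[1] = {"1"};
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* simple protocol: a single Query message, parsed and planned each time */
    res = PQexec(conn, "SELECT 1");
    PQclear(res);

    /* extended protocol, unnamed statement: Parse/Bind/Execute on every call */
    res = PQexecParams(conn, "SELECT $1::int", 1, NULL, params, NULL, NULL, 0);
    PQclear(res);

    /* prepared: Parse once, then only Bind/Execute on each execution */
    res = PQprepare(conn, "s1", "SELECT $1::int", 1, NULL);
    PQclear(res);
    res = PQexecPrepared(conn, "s1", 1, params, NULL, NULL, 0);
    PQclear(res);

    PQfinish(conn);
    return 0;
}
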
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Josh Berkus

 This is a related problem: we should have terminology for contrib tools
 such as pg_standby or pg_archivecleanup, for modules like the one you
 talk about that provide new features but nothing visible from SQL, and
 for extensions, which are all about SQL --- and if I can carry out my
 plans, they will get even more about SQL in the near future.

I see nothing wrong with Tools and Extensions.  I'm not sure that
having one catch-all name for them serves the user.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Josh Berkus
Greg,

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions.  I would regard the
following as stable and of general utility:

btree_gin
btree_gist
citext
dblink
file_fdw
fuzzystrmatch
hstore
intarray
isn
ltree
pgcrypto
pg_trgm
unaccent
uuid-ossp

These should, in my opinion, all be Core Extensions.  I'd go further to
say that if something is materially an extension (as opposed to a tool
or a code example), and we're shipping it with the core distribution, it
either ought to be a core extension, or it should be kicked out to PGXN.

Am I completely misunderstanding what you're trying to accomplish here?

... also, why is there still a tsearch2 contrib module around at all?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Thom Brown
On 15 November 2011 00:56, Josh Berkus j...@agliodbs.com wrote:
 Greg,

 So I'm a bit unclear on why most of the optional data types were
 excluded from your list of Core Extensions.  I would regard the
 following as stable and of general utility:

 btree_gin
 btree_gist
 citext
 dblink
 file_fdw
 fuzzystrmatch
 hstore
 intarray
 isn
 ltree
 pgcrypto
 pg_trgm
 unaccent
 uuid-ossp

Greg clarified this in the core extensions page text:

"These core extensions supply useful features in areas such as
database diagnostics and performance monitoring."

None of those others perform such a role.  Instead they add additional
functionality intended to be utilised as part of general data usage,
adding new types, operators, query functions etc.  Maybe the term
core is inappropriate.  Instead we might wish to refer to them as
utility extensions or something like that, although that may be just
as vague.

 ... also, why is there still a tsearch2 contrib module around at all?

Backwards compatibility.  No-one will use it except if they're coming
from an older version.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Peter Geoghegan
On 15 November 2011 00:56, Josh Berkus j...@agliodbs.com wrote:
 So I'm a bit unclear on why most of the optional data types were
 excluded from your list of Core Extensions.  I would regard the
 following as stable and of general utility:

 isn

I consider contrib/isn to be quite broken. It hard codes ISBN prefixes
for the purposes of sanitising ISBNs, even though their assignment is
actually controlled by a decentralised body of regional authorities.
I'd vote for kicking it out of contrib.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Greg Smith

On 11/14/2011 07:56 PM, Josh Berkus wrote:

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions.


I was aiming for the extensions that seemed uncontroversial for a first 
pass here.  One of the tests I applied was do people sometimes need 
this module after going into production with their application?  The 
very specific problem I was most concerned about eliminating was people 
discovering they needed an extension to troubleshoot performance or 
corruption issues, only to discover it wasn't available--because they 
hadn't installed the postgresql-contrib package.  New package 
installation can be a giant pain to get onto a production system in some 
places, if it wasn't there during QA etc.


All of the data type extensions fail that test.  If you need one of 
those, you would have discovered that on your development server, and 
made sure the contrib package was available on production too.  There 
very well may be some types that should be rolled into the core 
extensions list, but I didn't want arguments over that to block moving 
forward with the set I did suggest.  We can always move more of them 
later, if this general approach is accepted.  It only takes about 5 
minutes per extension to move them from contrib to src/extension, once 
the new directory tree and doc section is there.  But I didn't want to 
do the work of moving another 15 of them if the whole idea was going to 
get shot down.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Online base backup from the hot-standby

2011-11-14 Thread Steve Singer
On 11-10-31 12:11 AM, Jun Ishiduka wrote:

 Agreed. I'll extract FPW stuff from the patch that I submitted, and revise it
 as the infrastructure patch.

 The changes of pg_start_backup() etc that Ishiduka-san did are also
 a server-side infrastructure. I will extract them as another infrastructure 
 one.

 Ishiduka-san, if you have time, feel free to try the above, barring 
 objection.

 Done.
 Changed the name of the patch.

 Modifications
  So I changed it to be positioned as infrastructure:
* Removed the documentation.
* Changed pg_start/stop_backup() to raise an error when run on the standby.



Here is my stab at reviewing this version of the patch.

Submission
---
The purpose of this version of the patch is to provide some
infrastructure needed for backups from the slave without having to solve
some of the usability issues raised in previous versions of the patch.

This patch applied fine to earlier versions of head, but it doesn't today.
Simon moved some of the code touched by this patch as part of the xlog
refactoring. Please post an updated/rebased version of the patch.


I think the purpose of this patch is to provide:

a) The code changes to record changes to the master's FPW state in WAL.
b) Tracking of the FPW state while in recovery mode.

This version of the patch is NOT intended to allow SQL calls to
pg_start_backup() on slaves to work. This patch lays the infrastructure
for another patch (which I haven't seen) to allow pg_basebackup to do a
base backup from a slave assuming fpw=on has been set on the master (my
understanding of this patch is that it puts into place all of the pieces
required for the pg_basebackup patch to detect if fpw!=on and abort).


The consensus upthread was to get this infrastructure in and figure out
a safe+usable way of doing a slave backup without pg_basebackup later.

The patch seems to do what I expect of it.

I don't see any issues with most of the code changes in this patch.
However I admit that even after reviewing many versions of this patch I
still am not familiar enough with the recovery code to comment on a lot
of the details.

One thing I did see:

In pg_ctl.c

! if (stat(recovery_file, &statbuf) != 0)
!     print_msg(_("WARNING: online backup mode is active\n"
!                 "Shutdown will not complete until pg_stop_backup() is called.\n\n"));
! else
!     print_msg(_("WARNING: online backup mode is active if you can connect as a superuser to server\n"
!                 "If so, shutdown will not complete until pg_stop_backup() is called.\n\n"));

I am having difficulty understanding what this error message is trying
to tell me. I think it is telling me (based on the code comments) that
if I can't connect to the server because the server is not yet accepting
connections then I shouldn't worry about anything. However if the server
is accepting connections then I need to login and call pg_stop_backup().

Maybe:
"WARNING: online backup mode is active. If your server is accepting
connections then you must connect as superuser and run pg_stop_backup()
before shutdown will complete."

I will wait on attempting to test the patch until you have sent a
version that applies against the current HEAD.


 Regards.


 
 Jun Ishizuka
 NTT Software Corporation
 TEL:045-317-7018
 E-Mail: ishizuka@po.ntts.co.jp
 






Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Robert Haas
On Mon, Nov 14, 2011 at 8:44 PM, Greg Smith g...@2ndquadrant.com wrote:
 On 11/14/2011 07:56 PM, Josh Berkus wrote:

 So I'm a bit unclear on why most of the optional data types were
 excluded from your list of Core Extensions.

 I was aiming for the extensions that seemed uncontroversial for a first pass
 here.  One of the tests I applied was do people sometimes need this module
 after going into production with their application?  The very specific
 problem I was most concerned about eliminating was people discovering they
 needed an extension to troubleshoot performance or corruption issues, only
 to discover it wasn't available--because they hadn't installed the
 postgresql-contrib package.  New package installation can be a giant pain to
 get onto a production system in some places, if it wasn't there during QA
 etc.

 All of the data type extensions fail that test.  If you need one of those,
 you would have discovered that on your development server, and made sure the
 contrib package was available on production too.  There very well may be
 some types that should be rolled into the core extensions list, but I didn't
 want arguments over that to block moving forward with the set I did suggest.
  We can always move more of them later, if this general approach is
 accepted.  It only takes about 5 minutes per extension to move them from
 contrib to src/extension, once the new directory tree and doc section is
 there.  But I didn't want to do the work of moving another 15 of them if the
 whole idea was going to get shot down

I continue to think that we should be trying to sort these by subject
matter.  The term core extensions doesn't convey that these are
server management and debugging tools, hence Josh's confusion.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Core Extensions relocation

2011-11-14 Thread Greg Smith

On 11/14/2011 10:09 PM, Robert Haas wrote:

I continue to think that we should be trying to sort these by subject
matter.  The term core extensions doesn't convey that these are
server management and debugging tools, hence Josh's confusion.
   


I'm not attached to the name, which I just pulled out of the air for the 
documentation.  Could just as easily call them "built-in modules" or 
"extensions".  If the objection is that "extensions" isn't technically 
correct for auto-explain, you might call them "core add-ons" instead.  My 
thinking was that the one exception didn't make it worth the trouble to 
introduce a new term altogether here.  There are already too many terms 
used for talking about this sort of thing; the confusion from using a 
word other than "extensions" seemed larger than the confusion sown by 
auto-explain not fitting perfectly.


The distinction I care about here is primarily a packaging one.  These 
are server additions that people should be able to count on having 
available, whereas right now they may or may not be installed depending 
on if contrib was added.  Everything I'm touching requires our RPM and 
Debian packagers (at least) make a packaging change, too.  I can't 
justify why that's worth doing for any of the other extensions, which is 
one reason I don't try to tackle them.


The type of finer sorting you and Thom are suggesting seems like it's 
mainly a documentation change to me.  I'm indifferent to the idea; no 
plans to either work on it or object to it.  The docs could be made 
easier to follow here without any change to the directory tree, and 
trying to push out a larger packaging change has downsides.  Useful 
reminder reading here is 
http://wiki.postgresql.org/wiki/PgCon_2011_Developer_Meeting#Moving_Contrib_Around  
To quote from there, "Users hate having loads and loads of packages. We 
do need to be careful not to oversplit it."  There are some useful notes 
about dependency issues there too.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] patch : Allow toast tables to be moved to a different tablespace

2011-11-14 Thread Jaime Casanova
On Fri, Oct 7, 2011 at 10:10 AM, Julien Tachoires jul...@gmail.com wrote:
 Hi,

 Here's a patch to allow TOAST tables to be moved to a different tablespace.
 This item has been picked up from the TODO list.
 Main idea is to consider that a TOAST table can have its own tablespace.


Hi,

This patch doesn't apply cleanly to head now... can you send a new
version against head?

About the patch itself: I don't like the fact that the normal case now
needs to include the word TABLE. IMHO, it should be optional, and if
omitted, TABLE should be assumed.

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] feature request: auto savepoint for interactive psql when in transaction.

2011-11-14 Thread Will Leinweber
My coworker Dan suggested that some people copy and paste scripts. However,
I feel that is an orthogonal problem: if there is a very high rate of
input, psql should detect that and turn interactive mode off. And I
still strongly feel that on_error_rollback=interactive should be the
default.

Until then, I've included this as a PSA at the start of any postgres talks
I've given, because it's simply not widely known.
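
For anyone who wants this behavior today, a single line in ~/.psqlrc is
enough. With the value "interactive", psql applies the implicit-savepoint
behavior only when commands are read interactively, and leaves piped-in
scripts untouched:

\set ON_ERROR_ROLLBACK interactive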

On Mon, Nov 14, 2011 at 2:19 PM, Ross Reedstrom reeds...@rice.edu wrote:

 On Wed, Sep 28, 2011 at 11:47:51AM -0700, David Fetter wrote:
  On Wed, Sep 28, 2011 at 02:25:44PM -0400, Gurjeet Singh wrote:
   On Wed, Sep 28, 2011 at 1:51 PM, Kevin Grittner 
 kevin.gritt...@wicourts.gov
wrote:
  
Alvaro Herrera alvhe...@commandprompt.com wrote:
   
 See ON_ERROR_ROLLBACK
 http://www.postgresql.org/docs/9.0/static/app-psql.html
   
I had missed that.  Dang, this database product is rich with nice
features!  :-)
   
  
   +1
  
   I would like it to be on/interactive by default, though.
 
  You can have it by putting it in your .psqlrc.
 
  If we were just starting out, I'd be all for changing the defaults,
  but we're not.  We'd break things unnecessarily if we changed this
  default.
 

 This discussion died out with a plea for better documentation, and perhaps
 some
 form of discoverability. I've scanned ahead and see no further discussion.
 However, I'm wondering, what use-cases would be broken by setting the
 default
 to 'interactive'? Running a non-interactive script by piping it to psql?
 Reading the code, I see that case is covered: the definition of
 'interactive'
 includes both stdin and stdout are a tty, and the source of commands is
 stdin.
 Seems this functionality appeared in version 8.1.  Was there discussion re:
 making it the default at that time?  I'm all for backward compatibility,
 but I'm
 having trouble seeing what would break.

 I see that Peter blogged about this from a different angle over a year ago
 (
 http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html
 )
 which drew a comment from Tom Lane that perhaps we need a better/different
 tool
 for running scripts. That would argue the defaults for psql proper should
 favor
 safe interactive use (autocommit off, anyone?) Peter mentioned the
 traditional
 method unix shells use to handle this: different config files are read for
 interactive vs. non-interactive startup. Seems we have that, just for the
 one
 setting ON_ERROR_ROLLBACK.

 Ross
 --
 Ross Reedstrom, Ph.D. reeds...@rice.edu
 Systems Engineer  Admin, Research Scientistphone: 713-348-6166
 Connexions  http://cnx.orgfax: 713-348-3665
 Rice University MS-375, Houston, TX 77005
 GPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E  F888 D3AE 810E 88F0 BEDE