[HACKERS] Re: [COMMITTERS] pgsql: Create libpgcommon, and move pg_malloc et al to it

2013-02-14 Thread Heikki Linnakangas

On 12.02.2013 16:55, Alvaro Herrera wrote:

Create libpgcommon, and move pg_malloc et al to it

libpgcommon is a new static library to allow sharing code among the
various frontend programs and backend; this lets us eliminate duplicate
implementations of common routines.  We avoid libpgport, because that's
intended as a place for porting issues; per discussion, it seems better
to keep them separate.

The first use case, and the only one implemented by this patch, is pg_malloc
and friends, which many frontend programs were already using.

At the same time, we can use this to provide palloc emulation functions
for the frontend; this way, some palloc-using files in the backend can
also be used by the frontend cleanly.  To do this, we change palloc() in
the backend to be a function instead of a macro on top of
MemoryContextAlloc().  This was previously believed to cause loss of
performance, but this implementation has been tweaked by Tom and Andres
so that on modern compilers it provides a slight improvement over the
previous one.


The new header file, fe_memutils.h, is not installed anywhere by make 
install. That makes #include "postgres_fe.h" fail in any program 
compiled out-of-tree with pgxs.


Where should it be installed? postgres_fe.h and port.h are installed to 
include/internal, so that would seem logical. I wonder what the 
"internal" actually means there, though. If we want to encourage 
frontend programs to use these, "internal" doesn't sound right.


- Heikki




Re: [HACKERS] FDW for PostgreSQL

2013-02-14 Thread Shigeru Hanada
On Thu, Feb 14, 2013 at 1:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 * The code seems to always use GetOuterUserId() to select the foreign
 user mapping to use.  This seems entirely wrong.  For instance it will
 do the wrong thing inside a SECURITY DEFINER function, where surely the
 relevant privileges should be those of the function owner, not the
 session user.  I would also argue that if Alice has access to a foreign
 table owned by Bob, and Alice creates a view that selects from that
 table and grants select privilege on the view to Charlie, then when
 Charlie selects from the view the user mapping to use ought to be
 Alice's.  (If anyone thinks differently about that, speak up!)

Agreed that OuterUserId is wrong for user mapping.  Also agreed that
Charlie doesn't need his own mapping for the server, if he is
accessing via a valid view.

 To implement that for queries, we need code similar to what
ExecCheckRTEPerms does, ie rte->checkAsUser ? rte->checkAsUser :
 GetUserId().  It's a bit of a pain to get hold of the RTE from
 postgresGetForeignRelSize or postgresBeginForeignScan, but it's doable.
 (Should we modify the APIs for these functions to make that easier?)

This issue seems not specific to postgres_fdw.  Currently
GetUserMapping takes userid and server's oid as parameters, but we
would be able to hide the complex rule by replacing userid with the RTE or
something.
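
A rough sketch of that rule as it might look from inside the FDW (the
helper is hypothetical; how to get at the RTE cleanly is exactly the open
API question):

    #include "postgres.h"
    #include "foreign/foreign.h"    /* GetUserMapping */
    #include "miscadmin.h"          /* GetUserId */
    #include "nodes/parsenodes.h"   /* RangeTblEntry */

    /* Hypothetical helper: pick the user whose mapping applies, using
     * the same rule as ExecCheckRTEPerms -- the view owner when
     * checkAsUser is set, otherwise the current user. */
    static UserMapping *
    mapping_for_rte(RangeTblEntry *rte, Oid serverid)
    {
        Oid userid = rte->checkAsUser ? rte->checkAsUser : GetUserId();

        return GetUserMapping(userid, serverid);
    }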

 I think possibly postgresAcquireSampleRowsFunc should use the foreign
 table's owner regardless of the current user ID - if the user has
 permission to run ANALYZE then we don't really want the command to
 succeed or fail depending on exactly who the user is.  That's perhaps
 debatable, anybody have another theory?

+1.  This allows a non-owner to ANALYZE foreign tables without having a
per-user mapping, though a public mapping also solves this issue.

At the implementation level, postgresAcquireSampleRowsFunc has the Relation
of the target table, so we can get the owner's oid by reading
rd_rel->relowner.

 * AFAICT, the patch expects to use a single connection for all
 operations initiated under one foreign server + user mapping pair.
 I don't think this can possibly be workable.  For instance, we don't
 really want postgresIterateForeignScan executing the entire remote query
 to completion and stashing the results locally -- what if that's many
 megabytes?

It uses libpq's single-row mode and a Tuplestorestate to keep the result
locally, so it uses limited memory at a time.  If the result is larger
than work_mem, overflowed tuples are written to a temp file.  I think
this is similar to materializing query results.
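
For reference, the single-row-mode pattern being described looks roughly
like this (a sketch against the documented libpq API; error handling
trimmed):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Sketch: stream a large result one row at a time instead of
     * materializing it all in one PGresult. */
    static void
    stream_query(PGconn *conn, const char *sql)
    {
        PGresult   *res;

        if (!PQsendQuery(conn, sql))
            return;
        PQsetSingleRowMode(conn);   /* must follow PQsendQuery immediately */

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            {
                /* each row would go into the tuplestore here, spilling
                 * to a temp file once work_mem is exceeded */
                printf("%s\n", PQgetvalue(res, 0, 0));
            }
            PQclear(res);           /* final result is PGRES_TUPLES_OK */
        }
    }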

 It ought to be pulling the rows back a few at a time, and
 that's not going to work well if multiple scans are sharing the same
 connection.  (We might be able to dodge that by declaring a cursor
 for each scan, but I'm not convinced that such a solution will scale up
 to writable foreign tables, nested queries, subtransactions, etc.)

Indeed the FDW used a CURSOR in older versions.  Sorry, I have not
looked at the writable foreign table patch closely yet, but would it
require (possibly multiple) remote update query executions during
scanning?
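
In libpq terms, the cursor approach would look something like this per
scan (a sketch; the cursor and table names are illustrative, and an open
transaction is assumed since cursors live inside one):

    #include <libpq-fe.h>

    /* Sketch: one cursor per scan lets several scans share a single
     * connection, each pulling back a modest batch of rows at a time. */
    static void
    cursor_scan(PGconn *conn)
    {
        PGresult   *res;

        PQclear(PQexec(conn, "DECLARE c1 CURSOR FOR SELECT * FROM remote_tbl"));

        for (;;)
        {
            res = PQexec(conn, "FETCH 100 FROM c1");
            if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
            {
                PQclear(res);
                break;
            }
            /* ... hand this batch of up to 100 rows to the executor ... */
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE c1"));
    }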

 I think we'd better be prepared to allow multiple similar connections.
 The main reason I'm bringing this up now is that it breaks the
 assumption embodied in postgres_fdw_get_connections() and
 postgres_fdw_disconnect() that foreign server + user mapping can
 constitute a unique key for identifying connections.  However ...

The main reason to use a single connection is to keep multiple results
retrieved from the same server in a local query consistent.  A shared
snapshot might be helpful for this consistency issue, but I've not
tried that with FDWs.

 * I find postgres_fdw_get_connections() and postgres_fdw_disconnect()
 to be a bad idea altogether.  These connections ought to be a hidden
 implementation matter, not something that the user has a view of, much
 less control over.  Aside from the previous issue, I believe it's a
 trivial matter to crash the patch as it now stands by applying
 postgres_fdw_disconnect() to a connection that's in active use.  I can
 see the potential value in being able to shut down connections when a
 session has stopped using them, but this is a pretty badly-designed
 approach to that.  I suggest that we just drop these functions for now
 and revisit that problem later.  (One idea is some sort of GUC setting
 to control how many connections can be held open speculatively for
 future use.)

Actually these functions follow dblink's similar functions, but
having them was a bad decision because an FDW can't connect explicitly.
As you mentioned, postgres_fdw_disconnect is provided for clean
shutdown on the remote side (I needed it in my testing).

I agree with separating the issue from the FDW core.

 * deparse.c contains a depressingly large amount of duplication of logic
 from ruleutils.c, and can only need more as we expand the set of
 constructs that can be pushed to the remote end.  This 

Re: [HACKERS] FDW for PostgreSQL

2013-02-14 Thread Albe Laurenz
Shigeru Hanada wrote:
 Tom Lane wrote:
 It ought to be pulling the rows back a few at a time, and
 that's not going to work well if multiple scans are sharing the same
 connection.  (We might be able to dodge that by declaring a cursor
 for each scan, but I'm not convinced that such a solution will scale up
 to writable foreign tables, nested queries, subtransactions, etc.)
 
 Indeed the FDW used a CURSOR in older versions.  Sorry, I have not
 looked at the writable foreign table patch closely yet, but would it
 require (possibly multiple) remote update query executions during
 scanning?

It would for example call ExecForeignUpdate after each call to
IterateForeignScan that produces a row that meets the UPDATE
condition.
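
In pseudo-C, that flow is roughly the following (control flow only, not
actual executor source; the condition check stands in for the executor's
qual evaluation):

    /* Pseudo-code sketch of per-row update during a foreign scan. */
    for (;;)
    {
        slot = IterateForeignScan(node);        /* pull the next remote row */
        if (TupIsNull(slot))
            break;                              /* scan exhausted */
        if (row_meets_update_condition(slot))   /* stand-in for qual check */
            ExecForeignUpdate(estate, resultRelInfo, slot, planSlot);
    }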

Yours,
Laurenz Albe



Re: [HACKERS] FDW for PostgreSQL

2013-02-14 Thread Shigeru Hanada
On Thu, Feb 14, 2013 at 6:45 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Shigeru Hanada wrote:
 Tom Lane wrote:
 It ought to be pulling the rows back a few at a time, and
 that's not going to work well if multiple scans are sharing the same
 connection.  (We might be able to dodge that by declaring a cursor
 for each scan, but I'm not convinced that such a solution will scale up
 to writable foreign tables, nested queries, subtransactions, etc.)

 Indeed the FDW used a CURSOR in older versions.  Sorry, I have not
 looked at the writable foreign table patch closely yet, but would it
 require (possibly multiple) remote update query executions during
 scanning?

 It would for example call ExecForeignUpdate after each call to
 IterateForeignScan that produces a row that meets the UPDATE
 condition.

Thanks!  It seems that ExecForeignUpdate needs another connection for
the update query, or we need to retrieve all results at the first Iterate
call to prepare for possible subsequent update queries.

-- 
Shigeru HANADA




[HACKERS] libpq test suite

2013-02-14 Thread Manlio Perillo

Hi.

Sorry for the question, but where can I find the libpq test suite?
I cannot find it in the PostgreSQL sources; it seems that there are
only some examples, in src/test/examples.


I'm planning to add some new features to libpq:

* make PQsendPrepare send a Describe Portal protocol message
* add support for setting per column binary result format
* add direct support for portals

(I think I need these for a project I'm working on [1].)

How can I check if the new code does not break existing usage?


[1] A new Python PostgreSQL driver, implemented following
http://wiki.postgresql.org/wiki/Driver_development
and with many optimizations (compared to psycopg2) enabled by the
use of the extended query protocol


Thanks, Manlio Perillo




Re: [HACKERS] libpq test suite

2013-02-14 Thread Albe Laurenz
Manlio Perillo wrote:
 Sorry for the question, but where can I find the libpq test suite?
 I cannot find it in the PostgreSQL sources; it seems that there are
 only some examples, in src/test/examples.

The regression tests are in src/interfaces/libpq/test
and currently contain only URL parsing tests.

 I'm planning to add some new features to libpq:
 
 * make PQsendPrepare send a Describe Portal protocol message
 * add support for setting per column binary result format

I suggested exactly that here:
http://www.postgresql.org/message-id/d960cb61b694cf459dcfb4b0128514c208a4e...@exadv11.host.magwien.gv.at
and met resistance:
- one can use libpqtypes
- I couldn't find a convincing use case
- it clutters up the API

 * add direct support for portals
 
 (I think I need these for a project I'm working on [1]),
 
 How can I check if the new code does not break existing usage?
 
 
 [1] A new Python PostgreSQL driver, implemented following
 http://wiki.postgresql.org/wiki/Driver_development
 and with many optimizations (compared to psycopg2) enabled by the
 use of the extended query protocol

I think that you'll need to explain in more detail why
your proposed additions would be necessary for your project.
Especially since many good drivers have been written against
libpq as it is.

Yours,
Laurenz Albe



Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 Native implementation of \vt is terrible simple - and it is generic
 and usual task

I'm still a -1 on this.  We don't have anything like this today and I
don't think it's a good idea to try adding these kinds of
shortcuts-for-generic-SQL to psql's set of \ commands.  Being simple to
implement doesn't make it a good idea.

Thanks,

Stephen




[RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Manlio Perillo

On 14/02/2013 14:06, Albe Laurenz wrote:
 Manlio Perillo wrote:
 Sorry for the question, but where can I find the libpq test suite?
 I cannot find it in the PostgreSQL sources; it seems that there are
 only some examples, in src/test/examples.
 
 The regression tests are in src/interfaces/libpq/test
 and currently contain only URL parsing tests.
 

Ok, thanks.

Since I'm not sure if I should add a new test here, I'll use the test
suite of my project, since it contains an (almost) 1:1 wrapper around libpq.

 I'm planning to add some new features to libpq:

 * make PQsendPrepare send a Describe Portal protocol message
 * add support for setting per column binary result format
 
 I suggested exactly that here:
 http://www.postgresql.org/message-id/d960cb61b694cf459dcfb4b0128514c208a4e...@exadv11.host.magwien.gv.at
 and met resistance:
 - one can use libpqtypes
 - I couldn't find a convincing use case
 - it clutters up the API
 

For my Python DBAPI2 PostgreSQL driver I plan the following optimizations:

1) always use the PQsendQueryParams functions.

   This will avoid having to escape parameters, as is done in
   psycopg2
   (IMHO it still uses the simple query protocol for compatibility
   purposes)

2) when the driver detects a Python string is being sent to the
   database, use binary format.

   As a special case, this will avoid having to use PQescapeByteaConn
   when sending binary strings (e.g. byte strings in Python 3.x)

3) enable use of prepared statements, but only if the user requests it,
   using the setinputsizes function (used to set the Oids of the
   parameters)

4) when using a prepared statement, check the Oids of the result tuple.

   In order to make this efficient, I proposed a patch to send a
   Describe Portal message in the PQsendPrepare function.

   When the driver detects that one of the result columns is a string
   type, set the result format for that column to binary.

   As a special case, this will avoid having to use PQunescapeBytea
   when receiving bytea data.

   This is currently impossible using the libpq API (see the sketch
   after this list).

5) when returning the result set of a query, after a call to
   cursor.fetchall(), do not convert all the data to Python objects.

   This will be done only on request.

   This should optimize memory usage, as reported in:
   http://wiki.postgresql.org/wiki/Python_PostgreSQL_Driver_TODO

6) make available the use of PQsetSingleRowMode, to optimize large
   result sets (as an option to the connection.cursor method)

7) as a generalization of PQsetSingleRowMode, expose in the libpq API
   some of the protocol's internal portal API.

   One possible idea is to add a PQsetRowSize function, which would set
   the size of the result set to be used in the Execute protocol
   message (currently libpq always sets it to 0, to get the entire
   result set, and it does not support the Portal Suspended message).

   This will avoid having to use named cursors, as is done in psycopg.

   I'll try to make a patch to check whether this is feasible and can
   be done efficiently, and whether the new API has a minimal impact on
   the existing API.

Note that I will have to code these features in order to check that they
work as I expect.
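
For context on point 4: today's libpq offers only an all-or-nothing
result format, e.g. (a sketch; the table and column names are
illustrative):

    #include <libpq-fe.h>

    /* Sketch: with PQexecParams, the final resultFormat argument applies
     * to every result column -- 0 = text, 1 = binary.  A per-column
     * choice, as proposed above, is not possible through this interface
     * today. */
    static PGresult *
    fetch_binary(PGconn *conn, const char *id)
    {
        const char *paramValues[1] = {id};

        return PQexecParams(conn,
                            "SELECT id, payload FROM t WHERE id = $1",
                            1,          /* one parameter */
                            NULL,       /* parameter types unspecified */
                            paramValues,
                            NULL, NULL, /* text-format parameters */
                            1);         /* ALL result columns binary */
    }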


 [...]

 [1] A new Python PostgreSQL driver, implemented following
 http://wiki.postgresql.org/wiki/Driver_development
 and with many optimizations (compared to psycopg2) enabled by the
 use of the extended query protocol
 
 I think that you'll need to explain in more detail why
 your proposed additions would be necessary for your project.
 Especially since many good drivers have been written against
 libpq as it is.
 
 Yours,
 Laurenz Albe
 

Thanks, Manlio Perillo





Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Merlin Moncure
On Thu, Feb 14, 2013 at 1:25 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 A few years ago I proposed an implementation of macros - and I wrote a
 prototype - an enhanced psql

 http://okbob.blogspot.cz/search?q=epsql

 but now I don't think enhancing psql in this direction is a good way.
 An enhanced console needs to be created from scratch - it should be based
 on some interpreted language - Lua or JavaScript - probably it could be
 better integrated with a modern UI, and do better autocomplete - and this
 design is out of my time possibilities and out of my targets.

The main argument against psql macros, I guess, is that if you want
bash-like features, well, use bash.

For my part I think that whenever psql enhancements are brought up the
first thing to consider is 'should this functionality exist on the
server side'.   With proper stored procedures we get to write our own
stuff like:

CALL top10('foo');

which seems more general and just as terse.  So I think implementing
call syntax is probably topping my 'wanted feature' list.

merlin




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Stephen Frost
* Merlin Moncure (mmonc...@gmail.com) wrote:
 CALL top10('foo');
 
 which seems more general and just as terse.  So I think implementing
 call syntax is probably topping my 'wanted feature' list.

We have that, it's called 'SELECT' and it's only 2 more characters..

Thanks,

Stephen




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Pavel Stehule
2013/2/14 Merlin Moncure mmonc...@gmail.com:
 On Thu, Feb 14, 2013 at 1:25 AM, Pavel Stehule pavel.steh...@gmail.com 
 wrote:
 A few years ago I proposed an implementation of macros - and I wrote a
 prototype - an enhanced psql

 http://okbob.blogspot.cz/search?q=epsql

 but now I don't think enhancing psql in this direction is a good way.
 An enhanced console needs to be created from scratch - it should be based
 on some interpreted language - Lua or JavaScript - probably it could be
 better integrated with a modern UI, and do better autocomplete - and this
 design is out of my time possibilities and out of my targets.

 The main argument against psql macros, I guess, is that if you want
 bash-like features, well, use bash.

 For my part I think that whenever psql enhancements are brought up the
 first thing to consider is 'should this functionality exist on the
 server side'.   With proper stored procedures we get to write our own
 stuff like:

 CALL top10('foo');

 which seems more general and just as terse.  So I think implementing
 call syntax is probably topping my 'wanted feature' list.

My proposal is not meant to replace stored procedures.

The core of my proposal was using autocomplete for one frequent task.

 merlin




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Andres Freund
On 2013-02-14 09:33:51 -0500, Stephen Frost wrote:
 * Merlin Moncure (mmonc...@gmail.com) wrote:
  CALL top10('foo');
  
  which seems more general and just as terse.  So I think implementing
  call syntax is probably topping my 'wanted feature' list.
 
 We have that, it's called 'SELECT' and it's only 2 more characters..

Well, I guess Merlin wanted actual stored procedures with full control
over transactions... Not exactly a small task.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Pavel Stehule
2013/2/14 Stephen Frost sfr...@snowman.net:
 * Merlin Moncure (mmonc...@gmail.com) wrote:
 CALL top10('foo');

 which seems more general and just as terse.  So I think implementing
 call syntax is probably topping my 'wanted feature' list.

 We have that, it's called 'SELECT' and it's only 2 more characters..

it is not true

SELECT * FROM foo LIMIT 10 -- 26 chars

CALL top('foo') -- 15 chars

\vt foo -- 7 chars

Regards

Pavel


 Thanks,

 Stephen




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 it is not true

It most certainly is true - did you look at the command?

SELECT top10('foo');

Note that it's top10, implying that it'd return the first 10 records.
That's only 2 characters more than:

CALL top10('foo');

It's not as short as '\vt foo', but I never claimed that it was, nor do
I feel it's particularly valuable to shorten it down to that level.

Thanks,

Stephen




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 My proposal is not meant to replace stored procedures.

Stored procedures, with actual transaction-management capabilities, are a
completely different topic which isn't usefully involved here.

 The core of my proposal was using autocomplete for one frequent task.

TABLE already supports tab-completion.

Thanks,

Stephen




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Alvaro Herrera
Pavel Stehule wrote:
 2013/2/14 Stephen Frost sfr...@snowman.net:
  * Merlin Moncure (mmonc...@gmail.com) wrote:
  CALL top10('foo');
 
  which seems more general and just as terse.  So I think implementing
  call syntax is probably topping my 'wanted feature' list.
 
  We have that, it's called 'SELECT' and it's only 2 more characters..
 
 it is not true
 
 SELECT * FROM foo LIMIT 10 -- 26 chars
 
 CALL top('foo') -- 15 chars
 
 \vt foo -- 7 chars

table foo limit 10; -- can be ta<tab>, saves 4 keystrokes

Needs autocompletion for limit, then it's 8 keystrokes plus table
name, no extraneous features needed.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Unarchived WALs deleted after crash

2013-02-14 Thread Jehan-Guillaume de Rorthais
Hi,

I am facing an unexpected behavior on a 9.2.2 cluster that I can
reproduce on current HEAD.

On a cluster with archive enabled but failing, after a crash of
postmaster, the checkpoint occurring before leaving the recovery mode
deletes any additional WALs, even those waiting to be archived.

Because of this, after recovering from the crash, a previous PITR backup
cannot be used to restore the instance to a time when archiving was
failing. Any slaves fed by WAL or lagging in SR need to be recreated.

AFAICT, this is not documented and I would expect the WALs to be
archived by the archiver process when the cluster exits the recovery step.

Here is a simple scenario to reproduce this.

Configuration:

  wal_level = archive
  archive_mode = on
  archive_command = '/bin/false'
  log_checkpoints = on

Scenario:
  createdb test
  psql -c 'create table test as select i, md5(i::text) from
generate_series(1,300) as i;' test
  kill -9 $(head -1 $PGDATA/postmaster.pid)
  pg_ctl start

Using this scenario, the log file shows:

  LOG:  archive command failed with exit code 1
  DETAIL:  The failed archive command was: /bin/false
  WARNING:  transaction log file 00010001 could not
be archived: too many failures
  LOG:  database system was interrupted; last known up at 2013-02-14
16:12:58 CET
  LOG:  database system was not properly shut down; automatic recovery
in progress
  LOG:  crash recovery starts in timeline 1 and has target timeline 1
  LOG:  redo starts at 0/11400078
  LOG:  record with zero length at 0/13397190
  LOG:  redo done at 0/13397160
  LOG:  last completed transaction was at log time 2013-02-14
16:12:58.49303+01
  LOG:  checkpoint starting: end-of-recovery immediate
  LOG:  checkpoint complete: wrote 2869 buffers (17.5%); 0 transaction
log file(s) added, 9 removed, 7 recycled; write=0.023 s, sync=0.468 s,
total=0.739 s; sync files=2, longest=0.426 s, average=0.234 s
  LOG:  autovacuum launcher started
  LOG:  database system is ready to accept connections
  LOG:  archive command failed with exit code 1
  DETAIL:  The failed archive command was: /bin/false
  LOG:  archive command failed with exit code 1
  DETAIL:  The failed archive command was: /bin/false
  LOG:  archive command failed with exit code 1
  DETAIL:  The failed archive command was: /bin/false
  WARNING:  transaction log file 00010011 could not
be archived: too many failures

Before the kill, 00010001 was the WAL to archive.
After the kill, the checkpoint deleted 9 files before exiting recovery
mode and 00010011 became the first WAL to archive.
00010001 through 00010010 were
removed or recycled.

Is this expected?
-- 
Jehan-Guillaume de Rorthais
http://www.dalibo.com




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Merlin Moncure
On Thu, Feb 14, 2013 at 8:43 AM, Stephen Frost sfr...@snowman.net wrote:
 * Pavel Stehule (pavel.steh...@gmail.com) wrote:
 My proposal should not replace stored procedures.

 Stored procedures, with actual transaction-management capabilities, is a
 completely different topic which isn't usefully involved here.

 The core of my proposal was using autocomplete for one often task.

 TABLE already supports tab-completion.

TABLE is fine.  My point is that often when discussing extensions to
psql we are really working around the lack of stored procedures.  Talk of
stored procedures is not off topic, because they could directly
implement the proposed $feature in a much more general way.  I am very
sanguine about the implementation difficulties, but please don't shoot
the messenger.

 SELECT top10('foo');

That doesn't work.  Functions don't allow arbitrary returned columns.
CALL should support this.

Anyways, if we are counting characters,
TA<tab> fo<tab> limit 10;   -- 17
CA<tab> to<tab>('foo');     -- 15  (but a different table name could
certainly swing it)

merlin




Re: [HACKERS] pg_upgrade old cluster delete script

2013-02-14 Thread Bruce Momjian
On Thu, Feb 14, 2013 at 12:29:52AM -0500, Bruce Momjian wrote:
 You might remember this pg_upgrade bug report where the user complained
 that user-defined tablespaces _inside_ the old cluster directory were
 deleted by the old cluster delete script:
 
   
 http://www.postgresql.org/message-id/e1thpdm-00018c...@wrigleys.postgresql.org
 
 and my reply that we don't want to fix this:
 
   http://www.postgresql.org/message-id/20121004191201.ga11...@momjian.us
 
 However, creating a script that deletes data is never a good idea so I
 have created the attached patch to avoid creating the script in these
 cases, and issue a message to the user.
 
 The test isn't perfect but it is probably close enough.  I don't think
 it is worth unlinking a possibly matching filename in such cases.

OK, I have applied a modified version of this patch that uses the
port/path.c functions to do proper path comparison, and decided to
conditionally unlink the script file just in case.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] FDW for PostgreSQL

2013-02-14 Thread Tom Lane
Kohei KaiGai kai...@kaigai.gr.jp writes:
 2013/2/14 Tom Lane t...@sss.pgh.pa.us:
 * deparse.c contains a depressingly large amount of duplication of logic
 from ruleutils.c, and can only need more as we expand the set of
 constructs that can be pushed to the remote end.  This doesn't seem like
 a maintainable approach.  Was there a specific reason not to try to use
 ruleutils.c for this?

 Previously, you suggested implementing its own logic for query deparsing,
 and Hanada-san then rewrote the relevant code.
   http://www.postgresql.org/message-id/12181.1331223...@sss.pgh.pa.us

[ rereads that... ]  Hm, I did make some good points.  But having seen
the end result of doing it this way, I'm still not very happy; it still
looks like a maintenance problem.  Maybe some additional flags in
ruleutils.c are the least evil way after all.  Needs more thought.

 Indeed, most of the logic is duplicated.  However, it is to avoid bugs in
 some corner cases; for instance, a function name not being qualified with
 its schema even though the function is owned by a different schema on the
 remote side.

That particular reason doesn't seem to hold a lot of water when we're
restricting the code to only push over built-in functions/operators
anyway.

I find it tempting to think about setting search_path explicitly to
pg_catalog (only) on the remote side, whereupon we'd have to
explicitly schema-qualify references to user tables, but built-in
functions/operators would not need that (and it wouldn't really matter
if ruleutils did try to qualify them).
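
Concretely, that idea would amount to something like this at connection
setup (a sketch, not committed behavior):

    /* Sketch: pin the remote search_path so only pg_catalog is
     * implicitly visible.  Deparsed SQL must then schema-qualify user
     * tables (e.g. "public.t"), while built-in functions and operators
     * resolve without qualification. */
    PQclear(PQexec(conn, "SET search_path = pg_catalog"));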

regards, tom lane




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Stephen Frost
* Merlin Moncure (mmonc...@gmail.com) wrote:
 That doesn't work.  functions don't allow arbitrary returned columns.
 CALL should support this.

That's yet another discussion, though for a preview, you could just
return a single text column from the function in which you do whatever
formatting you want.

Thanks,

Stephen




Re: [HACKERS] 9.2.3 crashes during archive recovery

2013-02-14 Thread Ants Aasma
On Feb 13, 2013 10:29 PM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
 Hmm, I just realized a little problem with that approach. If you take a
base backup using an atomic filesystem backup from a running server, and
start archive recovery from that, that's essentially the same thing as
Kyotaro's test case.

Coincidentally I was wondering about the same thing when I was reviewing
our slave provisioning procedures. There didn't seem to be any
communication channel from pg_stop_backup for the slave to know when it was
safe to allow connections.

Maybe there should be some mechanism akin to backup label to communicate
the minimum recovery point? When the min recovery point file exists the
value inside it is used, when the recovery point is reached the file is
removed. pg_basebackup would just append the file to the archive. Custom
backup procedures could also use it to communicate the necessary WAL
location.

--
Ants Aasma


Re: [HACKERS] Alias hstore's ? to ~ so that it works with JDBC

2013-02-14 Thread Ants Aasma
On Feb 13, 2013 10:35 PM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
 We could support operator(?) as well; belt and suspenders. That would
help ODBC clients too.

+1 for the belt and suspenders approach. With {postgres qm} JDBC can work
with older PostgreSQL versions, not requiring applications to bump their
minimum db version. With operator(?) older JDBC clients can work too in
case the library version is fixed due to policies (I'm assuming here that
question marks already work within quoted identifiers/literals).

--
Ants Aasma


Re: [HACKERS] FDW for PostgreSQL

2013-02-14 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 On Thu, Feb 14, 2013 at 1:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 * AFAICT, the patch expects to use a single connection for all
 operations initiated under one foreign server + user mapping pair.
 I don't think this can possibly be workable.  For instance, we don't
 really want postgresIterateForeignScan executing the entire remote query
 to completion and stashing the results locally -- what if that's many
 megabytes?

 It uses libpq's single-row mode and a Tuplestorestate to keep the result
 locally, so it uses limited memory at a time.  If the result is larger
 than work_mem, overflowed tuples are written to a temp file.  I think
 this is similar to materializing query results.

Well, yeah, but that doesn't make it an acceptable solution.  Consider
for instance "SELECT * FROM huge_foreign_table LIMIT 10".  People are
not going to be satisfied if that pulls back the entire foreign table
before handing them the 10 rows.  Comparable performance problems can
arise even without LIMIT, for instance in handling of nestloop inner
scans.

 I think we'd better be prepared to allow multiple similar connections.

 The main reason to use a single connection is to keep multiple results
 retrieved from the same server in a local query consistent.

Hmm.  That could be a good argument, although the current patch pretty
much destroys any such advantage by being willing to use READ COMMITTED
mode on the far end --- with that, you lose any right to expect
snapshot-consistent data anyway.  I'm inclined to think that maybe we
should always use at least REPEATABLE READ mode, rather than blindly
copying the local transaction mode.  Or maybe this should be driven by a
foreign-server option instead of looking at the local mode at all?

Anyway, it does seem like maybe we need to use cursors so that we can
have several active scans that we are pulling back just a few rows at a
time from.

I'm not convinced that that gets us out of the woods though WRT needing
only one connection.  Consider a query that is scanning some foreign
table, and it calls a plpgsql function, and that function (inside an
EXCEPTION block) does a query that scans another foreign table on the
same server.  This second query gets an error on the remote side.  If
the error is caught via the exception block, and the outer query
continues, what then?  We could imagine adding enough infrastructure
to establish a remote savepoint for each local subtransaction and clean
things up on failure, but no such logic is in the patch now, and I think
it wouldn't be too simple either.  The least painful way to make this
scenario work, given what's in the patch, is to allow such a
subtransaction to use a separate connection.

In any case, I'm pretty well convinced that the connection-bookkeeping
logic needs a major rewrite to have any hope of working in
subtransactions.  I'm going to work on that first and see where it leads.
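
For a flavor of that infrastructure, here is a sketch of mirroring local
subtransactions with remote savepoints via the backend's subtransaction
callback hook (illustrative only; the per-connection bookkeeping is
elided):

    #include "postgres.h"
    #include "access/xact.h"
    #include "libpq-fe.h"

    /* Illustrative sketch: keep a remote savepoint in step with each
     * local subtransaction, so a locally caught error can be undone on
     * the remote side too. */
    static void
    fdw_subxact_callback(SubXactEvent event, SubTransactionId mySubid,
                         SubTransactionId parentSubid, void *arg)
    {
        PGconn     *conn = (PGconn *) arg;  /* per-connection state, simplified */
        char        sql[64];

        switch (event)
        {
            case SUBXACT_EVENT_START_SUB:
                snprintf(sql, sizeof(sql), "SAVEPOINT s%u", mySubid);
                break;
            case SUBXACT_EVENT_COMMIT_SUB:
                snprintf(sql, sizeof(sql), "RELEASE SAVEPOINT s%u", mySubid);
                break;
            case SUBXACT_EVENT_ABORT_SUB:
                snprintf(sql, sizeof(sql), "ROLLBACK TO SAVEPOINT s%u", mySubid);
                break;
            default:
                return;
        }
        PQclear(PQexec(conn, sql));
    }
    /* registered once via RegisterSubXactCallback(fdw_subxact_callback, conn) */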

 * I find postgres_fdw_get_connections() and postgres_fdw_disconnect()
 to be a bad idea altogether.

 I agree with separating the issue from the FDW core.

OK, so we'll drop these from the current version of the patch and
revisit the problem of closing connections later.

regards, tom lane




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Pavel Stehule
2013/2/14 Stephen Frost sfr...@snowman.net:
 * Merlin Moncure (mmonc...@gmail.com) wrote:
 That doesn't work.  functions don't allow arbitrary returned columns.
 CALL should support this.

 That's yet another discussion, though for a preview, you could just
 return a single text column from the function in which you do whatever
 formatting you want.

it is not easy - we have no formatting support available on the server side

so then you need to supply a lot of libpq code

Pavel


 Thanks,

 Stephen




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Stephen Frost
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
 it is not easy - we have no available any formatting support on server side

Sure we do.

 so then you need to supply lot of libpq code

No, you don't.

This discussion isn't going to change my feelings on this particular
misfeature.  If others feel it's valuable and important then they can
certainly speak up.

Thanks,

Stephen




Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console

2013-02-14 Thread Peter Eisentraut
On 2/11/13 10:22 PM, Greg Stark wrote:
 On Sun, Feb 10, 2013 at 11:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 If we knew that postgresql.conf was stored in, say, UTF8, then it would
 probably be possible to perform encoding conversion to get string
 variables into the database encoding.  Perhaps we should allow some
 magic syntax to tell us the encoding of a config file?

 file_encoding = 'utf8'  # must precede any non-ASCII in the file
 
 If we're going to do that we might as well use the Emacs standard
 -*-coding: latin-1;-*-

Yes, or more generally perhaps what Python does:
http://docs.python.org/2.7/reference/lexical_analysis.html#encoding-declarations

(In Python 2, the default is ASCII, in Python 3, the default is UTF8.)

 But that said I'm not sure saying the whole file is in an encoding is
 the right approach.  Paths are actually binary strings.  Any encoding is
 purely for display purposes anyways. Log line formats could be treated
 similarly if we choose.

Well, at some point someone is going to open a configuration file in an
editor, and the editor will usually only be able to deal with one
encoding per file.  So we need to make that work, even if technically,
different considerations apply to different settings.





Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 This discussion isn't going to change my feelings on this particular
 misfeature.  If others feel it's valuable and important then they can
 certainly speak-up.

I think the same --- a new backslash command for this is absolutely not
worth its weight compared to using "TABLE foo".

We have identified at least two places where psql could be improved to
support the use of "TABLE foo" better, without adding any new syntax
(viz, teach the FETCH_COUNT logic about it and improve tab-completion
to know about the possibility of limit/offset clauses).  I think that's
a much more sensible path to pursue than inventing more features.

regards, tom lane




Re: [HACKERS] [RFC] pgstattuple/pgstatindex enhancement

2013-02-14 Thread Robert Haas
On Tue, Feb 12, 2013 at 10:22 AM, Satoshi Nagayasu sn...@uptime.jp wrote:
 (1) Fix pgstatindex arguments to work same as pgstattuple.

 As the documentation describes, pgstattuple accepts a 'schema.table'
 expression and the oid of the table, but pgstatindex doesn't
 (because I didn't add that when I created pgstatindex...)

 http://www.postgresql.org/docs/devel/static/pgstattuple.html

 So, I'd like to change pgstatindex arguments to allow
 schema name and oid.

 Does it make sense?

Not sure.  It seems nice, but it's also a backward-compatibility
break.  So I don't know.

 (2) Enhance pgstattuple/pgstatindex to allow block sampling.

 Now, we have large tables and indexes in PostgreSQL, and these are
 growing day by day.

 pgstattuple and pgstatindex are both very important for keeping database
 performance good, but doing full scans on large tables and indexes
 would generate a big performance impact.

 So, now I think pgstattuple and pgstatindex should support
 'block sampling' to collect block statistics while avoiding full scans.

 With this block sampling feature, pgstattuple/pgstatindex would be
 able to collect block statistics from 1~10% of the blocks in the
 table/index if the table/index is large (maybe 10GB or more).

Now that sounds really nice.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console

2013-02-14 Thread Craig Ringer
On 02/15/2013 12:45 AM, Peter Eisentraut wrote:
 On 2/11/13 10:22 PM, Greg Stark wrote:
 On Sun, Feb 10, 2013 at 11:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 If we knew that postgresql.conf was stored in, say, UTF8, then it would
 probably be possible to perform encoding conversion to get string
 variables into the database encoding.  Perhaps we should allow some
 magic syntax to tell us the encoding of a config file?

 file_encoding = 'utf8'  # must precede any non-ASCII in the file
 If we're going to do that we might as well use the Emacs standard
 -*-coding: latin-1;-*-
 Yes, or more generally perhaps what Python does:
 http://docs.python.org/2.7/reference/lexical_analysis.html#encoding-declarations

 (In Python 2, the default is ASCII, in Python 3, the default is UTF8.)
Note that Python also respects a BOM in a UTF-8 file, treating the BOM as
flagging the file as UTF-8:

"In addition, if the first bytes of the file are the UTF-8 byte-order
mark ('\xef\xbb\xbf'), the declared file encoding is UTF-8."

IMO we should do the same. If there's no explicit encoding declaration,
treat it as UTF-8 if there's a BOM and as the platform's local character
encoding otherwise.
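
A minimal sketch of that BOM check, assuming the file is read through
stdio:

    #include <stdio.h>

    /* Sketch: report whether a file starts with the UTF-8 byte-order
     * mark (0xEF 0xBB 0xBF); if not, rewind so normal parsing starts
     * at byte 0. */
    static int
    file_has_utf8_bom(FILE *fp)
    {
        unsigned char buf[3];

        if (fread(buf, 1, 3, fp) == 3 &&
            buf[0] == 0xEF && buf[1] == 0xBB && buf[2] == 0xBF)
            return 1;
        rewind(fp);
        return 0;
    }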

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [HACKERS] sql_drop Event Trigger

2013-02-14 Thread Robert Haas
On Mon, Feb 11, 2013 at 9:53 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
 Robert Haas robertmh...@gmail.com writes:
 Robert, you specifically opposed sql_drop and I just removed it
 from the patch.  What do you think now?  Also, should that be a follow-up
 patch to the current one, for your reviewing purposes?

 Well, if it has a different firing point than ddl_command_end, then
 there could well be some point to having it after all.  But I'm far
 from convinced that the proposed firing point can be made safe without
 a major refactoring.  I think this is the sort of things where design
 before code ought to be the cardinal rule.

 Ok, so we are in agreement here.  I think we should see about getting the
 dropped_objects.3.patch.gz in (pending review), ...

Wait, I'm confused.  I had a note to myself to come back and review
this, but now that I look at it, I didn't think that patch was pending
review.  Alvaro, Tom, and I all made comments that seem to impinge
upon that design rather heavily.  No?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Jonathan Rogers

A number of the described features sound quite useful. Is it not
practical to extend an existing library such as psycopg2? What method
will you use to call libpq functions? As you are no doubt aware,
psycopg2 uses the traditional CPython API, but there is a fork which uses
ctypes instead, enabling it to work with both CPython and PyPy.

Manlio Perillo wrote:

 For my Python DBAPI2 PostgreSQL driver I plan the following optimizations:
 
 2) when the driver detects a Python string is being sent to the
database, use binary format.

What exactly do you mean by "Python string"?


-- 
Jonathan Ross Rogers




Re: [HACKERS] 9.2.3 crashes during archive recovery

2013-02-14 Thread Fujii Masao
On Thu, Feb 14, 2013 at 5:15 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 13.02.2013 17:02, Tom Lane wrote:

 Heikki Linnakangashlinnakan...@vmware.com  writes:

 At least in back-branches, I'd call this a pilot error. You can't turn a
 master into a standby just by creating a recovery.conf file. At least
 not if the master was not shut down cleanly first.
 ...
 I'm not sure that's worth the trouble, though. Perhaps it would be
 better to just throw an error if the control file state is
 DB_IN_PRODUCTION and a recovery.conf file exists.


 +1 for that approach, at least until it's clear there's a market for
 doing this sort of thing.  I think the error check could be
 back-patched, too.


 Hmm, I just realized a little problem with that approach. If you take a base
 backup using an atomic filesystem backup from a running server, and start
 archive recovery from that, that's essentially the same thing as Kyotaro's
 test case.

Yes. And the resource agent for streaming replication in Pacemaker (the
OSS clusterware) is a user of that archive recovery scenario, too. When it
starts up the server, it always creates the recovery.conf and starts the
server as a standby. It cannot start the master directly; IOW, the server
is always promoted to the master from the standby. So when it starts up
the server after the server crashes, obviously it executes the same
recovery scenario (i.e., forced archive recovery instead of crash
recovery) as Kyotaro described.

The reason why that resource agent cannot start up the master directly is
that it manages three server states, called Master, Slave and Down. It can
move the server state from Down to Slave, and in the reverse direction.
Also, it can move the state from Slave to Master, and in the reverse
direction. But there is no way to move the state between Down and Master
directly. This kind of state transition model is an isolated case in
clusterware, I think.

Regards,

-- 
Fujii Masao




Re: [HACKERS] 9.2.3 crashes during archive recovery

2013-02-14 Thread Fujii Masao
On Thu, Feb 14, 2013 at 5:52 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On 13 February 2013 09:04, Heikki Linnakangas hlinnakan...@vmware.com wrote:

 Without step 3, the server would perform crash recovery, and it would work.
 But because of the recovery.conf file, the server goes into archive
 recovery, and because minRecoveryPoint is not set, it assumes that the
 system is consistent from the start.

 Aside from the immediate issue with truncation, the system really isn't
 consistent until the WAL has been replayed far enough, so it shouldn't open
 for hot standby queries. There might be other, later, changes already
 flushed to data files. The system has no way of knowing how far it needs to
 replay the WAL to become consistent.

 At least in back-branches, I'd call this a pilot error. You can't turn a
 master into a standby just by creating a recovery.conf file. At least not if
 the master was not shut down cleanly first.

 If there's a use case for doing that, maybe we can do something better in
 HEAD. If the control file says that the system was running
 (DB_IN_PRODUCTION), but there is a recovery.conf file, we could do crash
 recovery first, until we reach the end of WAL, and go into archive recovery
 mode after that. We'd recover all the WAL files in pg_xlog as far as we can,
 same as in crash recovery, and only start restoring files from the archive
 once we reach the end of WAL in pg_xlog. At that point, we'd also consider
 the system as consistent, and start up for hot standby.

 I'm not sure that's worth the trouble, though. Perhaps it would be better to
 just throw an error if the control file state is DB_IN_PRODUCTION and a
 recovery.conf file exists. The admin can always start the server normally
 first, shut it down cleanly, and then create the recovery.conf file.

 Now I've read the whole thing...

 The problem is that we start up Hot Standby before we hit the min
 recovery point because that isn't recorded. For me, the thing to do is
 to make the min recovery point == end of WAL when state is
 DB_IN_PRODUCTION. That way we don't need to do any new writes and we
 don't need to risk people seeing inconsistent results if they do this.

+1

And in the standby case, the min recovery point can be set to the end
of the WAL files located on the standby. IOW, we can regard the database
as consistent when we have replayed all the local WAL files and try to
connect to the master.

Regards,

-- 
Fujii Masao




Re: [RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Manlio Perillo

On 14/02/2013 18:18, Jonathan Rogers wrote:
 A number of the described features sound quite useful. Is it not
 practical to extend an existing library such as psycopg2?

I suspect there are compatibility issues.

 What method
 will you use to call libpq functions? As you are no doubt aware,
 psycopg2 uses the traditional CPython API but there is a fork which uses
 ctypes instead, enabling it to work with CPython and Pypy.
 

I'm implementing a prototype version, in pure Python with ctypes.
When the prototype is ready, I will implement a CPython extension module
(since the purpose of writing yet another driver is to make it more
efficient than the current best driver).

I will also discuss porting some of the features to psycopg2 (but
first I need to have a working driver).

 Manlio Perillo wrote:
 
 For my Python DBAPI2 PostgreSQL driver I plan the following optimizations:
 
 2) when the driver detects a Python string is being sent to the
database, use binary format.
 
 What exactly do you mean by Python string?
 

A Python string object.

The libpq interface will implement the functions
  pg_str_encode(string object) -> bytes
  pg_str_decode(bytes, result object) -> string object
as described in
http://wiki.postgresql.org/wiki/Driver_development

but with some differences.

The pg_str_encode function will return a tuple with the raw bytes and
the suggested parameter format.

As an example, pg_str_encode(Python 3.x byte string) will return the
byte string and 1 (since it is a binary string, and I want to avoid
using the PQescapeBytea function).

For the DBAPI interface, I plan to implement a generic API to map
PostgreSQL types to Python types (different from the one implemented in
psycopg2); something like:

pgtype = connection.create_type_map(
    python type object, postgresql type name,
    input text function, output text function,
    input binary function - optional,
    output binary function - optional)

but I have yet to decide how to detect the param format to use.

Maybe there will be only one output function, which will decide the best
format to be used.



Regards, Manlio




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Merlin Moncure
On Thu, Feb 14, 2013 at 10:32 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
 2013/2/14 Stephen Frost sfr...@snowman.net:
 * Pavel Stehule (pavel.steh...@gmail.com) wrote:
 it is not true

 It most certainly is true- did you look at the command?

 SELECT top10('foo');

 Note that it's top10, implying that it'd return the first 10 records.
 That's only 2 characters more than:

 In the current implementation you need a polymorphic function

 and then you have to write

 SELECT (top10('foo')).*

 or

 SELECT * FROM top10('foo')

that can't work either -- at least not without a column select list.
There has to be some clue to the output type from the arguments to the
function.

merlin




Re: [HACKERS] proposal or just idea for psql - show first N rows from relation backslash statement

2013-02-14 Thread Merlin Moncure
On Thu, Feb 14, 2013 at 10:03 AM, Stephen Frost sfr...@snowman.net wrote:
 * Merlin Moncure (mmonc...@gmail.com) wrote:
 That doesn't work.  functions don't allow arbitrary returned columns.
 CALL should support this.

 That's yet another discussion, though for a preview, you could just
 return a single text column from the function in which you do whatever
 formatting you want.

sure -- I use this technique often, although lately I've moved from
text to hstore/json.

merlin




[HACKERS] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Josh Berkus
Folks,

Once again, Google is holding Summer of Code.  We need to assess whether
we want to participate this year.

Questions:

- Who wants to mentor for GSOC?

- Who can admin for GSOC?  Thom?

- Please suggest project ideas for GSOC

- Students seeing this -- please speak up if you have projects you plan
to submit.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Too frequent checkpoints ?

2013-02-14 Thread Fujii Masao
On Mon, Feb 11, 2013 at 5:46 PM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
 On Fri, Feb 8, 2013 at 10:25 PM, Fujii Masao masao.fu...@gmail.com wrote:
 .
 BTW, the cause of the problem is that the following sequences happens.

 1. archive_timeout switches WAL file because checkpoint WAL record has
 has been written since last switch

 Thank you for explaining that. I also noticed that the WAL file switch
 happens after archive_timeout seconds irrespective of whether
 archive_mode is turned ON or not. This happens because we don't check
 XLogArchivingActive() in the CheckArchiveTimeout() function. It looks
 wrong to me.

+1 to fix this. I've not heard of a use case where archive_timeout needs
to take effect when archive mode is off...

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Peter Eisentraut
On 2/14/13 9:23 AM, Manlio Perillo wrote:
 1) always use PQsendQueryParams functions.
 
This will avoid having to escape parameters, as is done in psycopg2
(IMHO it still uses the simple query protocol for compatibility
purposes)

I think the reason this doesn't work is that in order to prepare a query
you need to know the parameter types, but you don't know them in Python,
or at least not with the way the DB-API works.  For example, if you write

cur.execute("SELECT * FROM tbl WHERE a = %s AND b = %s", (val1, val2))

what types will you pass to PQsendQueryParams?

You can make some obvious heuristics, but this sort of thing can get
complicated pretty quickly.
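
For illustration, here is a minimal libpq sketch of the infer-everything
alternative -- passing NULL for paramTypes so the server decides; the
connection string and table name here are assumptions, not anything from
this thread:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");	/* assumed DSN */
	const char *values[2] = {"42", "hello"};
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
		return 1;

	/* paramTypes = NULL: the server infers every parameter's type */
	if (!PQsendQueryParams(conn,
						   "SELECT * FROM tbl WHERE a = $1 AND b = $2",
						   2,		/* nParams */
						   NULL,	/* paramTypes: let the server infer */
						   values,	/* paramValues, in text format */
						   NULL,	/* paramLengths: ignored for text */
						   NULL,	/* paramFormats: all text */
						   0))		/* resultFormat: text */
		fprintf(stderr, "%s", PQerrorMessage(conn));

	/* drain the results of the asynchronously sent query */
	while ((res = PQgetResult(conn)) != NULL)
		PQclear(res);

	PQfinish(conn);
	return 0;
}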


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Alvaro Herrera
Alvaro Herrera wrote:

 Hm, and I now also realize another bug in this patch: the global stats
 only include database entries for requested databases; but perhaps the
 existing files can serve later requestors just fine for databases that
 already had files; so the global stats file should continue to carry
 entries for them, with the old timestamps.

Actually the code already does things that way -- apologies.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Alvaro Herrera
Here's a ninth version of this patch.  (version 8 went unpublished).  I
have simplified a lot of things and improved some comments; I think I
understand much of it now.  I think this patch is fairly close to
committable, but one issue remains, which is this bit in
pgstat_write_statsfiles():

/* In any case, we can just throw away all the db requests, but we need
 * to write dummy files for databases without a stat entry (it would
 * cause issues in pgstat_read_db_statsfile_timestamp and pgstat wait
 * timeouts).  This may happen e.g. for shared DB (oid = 0) right after
 * initdb.
 */
if (!slist_is_empty(&last_statrequests))
{
	slist_mutable_iter	iter;

	slist_foreach_modify(iter, &last_statrequests)
	{
		DBWriteRequest *req = slist_container(DBWriteRequest, next,
											  iter.cur);

		/*
		 * Create dummy files for requested databases without a proper
		 * dbentry.  It's much easier this way than dealing with multiple
		 * timestamps, possibly existing but not yet written DBs etc.
		 */
		if (!pgstat_get_db_entry(req->databaseid, false))
			pgstat_write_db_dummyfile(req->databaseid);

		pfree(req);
	}

	slist_init(&last_statrequests);
}

The problem here is that creating these dummy entries will cause a
difference in autovacuum behavior.  Autovacuum will skip processing
databases with no pgstat entry, and the intended reason is that if
there's no pgstat entry it's because the database doesn't have enough
activity.  Now perhaps we want to change that, but it should be an
explicit decision taken after discussion and thought, not a side effect
of an unrelated patch.

Hm, and I now also realize another bug in this patch: the global stats
only include database entries for requested databases; but perhaps the
existing files can serve later requestors just fine for databases that
already had files; so the global stats file should continue to carry
entries for them, with the old timestamps.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
*** a/src/backend/postmaster/pgstat.c
--- b/src/backend/postmaster/pgstat.c
***
*** 38,43 
--- 38,44 
  #include access/xact.h
  #include catalog/pg_database.h
  #include catalog/pg_proc.h
+ #include lib/ilist.h
  #include libpq/ip.h
  #include libpq/libpq.h
  #include libpq/pqsignal.h
***
*** 66,73 
   * Paths for the statistics files (relative to installation's $PGDATA).
   * --
   */
! #define PGSTAT_STAT_PERMANENT_FILENAME		"global/pgstat.stat"
! #define PGSTAT_STAT_PERMANENT_TMPFILE		"global/pgstat.tmp"
  
  /* --
   * Timer definitions.
--- 67,75 
   * Paths for the statistics files (relative to installation's $PGDATA).
   * --
   */
! #define PGSTAT_STAT_PERMANENT_DIRECTORY		"pg_stat"
! #define PGSTAT_STAT_PERMANENT_FILENAME		"pg_stat/global.stat"
! #define PGSTAT_STAT_PERMANENT_TMPFILE		"pg_stat/global.tmp"
  
  /* --
   * Timer definitions.
***
*** 115,120  int			pgstat_track_activity_query_size = 1024;
--- 117,124 
   * Built from GUC parameter
   * --
   */
+ char	   *pgstat_stat_directory = NULL;
+ int			pgstat_stat_dbfile_maxlen = 0;
  char	   *pgstat_stat_filename = NULL;
  char	   *pgstat_stat_tmpname = NULL;
  
***
*** 219,229  static int	localNumBackends = 0;
   */
  static PgStat_GlobalStats globalStats;
  
! /* Last time the collector successfully wrote the stats file */
! static TimestampTz last_statwrite;
  
! /* Latest statistics request time from backends */
! static TimestampTz last_statrequest;
  
  static volatile bool need_exit = false;
  static volatile bool got_SIGHUP = false;
--- 223,238 
   */
  static PgStat_GlobalStats globalStats;
  
! /* Write request info for each database */
! typedef struct DBWriteRequest
! {
! 	Oid			databaseid;		/* OID of the database to write */
! 	TimestampTz request_time;	/* timestamp of the last write request */
! 	slist_node	next;
! } DBWriteRequest;
  
! /* Latest statistics request times from backends */
! static slist_head	last_statrequests = SLIST_STATIC_INIT(last_statrequests);
  
  static volatile bool need_exit = false;
  static volatile bool got_SIGHUP = false;
***
*** 252,262  static void pgstat_sighup_handler(SIGNAL_ARGS);
  static PgStat_StatDBEntry *pgstat_get_db_entry(Oid databaseid, bool create);
  static PgStat_StatTabEntry *pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry,
  	 Oid tableoid, bool create);
! static void pgstat_write_statsfile(bool permanent);
! static HTAB 

Re: [RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Manlio Perillo

On 14/02/2013 20:01, Peter Eisentraut wrote:
 On 2/14/13 9:23 AM, Manlio Perillo wrote:
 1) always use PQsendQueryParams functions.

 This will avoid having to escape parameters, as is done in psycopg2
 (IMHO it still uses the simple query protocol for compatibility
 purposes)
 
 I think the reason this doesn't work is that in order to prepare a query
 you need to know the parameter types, but you don't know them in Python,
 or at least not with the way the DB-API works.

Hint: .setinputsizes.

In my implementation, prepared queries are **only** used if the user
calls setinputsizes; if setinputsizes is not called, preparing a query
can cause a performance loss, so it is better not to prepare one.

  For example, if you write
 
 cur.execute("SELECT * FROM tbl WHERE a = %s AND b = %s", (val1, val2))
 
 what types will you pass to PQsendQueryParams?
 

1) if setinputsizes is used, use the type specified here.

2) if setinputsizes is not used, query the driver's type catalog to
   get the PostgreSQL type OID associated with a Python object; e.g.:

   pg_type = connection.get_type_by_object(val1)
   buf, format = pg_type.output_function(val1)
   param_oid = pg_type.oid

 You can make some obvious heuristics, but this sort of thing can get
 complicated pretty quickly.

A non-trivial case is when val is a list, which should be mapped to a
PostgreSQL array.

However, you can always set the OID to 0 and let PostgreSQL deduce the
type, as is done in psycopg2.  If the user called setinputsizes, we are
happy.


Regards, Manlio


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Marko Tiikkaja

On 14/02/2013 20:01, Peter Eisentraut wrote:

On 2/14/13 9:23 AM, Manlio Perillo wrote:

1) always use PQsendQueryParams functions.

This will avoid having to escape parameters, as is done in psycopg2
(IMHO it still uses the simple query protocol for compatibility
purposes)


I think the reason this doesn't work is that in order to prepare a query
you need to know the parameter types, but you don't know them in Python,
or at least not with the way the DB-API works.  For example, if you write

cur.execute("SELECT * FROM tbl WHERE a = %s AND b = %s", (val1, val2))

what types will you pass to PQsendQueryParams?


Pardon me if this is obvious, but why would you need to pass any types
at all?  Assuming we're still talking about PQsendQueryParams and not an
explicit prepare/execute cycle...



Regards,
Marko Tiikkaja


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Josh Berkus
I saw discussion about this on this thread, but I'm not able to figure
out what the answer is: how does this work with moving the stats file,
for example to a RAMdisk?  Specifically, if the user sets
stats_temp_directory, does it continue to work the way it does now?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Alvaro Herrera
Josh Berkus wrote:
 I saw discussion about this on this thread, but I'm not able to figure
 out what the answer is: how does this work with moving the stats file,
 for example to a RAMdisk?  Specifically, if the user sets
 stats_temp_directory, does it continue to work the way it does now?

Of course.  You get more files than previously, but yes.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unarchived WALs deleted after crash

2013-02-14 Thread Daniel Farina
On Thu, Feb 14, 2013 at 7:45 AM, Jehan-Guillaume de Rorthais
j...@dalibo.com wrote:
 Hi,

 I am facing an unexpected behavior on a 9.2.2 cluster that I can
 reproduce on current HEAD.

 On a cluster with archive enabled but failing, after a crash of
 postmaster, the checkpoint occurring before leaving the recovery mode
 deletes any additional WALs, even those waiting to be archived.

I believe I have encountered this recently, but didn't get enough
chance to work with it to correspond.  For me, the cause was running
out of disk space on the file system that exclusively contained WAL,
backlogged because archiving fell behind writing.  That caused the
cluster to crash -- par for the course -- but an archive gap was also
created.  At the time I thought there was some kind of bug in dealing
with out-of-space issues in the archiver (the .ready bookkeeping), but
the symptoms I saw seem like they might be explained by your report,
too.

--
fdr


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Alvaro Herrera
Alvaro Herrera wrote:
 Here's a ninth version of this patch.  (version 8 went unpublished).  I
 have simplified a lot of things and improved some comments; I think I
 understand much of it now.  I think this patch is fairly close to
 committable, but one issue remains, which is this bit in
 pgstat_write_statsfiles():
 
 	/* In any case, we can just throw away all the db requests, but we need
 	 * to write dummy files for databases without a stat entry (it would
 	 * cause issues in pgstat_read_db_statsfile_timestamp and pgstat wait
 	 * timeouts).  This may happen e.g. for shared DB (oid = 0) right
 	 * after initdb.
 	 */

I think the real way to handle this is to fix backend_read_statsfile().
It's using the old logic of considering existence of the file, but of
course now the file might not exist at all, and that doesn't mean we need
to continue kicking the collector to write it.  We need a mechanism to
figure out that the collector is just not going to write the file no
matter how hard we kick it ...

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] sql_drop Event Trigger

2013-02-14 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 Wait, I'm confused.  I had a note to myself to come back and review
 this, but now that I look at it, I didn't think that patch was pending
 review.  Alvaro, Tom, and I all made comments that seems to impinge
 upon that design rather heavily.  No?

The current design follows exactly your comments and design requests.
Tom's and Álvaro's comments are the ones you answered saying that it's
not 9.3 material, but next release at best, subject to heavy refactoring.

What did I miss?
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Alvaro Herrera
Alvaro Herrera wrote:
 Here's a ninth version of this patch.  (version 8 went unpublished).  I
 have simplified a lot of things and improved some comments; I think I
 understand much of it now.  I think this patch is fairly close to
 committable, but one issue remains, which is this bit in
 pgstat_write_statsfiles():

I've marked this as Waiting on author for the time being.  I'm going to
review/work on other patches now, hoping that Tomas will post an updated
version in time for it to be considered for 9.3.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Sîrbu Nicolae-Cezar
Hello,

Well, I'm interested in PostgreSQL for GSOC; I'm not sure about the
project yet. But I'm looking forward to meeting the mentors and speaking
with them about what could be implemented over the summer.

Thanks,
Sirbu Nicolae-Cezar


Re: [HACKERS] JSON Function Bike Shedding

2013-02-14 Thread Merlin Moncure
On Wed, Feb 13, 2013 at 11:40 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Andrew Dunstan wrote:

 On 02/13/2013 12:07 PM, David E. Wheeler wrote:
 On Feb 13, 2013, at 8:36 AM, Andrew Dunstan and...@dunslane.net wrote:

 I think Merlin's suggestion of unwrap might be good. Or simply
 elements() might work.
 Perhaps unwrap() returns a set and elements() returns an array?

 Now you're adding functionality. Let's just keep this to the
 question of names.

 I agree with that, but it seems a good idea to leave names available for
 future functionality, where reasonable.

Given all that, is there consensus on the names so that Andrew knows
what changes to make?

merlin


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Thom Brown
On 14 February 2013 18:02, Josh Berkus j...@agliodbs.com wrote:
 Folks,

 Once again, Google is holding Summer of Code.  We need to assess whether
 we want to participate this year.

 Questions:

 - Who wants to mentor for GSOC?

 - Who can admin for GSOC?  Thom?

I don't mind being an admin again.

-- 
Thom


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] BUG #7493: Postmaster messages unreadable in a Windows console

2013-02-14 Thread Noah Misch
On Thu, Feb 14, 2013 at 11:11:13AM +0400, Alexander Law wrote:
 Hello,
 Alexander Law exclus...@gmail.com writes:
 Please look at the following l10n bug:
 http://www.postgresql.org/message-id/502a26f1.6010...@gmail.com
 and the proposed patch.
 With your proposed change, the problem will resurface in an actual SQL_ASCII
 database.  At the problem's root is write_console()'s assumption that 
 messages
 are in the database encoding.  pg_bind_textdomain_codeset() tries to make 
 that
 so, but it only works for encodings with a pg_enc2gettext_tbl entry.  That
 excludes SQL_ASCII, MULE_INTERNAL, and others.  write_console() needs to
 behave differently in such cases.
 Thank you for the notice. So it seems that the DatabaseEncoding variable
 alone can't represent both a database encoding (for communication with a
 client) and the current process's message encoding (for logging messages)
 at once. There should be another variable, something like
 CurrentProcessEncoding, that will be set to the OS encoding at start and
 can be changed to the encoding of a connected database (if
 bind_textdomain_codeset succeeded).

I'd call it MessageEncoding unless it corresponds with similar rigor to a
broader concept.

 On Tue, Feb 12, 2013 at 03:22:17AM +, Greg Stark wrote:
 But that said, I'm not sure saying the whole file is in an encoding is
 the right approach. Paths are actually binary strings; any encoding is
 purely for display purposes anyway.
 For Unix, yes.  On Windows, they're ultimately UTF16 strings; some system 
 APIs
 accept paths in the Windows ANSI code page and convert to UTF16 internally.
 Nonetheless, good point.
 Yes, and if postgresql.conf is not going to be UTF16-encoded, it seems
 natural to use the ANSI code page on Windows to write such paths in it.
 So the paths should be written in the OS encoding, which is accepted by
 OS functions such as fopen. (This is what we have now.)

To the contrary, we would do better to use _wfopen() after converting from the
encoding at hand to UTF16.  We should have the goal of removing our dependence
on the Windows ANSI code page, not tightening our bonds thereto.  As long as
PostgreSQL uses fopen() on Windows, it will remain possible to create a file
that PostgreSQL cannot open.  Making the full transition is probably a big
job, and we don't need to get there in one patch.  Try, however, to avoid
patch designs that increase the distance left to cover.
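
To make that direction concrete, here is a hedged sketch (Windows-only;
the helper name fopen_utf8 is made up, not from any patch):

#include <stdio.h>
#include <windows.h>

/* hypothetical helper: open a file given a UTF-8 path, bypassing the
 * Windows ANSI code page by converting to UTF16 and calling _wfopen() */
static FILE *
fopen_utf8(const char *path, const wchar_t *mode)
{
	wchar_t		wpath[MAX_PATH];

	if (MultiByteToWideChar(CP_UTF8, 0, path, -1,
							wpath, MAX_PATH) == 0)
		return NULL;			/* conversion failed or path too long */

	return _wfopen(wpath, mode);
}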

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/ports/pgcheckdir.c - Ignore dot directories...

2013-02-14 Thread Bruce Momjian
On Tue, Feb  5, 2013 at 08:49:17AM -0500, Peter Eisentraut wrote:
 On 2/5/13 7:32 AM, Kevin Grittner wrote:
  Sean Chittenden s...@chittenden.org wrote:
  
   Currently src/port/pgcheckdir.c will reject non-empty
   directories, which is an issue during initdb(1) when PGDATA is
   also the mount point for filesystems that support snapshots (e.g.
   ZFS or UFS2).
   Granted it's not hard to create a subdirectory, initdb there and
   move the contents of the files around, it's extra work that
   shouldn't be required.
  I feel that it is very bad practice to use the mount point as the
  PGDATA directory.  It forecloses a lot of reasonable actions that
  someone managing the database server might want to take.
 
 Yes, a variant of this particular patch gets rejected about once every
 18 months.

Agreed.  The attached patch modifies pg_check_dir() to report invisible
and lost+found directory entries, and to give more helpful messages to
the user.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/bin/initdb/initdb.c b/src/bin/initdb/initdb.c
new file mode 100644
index 2ea3f6e..03eadcd
*** a/src/bin/initdb/initdb.c
--- b/src/bin/initdb/initdb.c
*** setup_signals(void)
*** 3144,3150 
  void
  create_data_directory(void)
  {
! 	switch (pg_check_dir(pg_data))
  	{
  		case 0:
  			/* PGDATA not there, must create it */
--- 3144,3152 
  void
  create_data_directory(void)
  {
! 	int ret;
! 
! 	switch ((ret = pg_check_dir(pg_data)))
  	{
  		case 0:
  			/* PGDATA not there, must create it */
*** create_data_directory(void)
*** 3179,3193 
  			break;
  
  		case 2:
  			/* Present and not empty */
  			fprintf(stderr,
  					_("%s: directory \"%s\" exists but is not empty\n"),
  					progname, pg_data);
! 			fprintf(stderr,
! 					_("If you want to create a new database system, either remove or empty\n"
! 					  "the directory \"%s\" or run %s\n"
! 					  "with an argument other than \"%s\".\n"),
! 					pg_data, progname, pg_data);
  			exit(1);			/* no further message needed */
  
  		default:
--- 3181,3205 ----
  			break;
  
  		case 2:
+ 		case 3:
+ 		case 4:
  			/* Present and not empty */
  			fprintf(stderr,
  					_("%s: directory \"%s\" exists but is not empty\n"),
  					progname, pg_data);
! 			if (ret == 2)
! 				fprintf(stderr,
! 						_("It contains a dot-prefixed/invisible file.\n"));
! 			else if (ret == 3)
! 				fprintf(stderr,
! 						_("It contains a lost+found directory.\n"
! 						  "Using the top-level directory of a mount point is not recommended.\n"));
! 			if (ret != 3)
! 				fprintf(stderr,
! 						_("If you want to create a new database system, either remove or empty\n"
! 						  "the directory \"%s\" or run %s\n"
! 						  "with an argument other than \"%s\".\n"),
! 						pg_data, progname, pg_data);
  			exit(1);			/* no further message needed */
  
  		default:
*** create_xlog_symlink(void)
*** 3206,3211 
--- 3218,3224 
  	if (strcmp(xlog_dir, "") != 0)
  	{
  		char	   *linkloc;
+ 		int			ret;
  
  		/* clean up xlog directory name, check it's absolute */
  		canonicalize_path(xlog_dir);
*** create_xlog_symlink(void)
*** 3216,3222 
  		}
  
  		/* check if the specified xlog directory exists/is empty */
! 		switch (pg_check_dir(xlog_dir))
  		{
  			case 0:
  /* xlog directory not there, must create it */
--- 3229,3235 
  		}
  
  		/* check if the specified xlog directory exists/is empty */
! 		switch ((ret = pg_check_dir(xlog_dir)))
  		{
  			case 0:
  /* xlog directory not there, must create it */
*** create_xlog_symlink(void)
*** 3255,3268 
  break;
  
  			case 2:
  				/* Present and not empty */
  				fprintf(stderr,
  						_("%s: directory \"%s\" exists but is not empty\n"),
  						progname, xlog_dir);
! 				fprintf(stderr,
! 						_("If you want to store the transaction log there, either\n"
! 						  "remove or empty the directory \"%s\".\n"),
! 						xlog_dir);
  				exit_nicely();
  
  			default:
--- 3268,3291 ----
  				break;
  
  			case 2:
+ 			case 3:
+ 			case 4:
  				/* Present and not empty */
  				fprintf(stderr,
  						_("%s: directory \"%s\" exists but is not empty\n"),
  						progname, xlog_dir);
! 				if (ret == 2)
! 					fprintf(stderr,
! 							_("It contains a dot-prefixed/invisible file.\n"));
! 				else if (ret == 3)
! 					fprintf(stderr,
! 							_("It contains a lost+found directory.\n"
! 							  "Using the top-level directory of a mount point is not recommended.\n"));
! 				if (ret != 3)
! 					fprintf(stderr,
! 							_("If you want to store the transaction log there, either\n"
! 							  "remove or empty the directory \"%s\".\n"),
! 							xlog_dir);
  				exit_nicely();
  
  			default:
diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c
new file mode 100644
index b6f7744..fb5a1bd
*** 

Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Tomas Vondra
On 14.2.2013 20:43, Josh Berkus wrote:
 I saw discussion about this on this thread, but I'm not able to figure
 out what the answer is: how does this work with moving the stats file,
 for example to a RAMdisk?  Specifically, if the user sets
 stats_temp_directory, does it continue to work the way it does now?

No change in this respect - you can still use RAMdisk, and you'll
actually need less space because the space requirements decreased due to
breaking the single file into multiple pieces.

We're using it this way (on a tmpfs filesystem) and it works like a charm.
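
For reference, a hedged example of that setup in postgresql.conf (the
path here is an assumption, not our actual configuration):

# postgresql.conf: keep the temporary stats files on a tmpfs mount
stats_temp_directory = '/mnt/pg_stats_ram'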

regards
Tomas



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Tomas Vondra
First of all, big thanks for working on this patch and not only
identifying the issues but actually fixing them.

On 14.2.2013 20:23, Alvaro Herrera wrote:
 Here's a ninth version of this patch.  (version 8 went unpublished).  I
 have simplified a lot of things and improved some comments; I think I
 understand much of it now.  I think this patch is fairly close to
 committable, but one issue remains, which is this bit in
 pgstat_write_statsfiles():
 
...

 
 The problem here is that creating these dummy entries will cause a
 difference in autovacuum behavior.  Autovacuum will skip processing
 databases with no pgstat entry, and the intended reason is that if
 there's no pgstat entry it's because the database doesn't have enough
 activity.  Now perhaps we want to change that, but it should be an
 explicit decision taken after discussion and thought, not side effect
 from an unrelated patch.

I don't see how that changes the autovacuum behavior. Can you explain
that a bit more?

As I see it, with the old (single-file) version the autovacuum worker
would get exactly the same thing, i.e. no stats at all.

Which is exactly what the autovacuum worker gets with the new code, except
that the check for the last statfile timestamp uses the per-db file, so we
need to write it. This way the worker is able to read the timestamp and is
happy about it because it gets a fresh file, although it gets no stats later.

Where is the behavior change? Can you provide an example?

kind regards
Tomas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] I think we need PRE_COMMIT events for (Sub)XactCallbacks

2013-02-14 Thread Tom Lane
I'm trying to whack the postgres_fdw patch into committable shape, with
one eye on the writable-foreign-tables patch that's right behind it in
the queue.  One thing I've come across is that the timing of remote
commits is a mess.  As-is in the submitted patches, we'll issue a commit
to the remote server as soon as the executor shuts down a query.  I'm
not too thrilled with this, since given

begin;
update my_foreign_table set ... ;
update my_foreign_table set ... ;
update my_foreign_table set ... ;
rollback;

a reasonable person would expect that the remote updates are rolled
back.  But as it stands each UPDATE is committed instantly on
completion.

The only facility that postgres_fdw would have for doing it differently
is to plug into the XactCallback or ResourceReleaseCallback hooks, and
the problem with both of those is they are post-commit hooks, which
means it's too late to throw an error.  Now foreign-side errors at
commit are not exactly difficult to foresee --- for instance, one of
our updates might have violated a deferred foreign-key constraint on the
remote side, and we won't hear about that till we try to commit.  If we
then report that error in one of the existing hooks, it'll become a
PANIC.  No good.

So I think we need to add a pre-commit event to the set of events that
XactCallbacks are called for, probably at this spot in CommitTransaction:

/*
 * Close open portals (converting holdable ones into static portals).
 * If there weren't any, we are done ... otherwise loop back to check
 * if they queued deferred triggers.  Lather, rinse, repeat.
 */
if (!PreCommit_Portals(false))
break;
}

+   CallXactCallbacks(XACT_EVENT_PRE_COMMIT);
+
/*
 * The remaining actions cannot call any user-defined code, so it's safe
 * to start shutting down within-transaction services.  But note that most
 * of this stuff could still throw an error, which would switch us into
 * the transaction-abort path.
 */

and similarly in PrepareTransaction.  We're probably also going to need
a pre-subcommit event in CommitSubTransaction --- maybe we could get
away without that, but I'm not convinced, and we might as well change
both the XactCallback and SubXactCallback APIs at the same time.

Note that this doesn't come near a true two-phase-commit protocol;
there will still be a window wherein we've done COMMIT on the remote
side but the local side could fail and decide to roll back.  However,
the possible errors in that window are limited and unusual, so it's
not clear to me that it's worth working harder than this.
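
For illustration, a minimal sketch of how a callback consumer might use
the proposed event; the helper names (my_xact_callback, do_remote,
do_remote_noerror) are made up, not from any submitted patch:

#include "postgres.h"
#include "access/xact.h"

/* hypothetical helpers, assumed to exist elsewhere in the FDW */
extern void do_remote(const char *sql);
extern void do_remote_noerror(const char *sql);

static void
my_xact_callback(XactEvent event, void *arg)
{
	switch (event)
	{
		case XACT_EVENT_PRE_COMMIT:	/* the proposed new event */
			/* still allowed to ereport(ERROR) here; a failed remote
			 * commit aborts the local transaction cleanly */
			do_remote("COMMIT TRANSACTION");
			break;
		case XACT_EVENT_ABORT:
			/* post-abort: must not throw, best effort only */
			do_remote_noerror("ABORT TRANSACTION");
			break;
		default:
			break;
	}
}

static void
register_my_callback(void)
{
	/* registered once per backend, e.g. at first remote connection */
	RegisterXactCallback(my_xact_callback, NULL);
}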

Any objections?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Tomas Vondra
On 14.2.2013 22:24, Alvaro Herrera wrote:
 Alvaro Herrera wrote:
 Here's a ninth version of this patch.  (version 8 went unpublished).  I
 have simplified a lot of things and improved some comments; I think I
 understand much of it now.  I think this patch is fairly close to
 committable, but one issue remains, which is this bit in
 pgstat_write_statsfiles():
 
 I've marked this as Waiting on author for the time being.  I'm going to
 review/work on other patches now, hoping that Tomas will post an updated
 version in time for it to be considered for 9.3.

Sadly I have no idea how to fix that, and I think the solution you
suggested in the previous messages does not actually do the trick :-(

Tomas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] I think we need PRE_COMMIT events for (Sub)XactCallbacks

2013-02-14 Thread Simon Riggs
On 14 February 2013 23:49, Tom Lane t...@sss.pgh.pa.us wrote:

 Any objections?

Makes sense to me.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/ports/pgcheckdir.c - Ignore dot directories...

2013-02-14 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Agreed.  The attached patch modifies pg_check_dir() to report about
 invisible and lost+found directory entries, and give more helpful
 messages to the user.

I'm not terribly thrilled with special-casing 'lost+found' like that,
since it's an extremely filesystem-dependent thing that even today
probably only applies to a minority of our installed platforms.

The special case for dotfiles might be useful, not because of any
connection to mount points but just because someone might forget
that such could be lurking in a directory that looks empty.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] src/ports/pgcheckdir.c - Ignore dot directories...

2013-02-14 Thread Bruce Momjian
On Thu, Feb 14, 2013 at 07:21:27PM -0500, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Agreed.  The attached patch modifies pg_check_dir() to report about
  invisible and lost+found directory entries, and give more helpful
  messages to the user.
 
 I'm not terribly thrilled with special-casing 'lost+found' like that,
 since it's an extremely filesystem-dependent thing that even today
 probably only applies to a minority of our installed platforms.
 
 The special case for dotfiles might be useful, not because of any
 connection to mount points but just because someone might forget
 that such could be lurking in a directory that looks empty.

Yeah, I agree on both points.  I am not sure the patch is worth it for
just the dot-file output.

Want a crazy idea?  '.' and '..' have different major device numbers on
the top directory of a mount point.  We could test for that and
prevent/warn about creating data directories on top-level directories of
mount points.
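
A minimal sketch of that test (POSIX-only; the function name is made up):

#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* made-up helper: true if "path" looks like the top of a mount point,
 * i.e. "." and ".." live on different devices (or are the same inode,
 * as at the filesystem root) */
static bool
dir_is_mount_point(const char *path)
{
	char		buf[4096];
	struct stat st_dot,
				st_dotdot;

	snprintf(buf, sizeof(buf), "%s/.", path);
	if (stat(buf, &st_dot) != 0)
		return false;

	snprintf(buf, sizeof(buf), "%s/..", path);
	if (stat(buf, &st_dotdot) != 0)
		return false;

	return st_dot.st_dev != st_dotdot.st_dev ||
		st_dot.st_ino == st_dotdot.st_ino;
}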

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Andres Freund
Hi,

While investigating an anti-wraparound shutdown issue of Peter
H. Ezetta (cced) on IRC the issue came up that when you:

CREATE TABLE foo(id int, text text);
CREATE RULE _RETURN AS ON SELECT TO foo DO INSTEAD SELECT 1::int AS id, 
''::text AS text;

a) the view keeps its relfrozenxid value
b) the toast table remains

Peter is running (or rather migrating away from) 8.3 and in 8.3 toast
tables cannot be vacuumed by

1) manual VACUUMS, since vacuum() passes RELKIND_RELATION to
   vacuum_rel() which thus errors out when vacuuming either the view or
   the toast table directly:
	if (onerel->rd_rel->relkind != expected_relkind)
	{
		ereport(WARNING,
				(errmsg("skipping \"%s\" --- cannot vacuum indexes, views, or special system tables",
						RelationGetRelationName(onerel))));

2) autovacuum recognizes that the toast table needs vacuuming but uses
the following brute force trick to search for the table to find the
relation to vacuum:
	foreach(cell, toast_oids)
	{
		Oid			toastoid = lfirst_oid(cell);
		ListCell   *cell2;

		foreach(cell2, table_toast_list)
		{
			av_relation *ar = lfirst(cell2);

			if (ar->ar_toastrelid == toastoid)
			{
				table_oids = lappend_oid(table_oids, ar->ar_relid);
				break;
			}
		}
	}

Due to no respective element being in table_toast_list, nothing is
vacuumed and you cannot escape the situation. Not very nice. I wonder if
we should do something about it even though 8.3 is formally out of
support; not being able to migrate away from 8.3 because it shuts down
is kinda bad.

Due to some lucky coding, 8.4+'s autovacuum (I tested only HEAD, but the
code in 8.4 looks fine) manages to vacuum the toast relation even though
no main table exists for it, as it only consults the mapping for autovac
options. It's now also allowed to directly vacuum toast tables.

The current behaviour doesn't seem to be a terribly good idea. I propose
to drop the toast table and reset the relfrozenxid in DefineQueryRewrite
in the RelisBecomingView case. Currently the code only does:
 *
 * XXX what about getting rid of its TOAST table?  For now, we don't.
 */
if (RelisBecomingView)
{
RelationDropStorage(event_relation);
DeleteSystemAttributeTuples(event_relid);
}
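
A rough sketch of the shape this could take (the deletion call and its
flags are my assumption of how one might do it, not a finished patch):

if (RelisBecomingView)
{
	RelationDropStorage(event_relation);
	DeleteSystemAttributeTuples(event_relid);

	/* sketch: also get rid of the now-unreachable toast table */
	if (OidIsValid(event_relation->rd_rel->reltoastrelid))
	{
		ObjectAddress toastobject;

		toastobject.classId = RelationRelationId;
		toastobject.objectId = event_relation->rd_rel->reltoastrelid;
		toastobject.objectSubId = 0;
		performDeletion(&toastobject, DROP_RESTRICT,
						PERFORM_DELETION_INTERNAL);
	}

	/* sketch: and reset the pg_class row's relfrozenxid for
	 * event_relid, since views don't have one */
}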

Dropping the toast table seems important; it currently only works by
accident, and I really doubt everybody working on (auto-)vacuum is
aware of that case.
I would also vote for resetting relfrozenxid of the main relation, but
that's more of a cosmetic issue.

Opinions?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [RFC] ideas for a new Python DBAPI driver (was Re: [HACKERS] libpq test suite)

2013-02-14 Thread Andrew McNamara
For my Python DBAPI2 PostgreSQL driver I plan the following optimizations:

I suggest you have a look at my Python ocpgdb driver:

http://code.google.com/p/ocpgdb/

It uses the v3 binary protocol exclusively (to avoid the usual escaping
security issues). A number of gotchyas were discovered along the way -
in particular, you must be a lot more careful about types (as you note
in a later reply). There were also some issues with the v3 protocol,
most of which have been fixed now.

ocpgdb does not implement everything, just the bits I needed. That said,
other people/projects are using it in production, and it's proven to be
fast and stable.

-- 
Andrew McNamara, Senior Developer, Object Craft
http://www.object-craft.com.au/


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 due to no respective element being in in table_toast_list nothing is
 vacuumed and you cannot escape the situation. Not very nice. I wonder if
 we should do something about it even due 8.3 is formally out of support,

Out of support is out of support.  We're certainly not going to update
8.3 to fix corner cases that escaped notice for the five years it was in
support.  (And it's not true that you can't get out of it --- if nothing
else, you could manually update the toast table's relfrozenxid value.)

 The current behaviour doesn't seem to be a terribly good idea. I propose
 to drop the toast table and reset the relfrozenxid in DefineQueryRewrite
 in the RelisBecomingView case.

Yeah, probably worth doing.  At the time we thought that that code path
was just a short-term legacy thing for loading ancient pg_dump files.
However, given that even modern pg_dumps will use this syntax if
necessary to break circular dependencies for views, we're probably never
going to be rid of it completely.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Andres Freund
On 2013-02-14 20:47:11 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  due to no respective element being in in table_toast_list nothing is
  vacuumed and you cannot escape the situation. Not very nice. I wonder if
  we should do something about it even due 8.3 is formally out of support,
 
 Out of support is out of support.  We're certainly not going to update
 8.3 to fix corner cases that escaped notice for the five years it was in
 support.

Well, it's going to get more likely with age... But sure, I have no problem

  (And it's not true that you can't get out of it --- if nothing
 else, you could manually update the toast table's relfrozenxid value.)

Yea, that's what we ended up with. For the benefit of people searching
for the problem: if you hit strange wraparound issues that cannot be
fixed in 8.3, you can escape the issue with:

UPDATE pg_class
SET relfrozenxid = (txid_current() % (1::bigint<<32))::text::xid
WHERE NOT relfrozenxid = '0' AND relkind = 't'
    AND pg_class.oid IN (SELECT reltoastrelid FROM pg_class
                         WHERE relkind = 'v')
RETURNING *;

  The current behaviour doesn't seem to be a terribly good idea. I propose
  to drop the toast table and reset the relfrozenxid in DefineQueryRewrite
  in the RelisBecomingView case.
 
 Yeah, probably worth doing.  At the time we thought that that code path
 was just a short-term legacy thing for loading ancient pg_dump files.
 However, given that even modern pg_dumps will use this syntax if
 necessary to break circular dependencies for views, we're probably never
 going to be rid of it completely.

Yep, that's what I thought. Will write something up.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2013-02-14 20:47:11 -0500, Tom Lane wrote:
 Yeah, probably worth doing.  At the time we thought that that code path
 was just a short-term legacy thing for loading ancient pg_dump files.
 However, given that even modern pg_dumps will use this syntax if
 necessary to break circular dependencies for views, we're probably never
 going to be rid of it completely.

 Yep, that's what I thought. Will write something up.

BTW, it strikes me that we *could* get pg_dump to stop doing this if we
wanted.  Instead of the CREATE TABLE/CREATE RULE hack, we could have it
create a dummy view with the right rowtype like so:

CREATE VIEW v AS
  SELECT null::typename1 AS colname1,
 null::typename2 AS colname2, ... ;

then dump whatever had the circular-dependency issue with the view's
rowtype, and finally use CREATE OR REPLACE VIEW to replace the dummy
definition with the proper one.

This wouldn't really have any short-term benefit --- in particular, it
doesn't relieve the pressure to fix DefineQueryRewrite as you propose.
The advantage is that in ten years or so there would be no pg_dump files
anywhere using CREATE RULE _RETURN, and so we could hope to eventually
deprecate that syntax.  Which would let us get rid of the
RelIsBecomingView code path, and maybe have a bit more wiggle room to
remove or redesign the rule system.

That payoff is a little bit too far off to motivate me to do anything in
this line personally, but in case anybody else is more excited about it,
I thought I'd get the idea into the archives.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
 That payoff is a little bit too far off to motivate me to do anything in
 this line personally, but in case anybody else is more excited about it,
 I thought I'd get the idea into the archives.

Any objection to making it a TODO?  Might be a bit light for a GSOC
project, but perhaps a beginner (or a really motivated student who wants
to show that they are willing, able, and excited to contribute...) will
pick it up.

Thanks,

Stephen




Re: [HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Tom Lane
Stephen Frost sfr...@snowman.net writes:
 * Tom Lane (t...@sss.pgh.pa.us) wrote:
 That payoff is a little bit too far off to motivate me to do anything in
 this line personally, but in case anybody else is more excited about it,
 I thought I'd get the idea into the archives.

 Any objection to making it a TODO?

None here.  I was thinking it might be a useful finger exercise for
someone who wanted to learn about pg_dump.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE RULE _RETURN and toast tables

2013-02-14 Thread Stephen Frost
* Tom Lane (t...@sss.pgh.pa.us) wrote:
  Any objection to making it a TODO?
 
 None here.  I was thinking it might be a useful finger exercise for
 someone who wanted to learn about pg_dump.

Done.  Thanks.

Stephen




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Alvaro Herrera
Josh Berkus wrote:
 Folks,
 
 Once again, Google is holding Summer of Code.  We need to assess whether
 we want to participate this year.
 
 Questions:
 
 - Who wants to mentor for GSOC?

I am open to being a mentor.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Stephen Frost
* Josh Berkus (j...@agliodbs.com) wrote:
 - Who wants to mentor for GSOC?

I could be a mentor.

 - Who can admin for GSOC?  Thom?
 
 - Please suggest project ideas for GSOC

Will think on this.

Thanks,

Stephen




Re: [HACKERS] [pgsql-advocacy] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Atri Sharma
I forgot to mark all.

I could be a co-mentor, helping with overall coordination and supporting
all the students in getting familiar with the project, the community, etc.

Atri

Sent from my iPad

On 14-Feb-2013, at 23:32, Josh Berkus j...@agliodbs.com wrote:

 Folks,
 
 Once again, Google is holding Summer of Code.  We need to assess whether
 we want to participate this year.
 
 Questions:
 
 - Who wants to mentor for GSOC?
 
 - Who can admin for GSOC?  Thom?
 
 - Please suggest project ideas for GSOC
 
 - Students seeing this -- please speak up if you have projects you plan
 to submit.
 
 -- 
 Josh Berkus
 PostgreSQL Experts Inc.
 http://pgexperts.com
 
 
 -- 
 Sent via pgsql-advocacy mailing list (pgsql-advoc...@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-advocacy


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Call for Google Summer of Code mentors, admins

2013-02-14 Thread Atri Sharma
I forgot to mark all.

I could be a co-mentor, helping with overall coordination and supporting
all the students in getting familiar with the project, the community, etc.

Atri

Sent from my iPad

On 15-Feb-2013, at 8:34, Stephen Frost sfr...@snowman.net wrote:

 * Josh Berkus (j...@agliodbs.com) wrote:
 - Who wants to mentor for GSOC?
 
 I could be a mentor.
 
 - Who can admin for GSOC?  Thom?
 
 - Please suggest project ideas for GSOC
 
 Will think on this.
 
Thanks,
 
Stephen


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: PATCH: Split stats file per database WAS: [HACKERS] autovacuum stress-testing our system

2013-02-14 Thread Alvaro Herrera
Tomas Vondra wrote:

 I don't see how that changes the autovacuum behavior. Can you explain
 that a bit more?

It might be that I'm all wet on this.  I'll poke at it some more.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


archive_timeout behaviour when archive_mode is off (was Re: [HACKERS] Too frequent checkpoints ?)

2013-02-14 Thread Pavan Deolasee
(changing subject)

On Thu, Feb 14, 2013 at 11:48 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Mon, Feb 11, 2013 at 5:46 PM, Pavan Deolasee
  I also noticed that the WAL file switch
  happens after archive_timeout seconds irrespective of whether
  archive_mode is turned ON or not. This happens because we don't check
  XLogArchivingActive() in the CheckArchiveTimeout() function. It looks
  wrong to me.

 +1 to fix this. I've not heard the use case where archive_timeout needs to
 be used even in not archive mode...


Ok, I will write a patch to fix this. I wonder if it is worth
backpatching though. The code has been like that for a long time, and
given that we haven't heard any complaints about it, maybe it's not
worth fixing in the stable branches. But I wonder if anyone else
thinks otherwise.
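
Roughly, the guard would look like this (a sketch only; the exact
placement inside CheckArchiveTimeout() is still to be worked out):

static void
CheckArchiveTimeout(void)
{
	/* proposed: forced segment switches only make sense when archiving */
	if (!XLogArchivingActive() || XLogArchiveTimeout <= 0)
		return;

	/* ... existing switch-and-notify logic continues here ... */
}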

I think we should also update the documentation to be clear that
archive_timeout and archive_command are used only when archive_mode is
turned on.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] I think we need PRE_COMMIT events for (Sub)XactCallbacks

2013-02-14 Thread Pavan Deolasee
On Fri, Feb 15, 2013 at 5:19 AM, Tom Lane t...@sss.pgh.pa.us wrote:


 +   CallXactCallbacks(XACT_EVENT_PRE_COMMIT);
 +
 /*
  * The remaining actions cannot call any user-defined code, so it's safe
  * to start shutting down within-transaction services.  But note that most
  * of this stuff could still throw an error, which would switch us into
  * the transaction-abort path.
  */

 and similarly in PrepareTransaction.  We're probably also going to need
 a pre-subcommit event in CommitSubTransaction --- maybe we could get
 away without that, but I'm not convinced, and we might as well change
 both the XactCallback and SubXactCallback APIs at the same time.


I noticed you added a pre-event for commit/prepare/subcommit. That
looks good. Is there a case for adding one for abort/subabort too? I
wonder if we would want to do some cleanup on the foreign servers
before the transaction is abort-recorded on the main server. For
example, if someone wants to implement 2PC using transaction
callbacks and needs a mechanism to roll back prepared transactions
because some foreign server refused to prepare, I'm not sure if she
can use XACT_EVENT_ABORT, because that callback is called while
interrupts are disabled and so it may not be safe to communicate with
the foreign servers.

 Note that this doesn't come near a true two-phase-commit protocol;
 there will still be a window wherein we've done COMMIT on the remote
 side but the local side could fail and decide to roll back.  However,
 the possible errors in that window are limited and unusual, so it's
 not clear to me that it's worth working harder than this.


How about supporting all three modes: 1. the current behaviour of
immediately committing at the end of a statement, 2. a full 2PC, and
3. what you are proposing?  The first will be fast because you don't
need an additional round-trip at commit/abort, but it is terrible WRT
data consistency.  The second is the most reliable, but will have
increased transaction commit time.  The third is a nice balance and
can be the default.

I understand this might be too much work for now. But it will be
awesome if we can have all the machinery in place to support these
configurable modes in the future.

Thanks,
Pavan
-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] A question about the psql \copy command

2013-02-14 Thread Amit Kapila
On Thursday, February 07, 2013 6:15 PM Etsuro Fujita wrote:
 Through the work on the patch [1], I had a question about the psql
 \copy command.  1) is permitted but 2) is not:
 1) \copy foo from stdin ;
 2) \copy foo from stdin;
 Is this intentional?  I think it would be better to allow for 2).
 Attached is a patch.

Verified that the attached patch resolves the problem you mentioned.
Ran the regression tests to ensure that it does not break any existing
syntax.

Although this is a minor bug, IMO fixing it will improve consistency
among backslash commands, because the others work fine with ";".

With Regards,
Amit Kapila.
 




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers