Re: [HACKERS] setting separate values of replication parameters to each standby to provide more granularity

2013-10-01 Thread Pavan Deolasee
On Thu, Sep 26, 2013 at 6:00 PM, Samrat Revagade
revagade.sam...@gmail.com wrote:

 Hi,

 How about providing more granularity to replication, by setting separate
 values of replication parameters to each standby
 for example:
 standby1.wal_sender_timeout = 50s
 standby2.wal_sender_timeout = 40s

 The idea is to allow configuration of standby servers such that they have
 their own set of replication parameters, as per their requirements.


How does this interplay with the synchronous_standby_names parameter? Or
do you think that becomes irrelevant if we do what you are suggesting
above? AFAIK synchronous_standby_names was added because we wanted to give
the master control over which standbys can become SYNC standbys and in
what order of priority. But if we do what you are suggesting above, I
wonder if we can just have an explicit knob to make a standby a SYNC
standby. Say something like: standby1.sync_mode = SYNC|ASYNC.

The other purpose of synchronous_standby_names is to define order of
preference for sync standbys. Even that can then be explicitly specified
using this mechanism.

I am not sure, but maybe we could even allow synchronous_commit to be set
at the level of each standby. Then, depending on the synchronous_commit
setting of each standby and their priorities, also explicitly defined by
this mechanism, the master would decide which standbys to wait for at
commit time.
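For illustration, the proposal might end up looking something like this in postgresql.conf (hypothetical syntax — none of these per-standby parameters exist; the standby names are whatever identifier the mechanism would key on, e.g. application_name):

```ini
# Hypothetical per-standby replication settings (not implemented)
standby1.wal_sender_timeout = 50s
standby1.sync_mode = SYNC            # explicit sync-standby knob
standby1.sync_priority = 1

standby2.wal_sender_timeout = 40s
standby2.sync_mode = ASYNC
standby2.synchronous_commit = remote_write
```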

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


Re: [HACKERS] review: psql and pset without any arguments

2013-10-01 Thread Pavel Stehule
Hello all

I think almost everything is done now.

I fixed the help output and appended a simple test.

But these are only cosmetic changes.

Comments?

Regards

Pavel Stehule



2013/9/30 Gilles Darold gilles.dar...@dalibo.com

 On 30/09/2013 17:35, Peter Eisentraut wrote:
  Please remove the tabs from the SGML files.

 Done. I've also fixed the typo reported by Ian. Here is the attached v4
 patch.

 Thanks a lot for your review.

 Regards,

 --
 Gilles Darold
 Database administrator
 http://dalibo.com - http://dalibo.org




patch_pset_v5.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] patch: pset autocomplete bugfix

2013-10-01 Thread Pavel Stehule
Hello

When I did a review of the \pset improvements, I found that not all
possible options are supported by autocomplete. Here is a fix.

Regards

Pavel Stehule


pset-autocomplete-fix.patch
Description: Binary data



Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-10-01 Thread Sameer Thakur
On Tue, Oct 1, 2013 at 12:48 AM, Daniel Farina-5 [via PostgreSQL]
ml-node+s1045698n5772887...@n5.nabble.com wrote:

 On Sep 30, 2013 4:39 AM, Sameer Thakur [hidden email] wrote:

  Also, for onlookers, I have changed this patch around to do the
  date-oriented stuff but want to look it over before stapling it up and
  sending it.  If one cannot wait, one can look at
  https://github.com/fdr/postgres/tree/queryid.  The squashed-version of
  that history contains a reasonable patch I think, but a re-read often
  finds something for me and I've only just completed it yesterday.
 

 I did the following
 1. Forked from fdr/postgres
 2. cloned branch queryid
 3. squashed
 22899c802571a57cfaf0df38e6c5c366b5430c74
 d813096e29049667151a49fc5e5cf3d6bbe55702
 picked
 be2671a4a6aa355c5e8ae646210e6c8e0b84ecb5
 4. usual make/make install/create extension pg_stat_statements.
 (pg_stat_statements.max=100).
 5. select * from pg_stat_statements_reset(), select * from
 pgbench_tellers.
 result below:

  userid | dbid  |          session_start           |        introduced         |                   query                   |      query_id       | calls | total_time | rows
 --------+-------+----------------------------------+---------------------------+-------------------------------------------+---------------------+-------+------------+------
      10 | 12900 | 2013-09-30 16:55:22.285113+05:30 | 1970-01-01 05:30:00+05:30 | select * from pg_stat_statements_reset(); | 2531907647060518039 |     1 |          0 |    1
      10 | 12900 | 2013-09-30 16:55:22.285113+05:30 | 1970-01-01 05:30:00+05:30 | select * from pgbench_tellers ;           | 7580333025384382649 |     1 |          0 |   10
 (2 rows)

 (block and I/O counter columns trimmed for readability; all were zero
 except shared_blks_hit = 1 for the pgbench_tellers row)


 I understand session_start and verified that it changes with each
 database restart to reflect current time.

 It should only restart when the statistics file cannot be loaded.

 I am not sure why introduced keeps showing the same
 1970-01-01 05:30:00+05:30 value. I thought it reflected the (most
 recent) time a query's statement statistics were added to the
 hashtable. Is this a bug?
 Will continue to test and try to understand the code.

 Yes, a bug.  There are a few calls to pgss_store and I must be submitting a
 zero value for the introduction time in one of those cases.

 Heh, I thought that was fixed, but maybe I broke something.  Like I said;
 preliminary. At the earliest I can look at this Wednesday, but feel free to
 amend and resubmit including my changes if you feel inclined and get to it
 first.

In pg_stat_statements.c, line 1440, I changed

if (instr == NULL)

to

if (instr == NULL || INSTR_TIME_IS_ZERO(instr->starttime))

This seemed to do the trick. I will continue to test some more.
regards
Sameer




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-stat-statements-calls-under-estimation-propagation-tp5738128p5772930.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.

Re: [HACKERS] pg_dump/restore encoding woes

2013-10-01 Thread Amit Khandekar
On 25 September 2013 12:49, Amit Khandekar
amit.khande...@enterprisedb.comwrote:


 0003-Convert-object-names-to-archive-encoding-before-matc.patch

 Use iconv(3) in pg_restore to do encoding conversion in the client. This
 involves a lot of autoconf changes that I'm not 100% sure about, other than
 that it's pretty straightforward.


  I haven't looked into this one yet.


I have verified that the *.m4 files that you have included match the ones
available through gettext. In general, the iconv-related config changes
look good.

---

I wanted to compare the new configure file included in the patch with one
I tried generating myself from the new configure.in modified by your
patch. It gives me:
Autoconf version 2.63 is required.
My autoconf is 2.68. Do we have to use exactly 2.63?

---
pg_restore.c:
cli_encoding can be declared in the if block.

---
pg_restore.c:
get_command_line_encoding() is called twice here:
cli_encoding = get_command_line_encoding();
 if (cli_encoding >= 0 && AH->encoding != get_command_line_encoding())

---
pg_restore.c:
You actually intended to use convert_simple_string_list() rather than
convert_string() below:
convert_string(cd, opts->schemaNames);
convert_string(cd, opts->functionNames);
convert_string(cd, opts->indexNames);
convert_string(cd, opts->triggerNames);






 - Heikki







Re: [HACKERS] Documentation for SET var_name FROM CURRENT

2013-10-01 Thread Amit Kapila
On Tue, Oct 1, 2013 at 10:25 AM, David Johnston pol...@yahoo.com wrote:
 Amit Kapila-2 wrote
 While reading documentation for SET command, I observed that FROM
 CURRENT syntax and its description is missing from SET command's
 syntax page (http://www.postgresql.org/docs/devel/static/sql-set.html).

 Do you think that documentation should be updated for the same or is
 there any reason why it is not documented?

 It is documented as part of CREATE FUNCTION since its use is only valid in
 that context.

   Not only as part of CREATE FUNCTION, but also as part of the ALTER
DATABASE, ALTER ROLE, and ALTER FUNCTION syntax. In all these places the
other forms of SET are also used and described.
I think you are right that the SET .. FROM CURRENT syntax is mainly used
in the context of a few other SQL statements, but as it is part of the
SET command, isn't it better to mention the syntax on the SET page and
perhaps explain a bit about its usage?

 The paragraph with the link to CREATE FUNCTION seems
 sufficient to notify and direct people to the needed description for this.

   After explaining the usage briefly, we could provide links to all
other statements where it can be used, but I think the SET .. FROM
CURRENT syntax should be listed with the SET command's other forms.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Minmax indexes

2013-10-01 Thread Robert Haas
On Mon, Sep 30, 2013 at 1:20 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 You can almost create a bounding box opclass in the current implementation,
 by mapping the < operator to contains and > to not contains. But there's no
 support for creating a new, larger bounding box on insert. It will just
 replace the max with the new value if it's greater, when it should
 create a whole new value to store in the index that covers both the old and
 the new values. (Or less than? I'm not sure which way those operators would
 work.)

This sounds an awful lot like GiST's union operation.  Actually,
following the GiST model of having union and consistent operations
might be a smart way to go.  Then the exact index semantics could be
decided by the opclass.  This might not even be that much extra code;
the existing consistent and union functions for GiST are pretty short.
That way, it'd be easy to add new opclasses with somewhat different
behavior; the common thread would be that every opclass of this new AM
works by summarizing a physical page range into a single indexed
value.  You might call the AM something like "summary" or "sparse" and
then have minmax_ops for your first opclass.

 In fact, even with regular b-tree operators, over integers for example, you
 don't necessarily want to store both min and max. If you only ever perform
 queries like WHERE col > ?, there's no need to track the min value. So to
 make this really general, you should be able to create an index on only the
 minimum or maximum. Or if you want both, you can store them as separate
 index columns. Something like:

 CREATE INDEX minindex ON table (col ASC); -- For min
 CREATE INDEX minindex ON table (col DESC);  -- For max
 CREATE INDEX minindex ON table (col ASC, col DESC); -- For both

This doesn't seem very general, since you're relying on the fact that
ASC and DESC map to < and >.  It's not clear what you'd write here if
you wanted to optimize #$ and @!.  But something based on opclasses
will work, since each opclass can support an arbitrary set of
operators.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Compact commits and changeset extraction

2013-10-01 Thread Robert Haas
On Mon, Sep 30, 2013 at 5:34 PM, Andres Freund and...@2ndquadrant.com wrote:
 What's wrong with #1?

 It seems confusing that a changeset stream in database #1 will contain
 commits (without corresponding changes) from database #2. Seems like a
 POLA violation to me.

I don't really see the problem.  A transaction could be empty for lots
of reasons; it may have obtained an XID without writing any data, or
whatever it changed may be outside the bounds of logical rep.  Maybe
you should just skip replay of transactions with no useful content.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [PERFORM] Cpu usage 100% on slave. s_lock problem.

2013-10-01 Thread Andres Freund
On 2013-10-01 03:51:50 +0300, Ants Aasma wrote:
 On Sat, Sep 28, 2013 at 12:53 AM, Andres Freund and...@2ndquadrant.com 
 wrote:
  What confuses me is that pg_read_barrier() is just a compiler barrier on
  x86[-64] in barrier.h. According to my knowledge it needs to be an
  lfence or the full barrier?
  The linked papers from Paul McKenney - which are a great read - seem to
  agree?
 
 x86 provides a pretty strong memory model, the only (relevant) allowed
 reordering is that stores may be delayed beyond subsequent loads. A
 simple and functionally accurate model of this is to ignore all caches
 and think of the system as a bunch of CPU's accessing the main memory
 through a shared bus, but each CPU has a small buffer that stores
 writes done by that CPU. Intel and AMD have performed such feats of black
 magic in the memory pipelines that this isn't a good performance
 model, but for correctness it is valid. The exceptions are a few
 unordered memory instructions that you have to specifically ask for
 and non-writeback memory, which mostly concerns the kernel. (See section 8.2 in
 [1] for details) The note by McKenney stating that lfence is required
 for a read barrier is for those unordered instructions. I don't think
 it's necessary in PostgreSQL.

I am actually referring to his x86 description where he states smp_rmb()
is defined as lock xadd;. But I've since done some research and that was
removed for most x86 platforms after Intel and AMD relaxed the
requirements to match the actually implemented reality.

For reference:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b6c7347fffa655a3000d9d41640d222c19fc3065

 As for the specific patch being discussed here. The read barrier is in
 the wrong place and with the wrong comment, and the write side is
 assuming that SpinLockAcquire() is a write barrier, which it isn't
 documented to do at the moment.

Lots of things we do pretty much already assume it is one - and I have a
hard time imagining any spinlock implementation that isn't a write
barrier. In fact it's the only reason we use spinlocks in quite a few
places.
What we are missing is that it's not guaranteed to be a compiler barrier
on all platforms :(. But that's imo a bug.

 The correct way to think of this is
 that StartupXLOG() does a bunch of state modifications and then
 advertises the fact that it's done by setting
 xlogctl->SharedRecoveryInProgress = false. The state modifications
 should better be visible to anyone seeing that last write, so you need
 one write barrier between the state modifications and setting the
 flag.

SpinLockAcquire() should provide that.

 On the other side, after reading the flag as false in
 RecoveryInProgress() the state modified by StartupXLOG() may be
 investigated, those loads better not happen before we read the flag.

Agreed.

 So we need a read barrier somewhere *after* reading the flag in
 RecoveryInProgress() and reading the shared memory structures, and in
 theory a full barrier if we are going to be writing data. In practice
 x86 is covered thanks to its memory model, Power is covered thanks to
 the control dependency and ARM would need a read barrier, but I don't
 think any real ARM CPU does speculative stores as that would be
 insane.

Does there necessarily have to be a visible control dependency?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Freezing without write I/O

2013-10-01 Thread Andres Freund
On 2013-10-01 04:47:42 +0300, Ants Aasma wrote:
 I still think we should have a macro for the volatile memory accesses.
 As a rule, each one of those needs a memory barrier, and if we
 consolidate them, or optimize them out, the considerations why this is
 safe should be explained in a comment. Race prone memory accesses
 should stick out like a sore thumb.

Agreed. The wait-free LW_SHARED thing[1] I posted recently had a simple

#define pg_atomic_read(atomic) (*(volatile uint32 *)(atomic))

That should be sufficient and easily greppable, right?

I generally think we need to provide some primitives for doing atomic
operations. There's lots of stuff that's not easy to accelerate further
without them.

Greetings,

Andres Freund

[1] 
http://archives.postgresql.org/message-id/20130926225545.GB26663%40awork2.anarazel.de

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Compact commits and changeset extraction

2013-10-01 Thread Andres Freund
On 2013-10-01 06:20:20 -0400, Robert Haas wrote:
 On Mon, Sep 30, 2013 at 5:34 PM, Andres Freund and...@2ndquadrant.com wrote:
  What's wrong with #1?
 
  It seems confusing that a changeset stream in database #1 will contain
  commits (without corresponding changes) from database #2. Seems like a
  POLA violation to me.
 
 I don't really see the problem.  A transaction could be empty for lots
 of reasons; it may have obtained an XID without writing any data, or
 whatever it changed may be outside the bounds of logical rep.

Sure. But all of them will have had a corresponding action in the
database. If your replication stream suddenly sees commits that you
cannot connect to any application activity... And it would depend on the
kind of commit record: you won't see it if a non-compact commit was used.
It also means we need to do work to handle that commit. If you have a
busy database and a less busy one, and you're only replicating the
non-busy one, that might be noticeable.

 Maybe you should just skip replay of transactions with no useful
 content.

Yes, I have thought about that as well. But I dislike it - how do we
define no useful content? If the user did a SELECT * FROM foo FOR
UPDATE, maybe it was done to coordinate stuff with the standby and the
knowledge about that commit is required?
It doesn't really seem our responsibility to detect that.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Freezing without write I/O

2013-10-01 Thread Ants Aasma
On Tue, Oct 1, 2013 at 2:13 PM, Andres Freund and...@2ndquadrant.com wrote:
 Agreed. The wait free LW_SHARED thing[1] I posted recently had a simple

 #define pg_atomic_read(atomic) (*(volatile uint32 *)(atomic))

 That should be sufficient and easily greppable, right?

Looks good enough for me. I would consider using a naming scheme that
accounts for possible future uint64 atomics.

Regards,
Ants Aasma
-- 
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de




Re: [HACKERS] [PERFORM] Cpu usage 100% on slave. s_lock problem.

2013-10-01 Thread Andres Freund
On 2013-10-01 14:31:11 +0300, Ants Aasma wrote:
  The correct way to think of this is
  that StartupXLOG() does a bunch of state modifications and then
  advertises the fact that it's done by setting
  xlogctl->SharedRecoveryInProgress = false. The state modifications
  should better be visible to anyone seeing that last write, so you need
  one write barrier between the state modifications and setting the
  flag.
 
  SpinLockAcquire() should provide that.
 
 Yes. It's notable that in this case it's a matter of correctness that
 the global state modifications do *not* share the critical section
 with the flag update. Otherwise the flag update may become visible
 before the state updates.

I think we're currently essentially assuming that not only
SpinLockAcquire() is a barrier but also that SpinLockRelease() is
one... - which is actually far less likely to be true.

  So we need a read barrier somewhere *after* reading the flag in
  RecoveryInProgress() and reading the shared memory structures, and in
  theory a full barrier if we are going to be writing data. In practice
  x86 is covered thanks to its memory model, Power is covered thanks to
  the control dependency and ARM would need a read barrier, but I don't
  think any real ARM CPU does speculative stores as that would be
  insane.
 
  Does there necessarily have to be a visible control dependency?
 
 Unfortunately no

That's what I thought :(

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [PERFORM] Cpu usage 100% on slave. s_lock problem.

2013-10-01 Thread Ants Aasma
On Tue, Oct 1, 2013 at 2:08 PM, Andres Freund and...@2ndquadrant.com wrote:
 As for the specific patch being discussed here. The read barrier is in
 the wrong place and with the wrong comment, and the write side is
 assuming that SpinLockAcquire() is a write barrier, which it isn't
 documented to do at the moment.

 Lots of things we do pretty much already assume it is one - and I have a
 hard time imagining any spinlock implementation that isn't a write
 barrier. In fact it's the only reason we use spinlocks in quite some
 places.
 What we are missing is that it's not guaranteed to be a compiler barrier
 on all platforms :(. But that's imo a bug.

Agree on both counts.

 The correct way to think of this is
 that StartupXLOG() does a bunch of state modifications and then
 advertises the fact that it's done by setting
 xlogctl->SharedRecoveryInProgress = false. The state modifications
 should better be visible to anyone seeing that last write, so you need
 one write barrier between the state modifications and setting the
 flag.

 SpinLockAcquire() should provide that.

Yes. It's notable that in this case it's a matter of correctness that
the global state modifications do *not* share the critical section
with the flag update. Otherwise the flag update may become visible
before the state updates.

 So we need a read barrier somewhere *after* reading the flag in
 RecoveryInProgress() and reading the shared memory structures, and in
 theory a full barrier if we are going to be writing data. In practice
 x86 is covered thanks to its memory model, Power is covered thanks to
 the control dependency and ARM would need a read barrier, but I don't
 think any real ARM CPU does speculative stores as that would be
 insane.

 Does there necessarily have to be a visible control dependency?

Unfortunately no, the compiler is quite free to hoist loads and even
stores out from the conditional block losing the control dependency.
:( It's quite unlikely to do so as it would be a very large code
motion and it probably has no reason to do it, but I don't see
anything that would disallow it. I wonder if we should just emit a
full fence there for platforms with weak memory models. Trying to
enforce the control dependency seems quite fragile.

Regards,
Ants Aasma
-- 
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de




Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-10-01 Thread Sameer Thakur
On Tue, Oct 1, 2013 at 12:48 AM, Daniel Farina-5 [via PostgreSQL]
ml-node+s1045698n5772887...@n5.nabble.com wrote:

 On Sep 30, 2013 4:39 AM, Sameer Thakur [hidden email] wrote:

  Also, for onlookers, I have changed this patch around to do the
  date-oriented stuff but want to look it over before stapling it up and
  sending it.  If one cannot wait, one can look at
  https://github.com/fdr/postgres/tree/queryid.  The squashed-version of
  that history contains a reasonable patch I think, but a re-read often
  finds something for me and I've only just completed it yesterday.
 

 I did the following
 1. Forked from fdr/postgres
 2. cloned branch queryid
 3. squashed
 22899c802571a57cfaf0df38e6c5c366b5430c74
 d813096e29049667151a49fc5e5cf3d6bbe55702
 picked
 be2671a4a6aa355c5e8ae646210e6c8e0b84ecb5
 4. usual make/make install/create extension pg_stat_statements.
 (pg_stat_statements.max=100).
 5. select * from pg_stat_statements_reset(), select * from
 pgbench_tellers.
 result below:

  userid | dbid  |          session_start           |        introduced         |                   query                   |      query_id       | calls | total_time | rows
 --------+-------+----------------------------------+---------------------------+-------------------------------------------+---------------------+-------+------------+------
      10 | 12900 | 2013-09-30 16:55:22.285113+05:30 | 1970-01-01 05:30:00+05:30 | select * from pg_stat_statements_reset(); | 2531907647060518039 |     1 |          0 |    1
      10 | 12900 | 2013-09-30 16:55:22.285113+05:30 | 1970-01-01 05:30:00+05:30 | select * from pgbench_tellers ;           | 7580333025384382649 |     1 |          0 |   10
 (2 rows)

 (block and I/O counter columns trimmed for readability; all were zero
 except shared_blks_hit = 1 for the pgbench_tellers row)


 I understand session_start and verified that it changes with each
 database restart to reflect current time.

 It should only restart when the statistics file cannot be loaded.

This seems to work fine.
1. Started the instance
2. Executed pg_stat_statements_reset(), select * from
pgbench_history, select * from pgbench_tellers. Got the following in the
pg_stat_statements view:
 userid | dbid  |          session_start           |            introduced            |                   query                   |       query_id       | calls | rows
--------+-------+----------------------------------+----------------------------------+-------------------------------------------+----------------------+-------+------
     10 | 12900 | 2013-10-01 17:43:26.667074+05:30 | 2013-10-01 17:43:43.724301+05:30 | select * from pgbench_history;            |  -165801328395488047 |     1 |    0
     10 | 12900 | 2013-10-01 17:43:26.667074+05:30 | 2013-10-01 17:43:37.379785+05:30 | select * from pgbench_tellers;            |  8376871363863945311 |     1 |   10
     10 | 12900 | 2013-10-01 17:43:26.667074+05:30 | 2013-10-01 17:43:26.667178+05:30 | select * from pg_stat_statements_reset(); | -1061018443194138344 |     1 |    1
(3 rows)

(total_time and the block/I/O counter columns trimmed for readability;
total_time was 0 for all rows, and the counters were all zero except
shared_blks_read = 1 for the pgbench_tellers row)

Then restarted the server and saw pg_stat_statements view again.

userid | dbid  |  session_start   |

Re: [HACKERS] pgbench - exclude pthread_create() from connection start timing (fwd)

2013-10-01 Thread Fabien COELHO


[oops, resent because stalled, wrong From!]

Hello Noah,


Thread create time seems to be expensive as well, maybe up to 0.1
seconds under some conditions (?). Under --rate, this create delay
means that throttling is lagging behind schedule by about that time,
so all the first transactions are trying to catch up.


threadRun() already initializes throttle_trigger with a fresh timestamp.
Please detail how the problem remains despite that.


Indeed, I did this kludge because I could not rely on the pre-fork start
time, as it was (possibly) creating a rush at the beginning of the run under
--rate.


The version I submitted takes the start time after the thread is created and
uses it directly for throttling, so the start time is taken once per thread
and used throughout, because the initial one cannot be relied on.



[...]


Fine detailed analysis!


Opinions, other ideas?


I do not think that there is a clean and simple way to take the start/stop
period into account when computing the global performance of a run. The TPC-C
benchmark says to ignore the warmup/closing periods, whatever they are, and
to take measurements only within the steady state. However, the full graph
must be provided when the benchmark results are reported.


About better measures: if I could rely on having threads, I would simply
synchronize the threads at the beginning so that they actually start after
they are all created, and one thread would decide when to stop and set a
shared volatile variable to stop all transactions more or less at once. In
this case, the thread start time would be taken just after the
synchronization, and taking it only in thread 0 would probably be enough.


Note that this is pretty standard stuff with threads, ISTM that it would solve 
most of the issues, *but* this is not possible with the thread fork emulation 
implemented by pgbench, which really means no threads at all.


A possible compromise would be to do just that when actual threads are used,
and leave things more or less as they are when fork emulation is on...


--
Fabien.




Re: [HACKERS] Documentation for SET var_name FROM CURRENT

2013-10-01 Thread David Johnston
Amit Kapila-2 wrote
 On Tue, Oct 1, 2013 at 10:25 AM, David Johnston polobo@ wrote:
 Amit Kapila-2 wrote
 While reading documentation for SET command, I observed that FROM
 CURRENT syntax and its description is missing from SET command's
 syntax page (http://www.postgresql.org/docs/devel/static/sql-set.html).

 Do you think that documentation should be updated for the same or is
 there any reason why it is not documented?

 It is documented as part of CREATE FUNCTION since its use is only valid
 in
 that context.
 
Not only as part of CREATE FUNCTION, but as part of ALTER
 DATABASE, ALTER ROLE, ALTER FUNCTION syntax as well. In all these
 places other syntax of SET is also used and described.
 I think you are right that the syntax SET .. FROM CURRENT is mainly used
 in context with a few other SQL statements, but as it is part of the SET
 command, isn't it better to mention the syntax on the SET page and maybe
 explain a bit about its usage?
 
 The paragraph with the link to CREATE FUNCTION seems
 sufficient to notify and direct people to the needed description for
 this.
 
 After explaining the usage briefly, we could provide links to all other
 statements where it can be used, but I think the SET .. FROM CURRENT
 syntax should be there with the SET command's other syntax.

FROM CURRENT is only valid as part of the SET sub-command attached to CREATE
FUNCTION.

Yes, a number of SQL commands take a sub-command called SET.  These are not
the same as the top-level SQL SET command and have their own rules and
syntax defined on the parent command's page.  They share a keyword to make
the grammar and usage saner, but these are semantically different statements.

A paragraph cross-referencing where SET sub-commands exist has merit but
since the main SET command does not accept FROM CURRENT it (FC) should not
be included in its page directly.

If you want to put forth an actual documentation change and get a concrete
opinion then by all means.  If you want someone else to do it I'm detailing
why that is unlikely to happen.

The link to section 18.1 from the SET command documentation covers most of
the other relevant info about how to go about configuring the system.  The
SQL command reference is generally a much more narrow scope focusing on the
syntax of the specific command listed.  The current documentation for SET
conforms to this reality.

We both know how the settings sub-system works.  I don't see many novice
questions, though my only exposure is the mailing list, about this kind of
thing so the documentation seems effective.  Other than supposed
completeness are there other reasons you feel a change regarding FROM
CURRENT or SETtings in general need modification?

David J.






--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Documentation-for-SET-var-name-FROM-CURRENT-tp5772920p5772958.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SSL renegotiation

2013-10-01 Thread Alvaro Herrera
Since back branches releases are getting closer, I would like to push
this to all supported branches.  To avoid a compatibility nightmare in
case the new die-on-delayed-renegotiation behavior turns out not to be
so great, I think it would be OK to set the error level to WARNING in
all branches but master (and reset the byte count, to avoid filling the
log).  I would also add a CONTEXT line with the current counter value
and the configured limit, and a HINT to report to pg-hackers.  That way
we will hopefully have more info on problems in the field.

Anybody opposed to this?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] [PATCH] Fix pg_isolation_regress to work outside its build directory

2013-10-01 Thread Andres Freund
Hi,

isolation_main.c executes isolationtester with:

snprintf(psql_cmd + offset, sizeof(psql_cmd) - offset,
	 SYSTEMQUOTE "\"./isolationtester\" \"dbname=%s\" < \"%s\" > \"%s\" 2>&1" SYSTEMQUOTE,

That obviously fails if pg_isolation_regress is invoked when CWD is not
its build directory. That's rather annoying if one wants to write an
isolation test for a contrib module or similar.
The attached patch fixes that using port.h's find_other_exec().

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
From 80e5a1de55e3533da5b4b1b4aed7533e3d497bfa Mon Sep 17 00:00:00 2001
From: Andres Freund and...@anarazel.de
Date: Tue, 1 Oct 2013 15:29:19 +0200
Subject: [PATCH] Fix pg_isolation_regress to work outside its build directory

---
 src/test/isolation/isolation_main.c  | 16 ++--
 src/test/isolation/isolationtester.c |  5 -
 src/test/regress/pg_regress.c|  2 +-
 src/test/regress/pg_regress.h|  2 +-
 src/test/regress/pg_regress_main.c   |  2 +-
 5 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/src/test/isolation/isolation_main.c b/src/test/isolation/isolation_main.c
index 48a0e12..b3a8ff0 100644
--- a/src/test/isolation/isolation_main.c
+++ b/src/test/isolation/isolation_main.c
@@ -12,6 +12,9 @@
 
 #include "pg_regress.h"
 
+char isolation_exec[MAXPGPATH];
+#define PG_ISOLATION_VERSIONSTR "isolationtester (PostgreSQL) " PG_VERSION "\n"
+
 /*
  * start an isolation tester process for specified file (including
  * redirection), and return process ID
@@ -58,7 +61,8 @@ isolation_start_test(const char *testname,
 			   "%s ", launcher);
 
 	snprintf(psql_cmd + offset, sizeof(psql_cmd) - offset,
-			 SYSTEMQUOTE "\"./isolationtester\" \"dbname=%s\" < \"%s\" > \"%s\" 2>&1" SYSTEMQUOTE,
+			 SYSTEMQUOTE "\"%s\" \"dbname=%s\" < \"%s\" > \"%s\" 2>&1" SYSTEMQUOTE,
+			 isolation_exec,
 			 dblist->str,
 			 infile,
 			 outfile);
@@ -76,8 +80,16 @@ isolation_start_test(const char *testname,
 }
 
 static void
-isolation_init(void)
+isolation_init(int argc, char **argv)
 {
+	/* look for isolationtester binary */
+	if (find_other_exec(argv[0], "isolationtester",
+		PG_ISOLATION_VERSIONSTR, isolation_exec) != 0)
+	{
+		fprintf(stderr, _("could not find proper isolationtester binary\n"));
+		exit(2);
+	}
+
 	/* set default regression database name */
 	add_stringlist_item(&dblist, "isolationtest");
 }
diff --git a/src/test/isolation/isolationtester.c b/src/test/isolation/isolationtester.c
index f280779..7f4c364 100644
--- a/src/test/isolation/isolationtester.c
+++ b/src/test/isolation/isolationtester.c
@@ -85,13 +85,16 @@ main(int argc, char **argv)
 	PQExpBufferData wait_query;
 	int			opt;
 
-	while ((opt = getopt(argc, argv, "n")) != -1)
+	while ((opt = getopt(argc, argv, "nV")) != -1)
 	{
 		switch (opt)
 		{
 			case 'n':
 				dry_run = true;
 				break;
+			case 'V':
+				puts("isolationtester (PostgreSQL) " PG_VERSION);
+				exit(0);
 			default:
 				fprintf(stderr, "Usage: isolationtester [-n] [CONNINFO]\n");
 				return EXIT_FAILURE;
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
index b632326..94fc0b9 100644
--- a/src/test/regress/pg_regress.c
+++ b/src/test/regress/pg_regress.c
@@ -1968,7 +1968,7 @@ regression_main(int argc, char *argv[], init_function ifunc, test_function tfunc
 	 * We call the initialization function here because that way we can set
 	 * default parameters and let them be overwritten by the commandline.
 	 */
-	ifunc();
+	ifunc(argc, argv);
 
 	if (getenv("PG_REGRESS_DIFF_OPTS"))
 		pretty_diff_opts = getenv("PG_REGRESS_DIFF_OPTS");
diff --git a/src/test/regress/pg_regress.h b/src/test/regress/pg_regress.h
index 5eb4d41..a5798c9 100644
--- a/src/test/regress/pg_regress.h
+++ b/src/test/regress/pg_regress.h
@@ -30,7 +30,7 @@ typedef PID_TYPE(*test_function) (const char *,
 		  _stringlist **,
 		  _stringlist **,
 		  _stringlist **);
-typedef void (*init_function) (void);
+typedef void (*init_function) (int argc, char **argv);
 
 extern char *bindir;
 extern char *libdir;
diff --git a/src/test/regress/pg_regress_main.c b/src/test/regress/pg_regress_main.c
index f13e27f..147e24c 100644
--- a/src/test/regress/pg_regress_main.c
+++ b/src/test/regress/pg_regress_main.c
@@ -84,7 +84,7 @@ psql_start_test(const char *testname,
 }
 
 static void
-psql_init(void)
+psql_init(int argc, char **argv)
 {
 	/* set default regression database name */
 	add_stringlist_item(&dblist, "regression");
-- 
1.8.3.251.g1462b67




Re: [HACKERS] pgbench - exclude pthread_create() from connection start timing

2013-10-01 Thread Fabien COELHO


Hello Noah,


Thread create time seems to be expensive as well, maybe up to 0.1
seconds under some conditions (?). Under --rate, this create delay
means that throttling is lagging behind schedule by about that time,
so all the first transactions are trying to catch up.


threadRun() already initializes throttle_trigger with a fresh timestamp.
Please detail how the problem remains despite that.


Indeed, I did this kludge because I could not rely on the before fork 
start time as it was (possibly) creating a rush at the beginning of the 
run under --rate.


The version I submitted takes the start time after the thread is created, 
and use it directly for throttling, so the start time is taken once per 
thread and used instead of retaking it because the first one cannot be 
relied on.



[...]


Fine detailed analysis!


Opinions, other ideas?


I do not think that there is a clean and simple way to take the start/stop 
period into account when computing the global performance of a run. The TPC-C 
benchmark says to ignore the warmup/closure periods, whatever they are, 
and to take measurements only within the steady state. However, the full 
graph must be provided when the benchmark results are reported.


About better measures: If I could rely on having threads, I would simply 
synchronise the threads at the beginning so that they actually start after 
they are all created, and one thread would decide when to stop and set a 
shared volatile variable to stop all transactions more or less at once. In 
this case, the thread start time would be taken just after the 
synchronization, and maybe only by thread 0 would be enough.


Note that this is pretty standard stuff with threads, ISTM that it would 
solve most of the issues, *but* this is not possible with the thread fork 
emulation implemented by pgbench, which really means no threads at all.


A possible compromise would be to do just that when actual threads are 
used, and let it more or less as it is when fork emulation is on...


--
Fabien.




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Robert Haas
Review comments on 0004:

- In heap_insert and heap_multi_insert, please rewrite the following
comment for clarity: "add record for the buffer without actual content
thats removed if fpw is done for that buffer".
- In heap_delete, the assignment to need_tuple_data() need not
separately check RelationNeedsWAL(), as RelationIsLogicallyLogged()
does that.
- It seems that HeapSatisfiesHOTandKeyUpdate is now
HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
afraid that something unscalable is happening to this function.  On a
related node, any overhead added here costs broadly; I'm not sure if
there's enough to worry about.
- MarkCurrentTransactionIdLoggedIfAny has superfluous braces.
- AssignTransactionId changes Mustn't to May not, which seems like
an entirely pointless change.
- You've removed a blank line just before IsSystemRelation; this is an
unnecessary whitespace change.
- Do none of the callers of IsSystemRelation() care about the fact
that you've considerably changed the semantics?
- RelationIsDoingTimetravel is still a crappy name.  How about
RelationRequiredForLogicalDecoding?  And maybe the reloption
treat_as_catalog_table can become required_for_logical_decoding.
- I don't understand the comment in xl_heap_new_cid to the effect that
the combocid isn't needed for decoding.  How not?
- xlogreader.h includes an additional header with no other changes.
Doesn't seem right.
- relcache.h has a cuddled curly brace.

Review comments on 0003:

I have no problem with caching the primary key in the relcache, or
with using that as the default key for logical decoding, but I'm
extremely uncomfortable with the fallback strategy when no primary key
exists.  Choosing any old unique index that happens to present itself
as the primary key feels wrong to me.  The choice of key is
user-visible.  If we say, update the row with a = 1 to
(a,b,c)=(2,2,2), that's different than saying update the row with b =
1 to (a,b,c)=(2,2,2).  Suppose the previous contents of the target
table are (a,b,c)=(1,2,3) and (a,b,c)=(2,1,4).  You get different
answers depending on which you choose.  I think multi-master
replication just isn't going to work unless the two sides agree on the
key, and I think you'll get strange conflicts unless that key is
chosen by the user according to their business logic.

In single-master replication, being able to pick the key is clearly
not essential for correctness, but it's still desirable, because if
the system picks the wrong key, the change stream will in the end
get the database to the right state, but it may do so by turning one
record into a different one from the user's perspective.

All in all, it seems to me that we shouldn't try to punt.  Maybe we
should have something that works like ALTER TABLE name CLUSTER ON
index_name to configure which index should be used for logical
replication.  Possibly this same syntax could be used as ALTER
MATERIALIZED VIEW to set the candidate key for that case.

What happens if new unique indexes are created or old ones dropped
while logical replication is running?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Cmpact commits and changeset extraction

2013-10-01 Thread Robert Haas
On Tue, Oct 1, 2013 at 7:26 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-10-01 06:20:20 -0400, Robert Haas wrote:
 On Mon, Sep 30, 2013 at 5:34 PM, Andres Freund and...@2ndquadrant.com 
 wrote:
  What's wrong with #1?
 
  It seems confusing that a changeset stream in database #1 will contain
  commits (without corresponding changes) from database #2. Seems like a
  POLA violation to me.

 I don't really see the problem.  A transaction could be empty for lots
 of reasons; it may have obtained an XID without writing any data, or
 whatever it's changed may be outside the bounds of logical rep.

 Sure. But all of them will have had a corresponding action in the
 database. If your replication stream suddenly sees commits that you
 cannot connect to any application activity... And it would depend on the
 kind of commit, you won't see it if a non-compact commit was used.
 It also means we need to do work to handle that commit. If you have a
 busy and a less so database and you're only replicating the non-busy
 one, that might be noticeable.

Maybe.  The original reason we added compact commits was because we
thought that making unlogged tables logged and vice versa was going
to require adding still more stuff to the commit record.  I'm no
longer sure that's the case and, in any case, nobody's worked out the
design details.  But I can't help thinking more stuff's likely to come
up in the future.  I'm certainly willing to entertain proposals for
restructuring that, but I don't really want to just throw it out.

 Maybe you should just skip replay of transactions with no useful
 content.

 Yes, I have thought about that as well. But I dislike it - how do we
 define no useful content?

The only action we detected for that XID was the commit itself.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Documentation for SET var_name FROM CURRENT

2013-10-01 Thread Amit Kapila
On Tue, Oct 1, 2013 at 6:27 PM, David Johnston pol...@yahoo.com wrote:
 Amit Kapila-2 wrote
 On Tue, Oct 1, 2013 at 10:25 AM, David Johnston <polobo@> wrote:
 Amit Kapila-2 wrote
 While reading documentation for SET command, I observed that FROM
 CURRENT syntax and its description is missing from SET command's
 syntax page (http://www.postgresql.org/docs/devel/static/sql-set.html).

 Do you think that documentation should be updated for the same or is
 there any reason why it is not documented?

 It is documented as part of CREATE FUNCTION since its use is only valid
 in
 that context.

Not only as part of CREATE FUNCTION, but as part of ALTER
 DATABASE, ALTER ROLE, ALTER FUNCTION syntax as well. In all these
 places other syntax of SET is also used and described.
 I think you are right that syntax SET .. FROM CURRENT is mainly used
 in context with few other SQL statements, but as it is part of SET
 command, so isn't it better to mention the syntax on SET page and may
 be explain a bit about its usage?

 The paragraph with the link to CREATE FUNCTION seems
 sufficient to notify and direct people to the needed description for
 this.

After explaining the usage in short, maybe we can provide links to
 all other statements where it can be used, but I think syntax SET ..
 FROM CURRENT should be there with SET command's other syntax.

 FROM CURRENT is only valid as part of the SET sub-command attached to CREATE
 FUNCTION.

 Yes, a number of SQL commands take a sub-command called SET.  These are not
 the same as the Top-level SQL SET command and have their own rules and
 syntax defined on the parent command's page.  They share a key word to make
 the grammar and usage saner but these are semantically different statements.

 A paragraph cross-referencing where SET sub-commands exist has merit but
 since the main SET command does not accept FROM CURRENT it (FC) should not
 be included in its page directly.

   Do you mean to say that SET var_name FROM CURRENT doesn't work,
so we don't need it on the main page?



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] SSL renegotiation

2013-10-01 Thread Robert Haas
On Tue, Oct 1, 2013 at 9:16 AM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
 Since back branches releases are getting closer, I would like to push
 this to all supported branches.  To avoid a compatibility nightmare in
 case the new die-on-delayed-renegotiation behavior turns out not to be
 so great, I think it would be OK to set the error level to WARNING in
 all branches but master (and reset the byte count, to avoid filling the
 log).  I would also add a CONTEXT line with the current counter value
 and the configured limit, and a HINT to report to pg-hackers.  That way
 we will hopefully have more info on problems in the field.

 Anybody opposed to this?

Yes, warnings suck.  If things just failed, users would fix them, but
instead they fill up their hard disk, and then things fail much later,
usually when they are asleep in bed.

If we can't feel comfortable with an ERROR, let's not do it at all.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] SSL renegotiation

2013-10-01 Thread Magnus Hagander
On Tue, Oct 1, 2013 at 4:17 PM, Robert Haas robertmh...@gmail.com wrote:
 On Tue, Oct 1, 2013 at 9:16 AM, Alvaro Herrera alvhe...@2ndquadrant.com 
 wrote:
 Since back branches releases are getting closer, I would like to push
 this to all supported branches.  To avoid a compatibility nightmare in
 case the new die-on-delayed-renegotiation behavior turns out not to be
 so great, I think it would be OK to set the error level to WARNING in
 all branches but master (and reset the byte count, to avoid filling the
 log).  I would also add a CONTEXT line with the current counter value
 and the configured limit, and a HINT to report to pg-hackers.  That way
 we will hopefully have more info on problems in the field.

 Anybody opposed to this?

 Yes, warning suck.  If things just failed, users would fix them, but
 instead they fill up their hard disk, and then things fail much later,
 usually when they are asleep in bed.

 If we can't feel comfortable with an ERROR, let's not do it at all.

In principle, I agree.

However, if we want to do this as a temporary measure to judge impact,
we could do WARNING now and flip it to ERROR in the next minor
release.

However, do we think we'll actually *get* any reports of it if we
do that? As in, would we trust the input? If we do, then it might be
worth doing that. If we don't believe we'll get any input from the
WARNINGs anyway, we might as well flip it to an ERROR. But if we do
flip it to an ERROR, we should have a way to disable that error if, as
Alvaro puts it, we end up breaking too many things.


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] SSL renegotiation

2013-10-01 Thread Robert Haas
On Tue, Oct 1, 2013 at 10:19 AM, Magnus Hagander mag...@hagander.net wrote:
 If we can't feel comfortable with an ERROR, let's not do it at all.

 In principle, I agree.

 However, if we want to do this as a temporary measure to judge impact,
 we could do WARNING now and flip it to ERROR in the next minor
 release.

I can't imagine that whacking the behavior around multiple times is
going to please very many people.  And, from a practical standpoint,
the time between minor releases is typically on the order of ~3
months.  That's not very long.  The situations in which trouble occurs
are likely to be obscure, and a lot of users don't apply every minor
release, or don't apply them in a timely fashion.  So I can't see that
this strategy would increase our confidence very much anyway.  In
other words...

 However, do we think we'll actually *get* any reports in of it if we
 do that? As in would we trust the input?

...no.

   If we do, then it might be
 worth doing that. If we don't believe we'll get any input from the
 WARNINGs anyway, we might as well flip it to an ERROR. But if we do
 flip it to an ERROR, we should have a way to disable that error if, as
 Alvaro puts it, we end up breaking too many things.

A way of disabling the error seems like an awfully good idea.  Since I
know my audience, I won't suggest the obvious method of accomplishing
that goal, but I think we all know what it is.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Andres Freund
Hi,

On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
 - AssignTransactionId changes Mustn't to May not, which seems like
 an entirely pointless change.

It was Musn't before ;). I am not sure why I changed it to May not
instead of Mustn't.

 - Do none of the callers of IsSystemRelation() care about the fact
 that you've considerably changed the semantics?

Afaics no. I think the semantics are actually consistent until somebody
manually creates a relation in pg_catalog (using allow_...). And in that
case the new semantics actually seem more useful.

 - RelationIsDoingTimetravel is still a crappy name.  How about
 RelationRequiredForLogicalDecoding?  And maybe the reloption
 treat_as_catalog_table can become required_for_logical_decoding.

Fine with me.

 - I don't understand the comment in xl_heap_new_cid to the effect that
 the combocid isn't needed for decoding.  How not?

We don't use the combocid for anything - since we have the original
cmin/cmax, we can just use those and ignore the value of the combocid itself.

 - xlogreader.h includes an additional header with no other changes.
 Doesn't seem right.

Hm. I seem to remember having a reason for that, but for the heck can't
see it anymore...

 I have no problem with caching the primary key in the relcache, or
 with using that as the default key for logical decoding, but I'm
 extremely uncomfortable with the fallback strategy when no primary key
 exists.  Choosing any old unique index that happens to present itself
 as the primary key feels wrong to me.
 [stuff I don't disagree with]

People lobbied vigorously to allow candidate keys before. I personally
would never want to use anything but an actual primary key for
replication, but there's other usecases than replication.

I think it's going to be the domain of the replication solution to
enforce the presence of primary keys. I.e. they should (be able to) use
event triggers or somesuch to enforce it...

 All in all, it seems to me that we shouldn't try to punt.  Maybe we
 should have something that works like ALTER TABLE name CLUSTER ON
 index_name to configure which index should be used for logical
 replication.  Possibly this same syntax could be used as ALTER
 MATERIALIZED VIEW to set the candidate key for that case.

I'd be fine with that, but I am also not particularly interested in it
because I personally don't see much of a usecase.
For replication ISTM the only case where there would be no primary key
is a) initial load b) replacing the primary key by another index.

 What happens if new unique indexes are created or old ones dropped
 while logical replication is running?

Should just work, but I'll make sure the tests cover this.

The output plugin needs to lookup the current index used, and it will
use a consistent syscache state and thus will find the same index.
In bdr the output plugin simply includes the name of the index used in
the replication stream to make sure things are somewhat consistent.

Will fix or think about the rest.

Thanks,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] SSI freezing bug

2013-10-01 Thread Kevin Grittner
Andres Freund and...@2ndquadrant.com wrote:

 A better solution probably is to promote tuple-level locks if
 they exist to a relation level one upon freezing I guess?

It would be sufficient to promote the tuple lock to a page lock.
It would be pretty easy to add a function to predicate.c which
would accept a Relation and HeapTuple, check for a predicate lock
for the tuple, and add a page lock if found (which will
automatically clear the tuple lock).  This new function would be
called when a tuple was chosen for freezing.  Since freezing always
causes WAL-logging and disk I/O, the cost of a couple hash table
operations should not be noticeable.
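
As a toy model of that promotion (all names hypothetical; the real
implementation would live in predicate.c's shared lock hash tables),
freezing a tuple widens any matching tuple-granularity lock to page
granularity, which automatically subsumes the tuple lock:

```c
#include <stdbool.h>

/* Toy predicate-lock table: each entry covers either one tuple or the
 * whole page containing it. */
typedef enum { LOCK_TUPLE, LOCK_PAGE } LockLevel;

typedef struct
{
    int       page;     /* page the lock covers */
    int       tuple;    /* tuple offset; meaningful only for LOCK_TUPLE */
    LockLevel level;
} PredLock;

#define MAX_LOCKS 16
static PredLock locks[MAX_LOCKS];
static int      nlocks = 0;

void lock_tuple(int page, int tuple)
{
    locks[nlocks++] = (PredLock) {page, tuple, LOCK_TUPLE};
}

/* Called when a tuple is about to be frozen: widen any predicate lock on
 * that tuple to a page lock, so the lock survives the xmin rewrite.
 * Returns true if a promotion happened. */
bool promote_tuple_lock_to_page(int page, int tuple)
{
    bool promoted = false;

    for (int i = 0; i < nlocks; i++)
    {
        if (locks[i].level == LOCK_TUPLE &&
            locks[i].page == page && locks[i].tuple == tuple)
        {
            locks[i].level = LOCK_PAGE;   /* tuple lock absorbed */
            promoted = true;
        }
    }
    return promoted;
}
```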

This seems like a bug fix which should be back-patched to 9.1, yes?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] SSI freezing bug

2013-10-01 Thread Andres Freund
On 2013-10-01 07:41:46 -0700, Kevin Grittner wrote:
 Andres Freund and...@2ndquadrant.com wrote:
 
  A better solution probably is to promote tuple-level locks if
  they exist to a relation level one upon freezing I guess?
 
 It would be sufficient to promote the tuple lock to a page lock.
 It would be pretty easy to add a function to predicate.c which
 would accept a Relation and HeapTuple, check for a predicate lock
 for the tuple, and add a page lock if found (which will
 automatically clear the tuple lock).  This new function would be
 called when a tuple was chosen for freezing.  Since freezing always
 causes WAL-logging and disk I/O, the cost of a couple hash table
 operations should not be noticeable.

Yea, not sure why I was thinking of table level locks.

 This seems like a bug fix which should be back-patched to 9.1, yes?

Yes.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Documentation for SET var_name FROM CURRENT

2013-10-01 Thread David Johnston
David Johnston wrote
 A paragraph cross-referencing where SET sub-commands exist has merit but
 since the main SET command does not accept FROM CURRENT it (FC) should not
 be included in its page directly.

It is strange that this actually does work - at least in 9.0 - given that
SET ... FROM CURRENT does not seem to have any usage outside of its
interaction as part of the CREATE FUNCTION command.

Is there some use-case I am not seeing?

Since the command works I would agree that it should be provided in the
syntax section for SET and that a comment be added that says generally
that its presence is an historical artifact and has no real use as part of
the top-level command.  Its intended use is in conjunction with the CREATE
FUNCTION command.  Alternative wordings to describe uses I am not seeing are
good too.

David J.








Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Robert Haas
On Tue, Oct 1, 2013 at 10:31 AM, Andres Freund and...@2ndquadrant.com wrote:
 I have no problem with caching the primary key in the relcache, or
 with using that as the default key for logical decoding, but I'm
 extremely uncomfortable with the fallback strategy when no primary key
 exists.  Choosing any old unique index that happens to present itself
 as the primary key feels wrong to me.
 [stuff I don't disagree with]

 People lobbied vigorously to allow candidate keys before. I personally
 would never want to use anything but an actual primary key for
 replication, but there's other usecases than replication.

I like allowing candidate keys; I just don't like assuming that any
old one we select will be as good as any other.

 All in all, it seems to me that we shouldn't try to punt.  Maybe we
 should have something that works like ALTER TABLE name CLUSTER ON
 index_name to configure which index should be used for logical
 replication.  Possibly this same syntax could be used as ALTER
 MATERIALIZED VIEW to set the candidate key for that case.

 I'd be fine with that, but I am also not particularly interested in it
 because I personally don't see much of a usecase.
 For replication ISTM the only case where there would be no primary key
 is a) initial load b) replacing the primary key by another index.

The latter is the case I'd be principally concerned about.  I once had
to change the columns that formed the key for a table being used in a
production web application; fortunately, it has traditionally not
mattered much whether a unique index is the primary key, so creating a
new unique index and dropping the old primary key was good enough.
But I would have wanted to control the point at which we changed our
notion of what the candidate key was, I think.

One other thought: you could just log the whole old tuple if there's
no key available.  That would let this work on tables that don't have
indexes.  Replaying the changes might be horribly complex and slow,
but extracting them would work.  If a replication plugin got old
tuple, new tuple with no information on keys, it could find *a* tuple
(not all tuples) that matches the old tuple exactly and update each
column to the value from new tuple.  From a correctness point of view,
there's no issue there; it's all about efficiency.  But the user can
solve that problem whenever they like by indexing the destination
table.  It need not even be a unique index, so long as it's reasonably
selective.
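
A sketch of that replay strategy on a plain in-memory table (hypothetical
Row type and function names, not the proposed decoding API), using Robert's
example values: match the whole old tuple, overwrite exactly one row.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical three-column row. */
typedef struct { int a, b, c; } Row;

/* Apply an UPDATE carried as (old tuple, new tuple) with no key info:
 * find *a* row equal to the old tuple and overwrite exactly one.
 * Correct, but O(n) per change unless the destination is indexed. */
bool apply_keyless_update(Row *table, int nrows, Row oldtup, Row newtup)
{
    for (int i = 0; i < nrows; i++)
    {
        if (memcmp(&table[i], &oldtup, sizeof(Row)) == 0)
        {
            table[i] = newtup;    /* update one matching row, not all */
            return true;
        }
    }
    return false;                 /* no match: surface as a conflict */
}
```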

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] C question about bitmasks in datetime.c

2013-10-01 Thread Bruce Momjian
I see a few cases of this code in src/backend/utils/adt/datetime.c:

else if ((fmask & DTK_DATE_M) != DTK_DATE_M)

Wouldn't this be clearer as:

else if (fmask & DTK_DATE_M)

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] C question about bitmasks in datetime.c

2013-10-01 Thread Andres Freund
On 2013-10-01 11:15:36 -0400, Bruce Momjian wrote:
 I see a few cases of this code in src/backend/utils/adt/datetime.c:
 
 else if ((fmask & DTK_DATE_M) != DTK_DATE_M)
 
 Wouldn't this be clearer as:
 
 else if (fmask & DTK_DATE_M)

That doesn't have the same meaning. The latter is true if only one bit of
DTK_DATE_M is set, while the former requires all bits to be set.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Documentation for SET var_name FROM CURRENT

2013-10-01 Thread Alvaro Herrera
Amit Kapila escribió:
 While reading documentation for SET command, I observed that FROM
 CURRENT syntax and its description is missing from SET command's
 syntax page (http://www.postgresql.org/docs/devel/static/sql-set.html).
 
 Do you think that documentation should be updated for the same or is
 there any reason why it is not documented?

Did you look at the commit message that introduced it?

commit e7889b83b7059e776f0a3d76bbbdd98687f4592c
Author: Tom Lane t...@sss.pgh.pa.us
Date:   Mon Sep 3 18:46:30 2007 +

Support SET FROM CURRENT in CREATE/ALTER FUNCTION, ALTER DATABASE, ALTER 
ROLE.
(Actually, it works as a plain statement too, but I didn't document that
because it seems a bit useless.)  Unify VariableResetStmt with
VariableSetStmt, and clean up some ancient cruft in the representation of
same.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] C question about bitmasks in datetime.c

2013-10-01 Thread Bruce Momjian
On Tue, Oct  1, 2013 at 05:17:35PM +0200, Andres Freund wrote:
 On 2013-10-01 11:15:36 -0400, Bruce Momjian wrote:
  I see a few cases of this code in src/backend/utils/adt/datetime.c:
  
  else if ((fmask & DTK_DATE_M) != DTK_DATE_M)
  
  Wouldn't this be clearer as:
  
  else if (fmask & DTK_DATE_M)
 
 That doesn't have the same meaning. The latter is true if only one bit of
 DTK_DATE_M is set, while the former requires all bits to be set.

Oh, I see it now --- DTK_DATE_M is not a bit but rather a set of bits,
and they are testing if _all_ are set.  Thank you!

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] logical changeset generation v6.1

2013-10-01 Thread Andres Freund
On 2013-10-01 10:07:19 -0400, Robert Haas wrote:
 - It seems that HeapSatisfiesHOTandKeyUpdate is now
 HeapSatisfiesHOTandKeyandCandidateKeyUpdate.  Considering I think this
 was merely HeapSatisfiesHOTUpdate a year ago, it's hard not to be
 afraid that something unscalable is happening to this function.  On a
 related node, any overhead added here costs broadly; I'm not sure if
 there's enough to worry about.

Ok, I had to think a bit, but now I remember why I think these changes
are not really problem: Neither the addition of keys nor candidate keys
will add any additional comparisons since the columns compared for
candidate keys are a subset of the set of key columns which in turn are a
subset of the columns checked for HOT. Right?
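The subset argument can be restated as a small sketch; the set representation and the function name are mine, not the actual C implementation, which compares attribute values in a single pass:

```python
def classify_update(modified, hot_cols, key_cols, ckey_cols):
    """One pass over the widest column set answers all three questions."""
    # Invariant the argument relies on:
    # candidate-key cols <= key cols <= HOT-checked cols
    assert ckey_cols <= key_cols <= hot_cols
    satisfies_hot = not (modified & hot_cols)   # no indexed column changed
    key_unchanged = not (modified & key_cols)
    ckey_unchanged = not (modified & ckey_cols)
    return satisfies_hot, key_unchanged, ckey_unchanged

# Updating a non-key column touches none of the tracked sets:
print(classify_update({"note"}, {"id", "email"}, {"id", "email"}, {"id"}))
# → (True, True, True)
```

Because each set is contained in the next, no column is ever compared more than once, which is the point Andres is making.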

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-10-01 Thread Heikki Linnakangas
I've spent some time reviewing this patch - looks pretty good! I'm not 
through yet, but I wanted to post an update. Attached is a new version, 
with some modifications I made. Notably:


I added a new struct to hold the per-function executor state - tupdesc, 
tuplestore, rowcount and slot - instead of the lists that need to be 
kept in sync. This is more readable.


I replaced the CreateTupleDescCopyMany() function with a function called 
TupleDescCopyEntry(), which initializes a single attribute like 
TupleDescInitEntry(), but copies the information from another tupledesc. 
It is used in a loop to construct the composite tupledesc in the
multiple-function or ordinality case. This is more flexible; there is no need
to create the dummy single-attribute TupleDesc for ordinality anymore, for
example.


I refactored the grammar a bit; the way func_table rule returned a one- 
or two-element list to essentially pass up a flag was an ugly hack.


Below are a couple of more comments:

On 13.09.2013 22:03, Andrew Gierth wrote:

***
*** 3529,3534  static Expr *
--- 3539,3545 
  simplify_function(Oid funcid, Oid result_type, int32 result_typmod,
  Oid result_collid, Oid input_collid, List 
**args_p,
  bool funcvariadic, bool process_args, bool 
allow_non_const,
+ FuncExpr *orig_funcexpr,
  eval_const_expressions_context *context)
  {
List   *args = *args_p;


The new argument needs to be explained in the comment above.


! <para>
!  The special table function <literal>UNNEST</literal> may be called with
!  any number of array parameters, and returns a corresponding number of
!  columns, as if <literal>UNNEST</literal>
!  (see <xref linkend="functions-array">) had been called on each parameter
!  separately and combined using the <literal>TABLE</literal> construct. The
!  number of result columns is determined by the sum of the arities of the
!  array element types; that is to say, any array of composite type is
!  expanded into separate result columns for each field of the type.
!  Likewise, the number of rows returned is determined by the largest array
!  parameter, with smaller values padded with NULLs.
! </para>


Arities, really? :-). Please reword that into more plain English.
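The row-padding behavior that paragraph describes can be sketched in Python; composite expansion is omitted for brevity, and `unnest` here is only an illustration, not the implementation:

```python
from itertools import zip_longest

def unnest(*arrays):
    """Multi-argument UNNEST sketch: one output row per element of the
    longest input array, with shorter arrays padded out with None (NULL)."""
    return list(zip_longest(*arrays))

print(unnest([1, 2, 3], ["a"]))  # → [(1, 'a'), (2, None), (3, None)]
```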

I'm going to put this aside for a day or two now, but will get back to 
it later to finish the review and commit (unless someone beats me to 
it). Meanwhile, if you could do something about that comment and manual 
paragraph above, and re-review the changes I made, that would be great.


- Heikki


tablefunc-20131001-heikki-1.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] UNNEST with multiple args, and TABLE with multiple funcs

2013-10-01 Thread Andrew Gierth
>>>>> "Heikki" == Heikki Linnakangas hlinnakan...@vmware.com writes:

 Heikki> I've spent some time reviewing this patch - looks pretty
 Heikki> good! I'm not through yet, but I wanted to post an
 Heikki> update. Attached is a new version, with some modifications I
 Heikki> made. Notably:

 Heikki> I refactored the grammar a bit

And broke the ability to do  TABLE(unnest(a,b,c), otherfunc(d)).

Yes, you can still do TABLE(unnest(a), unnest(b), unnest(c), otherfunc(d))
but I view this as a significant loss of functionality; I will see about
fixing that.

I am still looking at the other changes.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] logical changeset generation v6.2

2013-10-01 Thread Steve Singer

On 09/30/2013 06:44 PM, Andres Freund wrote:

Hi,

The series from Friday was a bit too buggy - obviously I was too
tired. So here's a new one:


With this series I've also noticed
#2  0x007741a7 in ExceptionalCondition (
conditionName=conditionName@entry=0x7c2908 !(!(tuple->t_infomask & 
0x1000)), errorType=errorType@entry=0x7acc70 FailedAssertion,

fileName=fileName@entry=0x91767e tqual.c,
lineNumber=lineNumber@entry=1608) at assert.c:54
54              abort();


#3  0x007a4432 in HeapTupleSatisfiesMVCCDuringDecoding (
htup=0x10bfe48, snapshot=0x108b3d8, buffer=310) at tqual.c:1608
#4  0x0049d6b7 in heap_hot_search_buffer (tid=tid@entry=0x10bfe4c,
relation=0x7fbebbcd89c0, buffer=310, snapshot=0x10bfda0,
heapTuple=heapTuple@entry=0x10bfe48,
all_dead=all_dead@entry=0x7fff4aa3866f \001\370\375\v\001,
first_call=1 '\001') at heapam.c:1756
#5  0x004a8174 in index_fetch_heap (scan=scan@entry=0x10bfdf8)
at indexam.c:539
#6  0x004a82a8 in index_getnext (scan=0x10bfdf8,
direction=direction@entry=ForwardScanDirection) at indexam.c:622
#7  0x004a6fa9 in systable_getnext (sysscan=sysscan@entry=0x10bfd48)
at genam.c:343
#8  0x0076df40 in RelidByRelfilenode (reltablespace=0,
relfilenode=529775) at relfilenodemap.c:214
#9  0x00664ad7 in ReorderBufferCommit (rb=0x1082d98,
xid=optimized out, commit_lsn=4638756800, end_lsn=optimized out,
commit_time=commit_time@entry=433970378426176) at reorderbuffer.c:1320

In addition to some of the other ones I've posted about.



* fix pg_recvlogical makefile (Thanks Steve)
* fix two commits not compiling properly without later changes (Thanks Kevin)
* keep track of commit timestamps
* fix bugs with option passing in test_logical_decoding
* actually parse option values in test_decoding instead of just using the
   option name
* don't use anonymous structs in unions. That's compiler specific (msvc
   and gcc) before C11 on which we can't rely. That unfortunately will
   break output plugins because ReorderBufferChange needs to qualify
   old/new tuples now
* improve error handling/cleanup in test_logical_decoding
* some minor cleanups

Patches attached, git tree updated.

Greetings,

Andres Freund









Re: [HACKERS] SSL renegotiation

2013-10-01 Thread Andres Freund
On 2013-10-01 10:27:14 -0400, Robert Haas wrote:
 On Tue, Oct 1, 2013 at 10:19 AM, Magnus Hagander mag...@hagander.net wrote:
  If we can't feel comfortable with an ERROR, let's not do it at all.
 
  In principle, I agree.
 
  However, if we want to do this as a temporary measure to judge impact,
  we could do WARNING now and flip it to ERROR in the next minor
  release.
 
 I can't imagine that whacking the behavior around multiple times is
 going to please very many people.

If we have to whack it around, it's because there's a bug in either our
usage of openssl or in openssl itself. Neither is particularly unlikely,
but it's not like not erroring or warning out will stop that.

The alternate reason for getting the WARNING/ERROR is that there is
somebody MITMing the connection and explicitly corrupting renegotiation
packets. But that's a reason for making it an ERROR immediately, not
making it silent. I think from the security POV it's pretty clear that we
should make it an error.

So, imo the question is why are we uncomfortable making it an ERROR
immediately? I think the most likely reason for problems is users having
configured ssl_renegotiation_limit absurdly low, like less than what the
particular algorithms used actually need. What about clamping it to 1MB
if != 0? We can't make it an actual error in the backbranches...

   If we do, then it might be
  worth doing that. If we don't believe we'll get any input from the
  WARNINGs anyway, we might as well flip it to an ERROR. But if we do
  flip it to an ERROR, we should have a way to disable that error if, as
  Alvaro puts it, we end up breaking too many things.
 
 A way of disabling the error seems like an awfully good idea.  Since I
 know my audience, I won't suggest the obvious method of accomplishing
 that goal, but I think we all know what it is.

In which scenario would you want to do that? The way to prevent the
ERROR it is to disable renegotiation entirely. And that's already
possible. Anything else seems like papering over security issues.

I guess I am voting for making renegotiation failure an ERROR everywhere
and silently clamping renegotiation_limit to a lower bound of 1MB in the
backbranches, while making values in the (0, 1MB) range an error in HEAD.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Support for REINDEX CONCURRENTLY

2013-10-01 Thread Alvaro Herrera
Michael Paquier escribió:

 Btw, taking the problem from another viewpoint... This feature now has
 3 patches, the first 2 doing only code refactoring. Could it
 be possible to have a look at those ones first? Straight-forward
 things should go first, simplifying the core feature evaluation.

I have pushed the first half of the first patch for now, revising it
somewhat: I renamed the functions and put them in lmgr.c instead of
procarray.c.

I think the second half of that first patch (WaitForOldSnapshots) should
be in index.c, not procarray.c either.  I didn't look at the actual code
in there.

I already shipped Michael fixed versions of the remaining patches
adjusting them to the changed API.  I expect him to post them here.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] error out when building pg_xlogdump with pgxs

2013-10-01 Thread Alvaro Herrera
Cédric Villemain wrote:

 Andres, I was answering your question.
 Short and re-phrased:
  * we should not abuse make USE_PGXS to test the contrib build
  * I believe your patch is correct to issue an error when trying to build 
 pg_xlogdump with PGXS; it is not possible, period.

There being no disagreement, I have pushed the original patch.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-10-01 Thread Daniel Farina
On Tue, Oct 1, 2013 at 5:32 AM, Sameer Thakur samthaku...@gmail.com wrote:
 On Tue, Oct 1, 2013 at 12:48 AM, Daniel Farina-5 [via PostgreSQL]
 [hidden email] wrote:


 On Sep 30, 2013 4:39 AM, Sameer Thakur [hidden email] wrote:

  Also, for onlookers, I have changed this patch around to do the
  date-oriented stuff but want to look it over before stapling it up and
  sending it.  If one cannot wait, one can look at
  https://github.com/fdr/postgres/tree/queryid.  The squashed-version of
  that history contains a reasonable patch I think, but a re-read often
  finds something for me and I've only just completed it yesterday.
 

 I did the following
 1. Forked from fdr/postgres
 2. cloned branch queryid
 3. squashed
 22899c802571a57cfaf0df38e6c5c366b5430c74
 d813096e29049667151a49fc5e5cf3d6bbe55702
 picked
 be2671a4a6aa355c5e8ae646210e6c8e0b84ecb5
 4. usual make/make install/create extension pg_stat_statements.
 (pg_stat_statements.max=100).
 5. select * from pg_stat_statements_reset(), select * from
 pgbench_tellers.
 result below:

 userid | dbid  |  session_start   |introduced
|   query   |  query_id
   | calls | total_time |
  rows | shared_blks_hit | shared_blks_read | shared_blks_dirtied |
 shared_blks_written | local_blks_hit | local_blks_read |
 local_blks_dirtied | local_blks_written | t
 emp_blks_read | temp_blks_written | blk_read_time | blk_write_time


 +---+--+---+---+-+---++


 --+-+--+-+-++-+++--
 --+---+---+
  10 | 12900 | 2013-09-30 16:55:22.285113+05:30 | 1970-01-01
 05:30:00+05:30 | select * from pg_stat_statements_reset(); |
 2531907647060518039 | 1 |  0 |
 1 |   0 |0 |   0 |
   0 |  0 |   0 |
 0 |  0 |
 0 | 0 | 0 |  0
  10 | 12900 | 2013-09-30 16:55:22.285113+05:30 | 1970-01-01
 05:30:00+05:30 | select * from pgbench_tellers ;   |
 7580333025384382649 | 1 |  0 |
10 |   1 |0 |   0 |
   0 |  0 |   0 |
 0 |  0 |
 0 | 0 | 0 |  0
 (2 rows)


 I understand session_start and verified that it changes with each
 database restart to reflect current time.

 It should only restart when the statistics file cannot be loaded.

 This seems to work fine.
 1. Started the instance
 2. Executed pg_stat_statements_reset(), select * from
 pgbench_history,select* from pgbench_tellers. Got the following in
 pg_stat_statements view
 userid | dbid  |  session_start   |
 introduced|   query   |
query_id   | calls | tota
 l_time | rows | shared_blks_hit | shared_blks_read |
 shared_blks_dirtied | shared_blks_written | local_blks_hit |
 local_blks_read | local_blks_dirtied | local_blks_wri
 tten | temp_blks_read | temp_blks_written | blk_read_time | blk_write_time
 +---+--+--+---+--+---+-
 ---+--+-+--+-+-++-++---
 -++---+---+
  10 | 12900 | 2013-10-01 17:43:26.667074+05:30 | 2013-10-01
 17:43:43.724301+05:30 | select * from pgbench_history;|
 -165801328395488047 | 1 |
  0 |0 |   0 |0 |
 0 |   0 |  0 |   0 |
0 |
0 |  0 | 0 | 0 |  0
  10 | 12900 | 2013-10-01 17:43:26.667074+05:30 | 2013-10-01
 17:43:37.379785+05:30 | select * from pgbench_tellers;|
 8376871363863945311 | 1 |
  0 |   10 |   0 |1 |
 0 |   0 |  0 |   0 |
0 |
0 |  0 | 0 | 0 |  0
  10 | 12900 | 2013-10-01 17:43:26.667074+05:30 | 2013-10-01
 17:43:26.667178+05:30 | select * from pg_stat_statements_reset(); |
 -1061018443194138344 | 1 |
  0 |1 |   0 |0 |
 0 |   0 |  0 |   0 |
0 |
0 |  0 | 0 | 0 |  0
 (3 rows)

 Then restarted the server and 

Re: [HACKERS] 9.4 HEAD: select() failed in postmaster

2013-10-01 Thread Alvaro Herrera
MauMau escribió:

 AbortStartTime > 0 is also necessary to avoid sending SIGKILL
 repeatedly. I sent the attached patch during the original
 discussion.  The below fragment is relevant:

Can you please send a fixup patch to what's already committed?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] docbook-xsl version for release builds

2013-10-01 Thread Alvaro Herrera
Magnus Hagander wrote:
 On Thu, Aug 22, 2013 at 8:13 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Peter Eisentraut pete...@gmx.net writes:
  On Fri, 2013-07-12 at 12:30 +0200, Magnus Hagander wrote:
  Given that, I'm fine with just bumping the version on borka to that
  version. Any objections?
 
  This was not done for 9.3rc1, AFAICT.  Let's please do it for the next
  release builds.
 
  Um ... touching borka's toolchain post-rc1 sure sounds like a recipe
  for making ourselves look like idiots in a high-profile release.
  Wouldn't it be better to wait till after 9.3.0?
 
 I agree that doing it after the RC is a bad idea. We should probably
 try to do it more or less directly after the release though, so we
 (I..) don't forget it again...

Did we get around to doing this?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] [PATCH] pg_upgrade: support for btrfs copy-on-write clones

2013-10-01 Thread Oskari Saarenmaa
Add file cloning as an alternative data transfer method to pg_upgrade.
Currently only btrfs is supported, but copy-on-write cloning is also
available on at least ZFS.  Cloning must be requested explicitly and if
it isn't supported by the operating system or filesystem a fatal error
is thrown.

This provides upgrade performance similar to link mode while allowing
the old cluster to be used even after the new one has been started.

Signed-off-by: Oskari Saarenmaa o...@ohmu.fi
---
 configure|   5 +-
 configure.in |   7 ++-
 contrib/pg_upgrade/check.c   |   3 +
 contrib/pg_upgrade/file.c| 125 +--
 contrib/pg_upgrade/option.c  |   7 +++
 contrib/pg_upgrade/pg_upgrade.h  |  13 ++--
 contrib/pg_upgrade/relfilenode.c |  31 --
 doc/src/sgml/pgupgrade.sgml  |   7 +++
 src/include/pg_config.h.in   |   3 +
 9 files changed, 141 insertions(+), 60 deletions(-)

diff --git a/configure b/configure
index c685ca3..5087463 100755
--- a/configure
+++ b/configure
@@ -10351,7 +10351,10 @@ done
 
 
 
-for ac_header in crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h 
langinfo.h poll.h pwd.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h 
sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h 
sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h
+for ac_header in crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h 
langinfo.h \
+linux/btrfs.h poll.h pwd.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h \
+sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h \
+sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h
 do
 as_ac_Header=`$as_echo ac_cv_header_$ac_header | $as_tr_sh`
 if { as_var=$as_ac_Header; eval test \\${$as_var+set}\ = set; }; then
diff --git a/configure.in b/configure.in
index 82771bd..609aa73 100644
--- a/configure.in
+++ b/configure.in
@@ -982,7 +982,12 @@ AC_SUBST(OSSP_UUID_LIBS)
 ##
 
 dnl sys/socket.h is required by AC_FUNC_ACCEPT_ARGTYPES
-AC_CHECK_HEADERS([crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h 
langinfo.h poll.h pwd.h sys/ioctl.h sys/ipc.h sys/poll.h sys/pstat.h 
sys/resource.h sys/select.h sys/sem.h sys/shm.h sys/socket.h sys/sockio.h 
sys/tas.h sys/time.h sys/un.h termios.h ucred.h utime.h wchar.h wctype.h])
+AC_CHECK_HEADERS([crypt.h dld.h fp_class.h getopt.h ieeefp.h ifaddrs.h \
+  langinfo.h linux/btrfs.h poll.h pwd.h sys/ioctl.h \
+  sys/ipc.h sys/poll.h sys/pstat.h sys/resource.h \
+  sys/select.h sys/sem.h sys/shm.h sys/socket.h \
+  sys/sockio.h sys/tas.h sys/time.h sys/un.h termios.h \
+  ucred.h utime.h wchar.h wctype.h])
 
 # On BSD, test for net/if.h will fail unless sys/socket.h
 # is included first.
diff --git a/contrib/pg_upgrade/check.c b/contrib/pg_upgrade/check.c
index 0376fcb..2a52dd8 100644
--- a/contrib/pg_upgrade/check.c
+++ b/contrib/pg_upgrade/check.c
@@ -151,6 +151,9 @@ check_new_cluster(void)
if (user_opts.transfer_mode == TRANSFER_MODE_LINK)
check_hard_link();
 
+   if (user_opts.transfer_mode == TRANSFER_MODE_CLONE)
+   check_clone_file();
+
check_is_super_user(new_cluster);
 
/*
diff --git a/contrib/pg_upgrade/file.c b/contrib/pg_upgrade/file.c
index dfeb79f..fc935b7 100644
--- a/contrib/pg_upgrade/file.c
+++ b/contrib/pg_upgrade/file.c
@@ -8,11 +8,16 @@
  */
 
 #include postgres_fe.h
+#include pg_config.h
 
 #include pg_upgrade.h
 
 #include fcntl.h
 
+#ifdef HAVE_LINUX_BTRFS_H
+# include sys/ioctl.h
+# include linux/btrfs.h
+#endif
 
 
 #ifndef WIN32
@@ -23,21 +28,42 @@ static int  win32_pghardlink(const char *src, const char 
*dst);
 
 
 /*
- * copyAndUpdateFile()
+ * upgradeFile()
  *
- * Copies a relation file from src to dst.  If pageConverter is non-NULL, 
this function
- * uses that pageConverter to do a page-by-page conversion.
+ * Transfer a relation file from src to dst using one of the supported
+ * methods.  If the on-disk format of the new cluster is bit-for-bit
+ * compatible with the on-disk format of the old cluster we can simply link
+ * each relation to perform a true in-place upgrade.  Otherwise we must copy
+ * (either block-by-block or using a copy-on-write clone) the data from old
+ * cluster to new cluster and then perform the conversion.
  */
 const char *
-copyAndUpdateFile(pageCnvCtx *pageConverter,
- const char *src, const char *dst, bool force)
+upgradeFile(transferMode transfer_mode, const char *src,
+   const char *dst, pageCnvCtx *pageConverter)
 {
if (pageConverter == NULL)
{
-   if (pg_copy_file(src, dst, force) == -1)
-   return getErrorText(errno);
-   else
-   return NULL;
+   int rc = -1;
+
+   switch (transfer_mode)
+   {
+

[HACKERS] No Index-Only Scan on Partial Index

2013-10-01 Thread David E. Wheeler
Hackers,

I was trying to figure out why a query was not doing an index-only scan on a 
partial index, when Josh Berkus pointed to this issue, reported by Merlin 
Moncure:

  
http://www.postgresql.org/message-id/CAHyXU0x1OGao48WajAfUsbXqkUDLf=_6ho6hlmb8dsfkwda...@mail.gmail.com

In short, the planner needs the column from the where clause included in the 
index to decide it can do an index-only scan. This test case demonstrates the 
truth of this finding:

CREATE TABLE try (
id INT NOT NULL,
label  TEXT NOT NULL,
active BOOLEAN DEFAULT TRUE
);

INSERT INTO try
SELECT i
 ,  (ARRAY['foo','bar','baz','wig'])[floor((random()*4))::int + 1]
 , (i % 100) = 0
  FROM generate_series(1, 10) i;

VACUUM FREEZE TRY;

CREATE INDEX idx_try_active ON try(id) WHERE active;

-- Does a bitmap heap scan.
EXPLAIN (ANALYZE, FORMAT YAML)
SELECT id FROM try WHERE active;

DROP INDEX idx_try_active;
CREATE INDEX idx_try_active ON try(label, id, active) WHERE active;

-- Does an index-only scan.
EXPLAIN (ANALYZE, FORMAT YAML)
SELECT id FROM try WHERE active;

DROP TABLE try;

The first query does a bitmap heap scan, but after the index that includes the 
active column is added, it does an index-only scan.

However, this does not quite match my case. In my case, I'm using an immutable 
function call in the index where clause:

CREATE INDEX idx_try_active ON try(id, upper_inf(irange)) WHERE 
upper_inf(irange);

I am unable to get the planner do to an index-only scan with this index no 
matter what I do. Here’s the full test case:

CREATE TABLE try (
id INT   NOT NULL,
label  TEXT  NOT NULL,
irange INT4RANGE NOT NULL
);

INSERT INTO try
SELECT i
 ,  (ARRAY['foo','bar','baz','wig'])[floor((random()*4))::int + 1]
 , int4range(1, CASE WHEN random() < 0.01 THEN NULL ELSE 2 END)
  FROM generate_series(1, 10) i;

VACUUM FREEZE TRY;

CREATE INDEX idx_try_active ON try(id) WHERE upper_inf(irange);

-- Does a bitmap heap scan.
EXPLAIN (ANALYZE, FORMAT YAML)
SELECT id FROM try WHERE upper_inf(irange);

DROP INDEX idx_try_active;
CREATE INDEX idx_try_active ON try(label, id, upper_inf(irange)) WHERE 
upper_inf(irange);

-- Also does a bitmap heap scan.
EXPLAIN (ANALYZE, FORMAT YAML)
SELECT id FROM try WHERE upper_inf(irange);

DROP TABLE try;

So is there something about using a function in a conditional index that 
prevents index-only scans? Tested on 9.2 and 9.3, BTW.

Thanks,

David



Re: [HACKERS] No Index-Only Scan on Partial Index

2013-10-01 Thread Merlin Moncure
On Tue, Oct 1, 2013 at 5:35 PM, David E. Wheeler da...@justatheory.com wrote:
 Hackers,

 I was trying to figure out why a query was not doing an index-only scan on a 
 partial index, when Josh Berkus pointed to this issue, reported by Merlin 
 Moncure:

   
 http://www.postgresql.org/message-id/CAHyXU0x1OGao48WajAfUsbXqkUDLf=_6ho6hlmb8dsfkwda...@mail.gmail.com

 In short, the planner needs the column from the where clause included in the 
 index to decide it can do an index-only scan. This test case demonstrates the 
 truth of this finding:

 CREATE TABLE try (
 id INT NOT NULL,
 label  TEXT NOT NULL,
 active BOOLEAN DEFAULT TRUE
 );

 INSERT INTO try
 SELECT i
  ,  (ARRAY['foo','bar','baz','wig'])[floor((random()*4))::int + 1]
  , (i % 100) = 0
   FROM generate_series(1, 10) i;

 VACUUM FREEZE TRY;

 CREATE INDEX idx_try_active ON try(id) WHERE active;

 -- Does a bitmap heap scan.
 EXPLAIN (ANALYZE, FORMAT YAML)
 SELECT id FROM try WHERE active;

 DROP INDEX idx_try_active;
 CREATE INDEX idx_try_active ON try(label, id, active) WHERE active;

 -- Does an index-only scan.
 EXPLAIN (ANALYZE, FORMAT YAML)
 SELECT id FROM try WHERE active;

 DROP TABLE try;

 The first query does a bitmap heap scan, but after the index that includes 
 the active column is added, it does an index-only scan.

 However, this does not quite match my case. In my case, I'm using an 
 immutable function call in the index where clause:

 CREATE INDEX idx_try_active ON try(id, upper_inf(irange)) WHERE 
 upper_inf(irange);

 I am unable to get the planner do to an index-only scan with this index no 
 matter what I do. Here’s the full test case:

 CREATE TABLE try (
 id INT   NOT NULL,
 label  TEXT  NOT NULL,
 irange INT4RANGE NOT NULL
 );

 INSERT INTO try
 SELECT i
  ,  (ARRAY['foo','bar','baz','wig'])[floor((random()*4))::int + 1]
  , int4range(1, CASE WHEN random() < 0.01 THEN NULL ELSE 2 END)
   FROM generate_series(1, 10) i;

 VACUUM FREEZE TRY;

 CREATE INDEX idx_try_active ON try(id) WHERE upper_inf(irange);

 -- Does a bitmap heap scan.
 EXPLAIN (ANALYZE, FORMAT YAML)
 SELECT id FROM try WHERE upper_inf(irange);

 DROP INDEX idx_try_active;
 CREATE INDEX idx_try_active ON try(label, id, upper_inf(irange)) WHERE 
 upper_inf(irange);

 -- Also does a bitmap heap scan.
 EXPLAIN (ANALYZE, FORMAT YAML)
 SELECT id FROM try WHERE upper_inf(irange);

 DROP TABLE try;

 So is there something about using a function in a conditional index that 
 prevents index-only scans? Tested on 9.2 and 9.3, BTW.

I don't think it has anything to do with the conditional index -- it's
the function-based one.  For some reason postgres always wants to post
filter (note the filter step below):

postgres=# create index on try(upper_inf(irange));
CREATE INDEX
Time: 12.001 ms
postgres=# explain select * from try where upper_inf(irange);
  QUERY PLAN
---
 Index Scan using try_upper_inf_idx on try  (cost=0.00..9.25 rows=33 width=40)
   Index Cond: (upper_inf(irange) = true)
   Filter: upper_inf(irange)

merlin




Re: [HACKERS] No Index-Only Scan on Partial Index

2013-10-01 Thread David E. Wheeler
On Oct 1, 2013, at 3:56 PM, Merlin Moncure mmonc...@gmail.com wrote:

 I don't think it has anything to do with the conditional index -- it's
 the functional based.  For some reason postgres always wants to post
 filter (note the filter step below):
 
 postgres=# create index on try(upper_inf(irange));
 CREATE INDEX
 Time: 12.001 ms
 postgres=# explain select * from try where upper_inf(irange);
  QUERY PLAN
 ---
 Index Scan using try_upper_inf_idx on try  (cost=0.00..9.25 rows=33 width=40)
   Index Cond: (upper_inf(irange) = true)
   Filter: upper_inf(irange)

Hrm. I get a seq scan for that query:

create index on try(upper_inf(irange));
explain select * from try where upper_inf(irange);
QUERY PLAN 
---
 Seq Scan on try  (cost=0.00..1887.00 rows=3 width=68)
   Filter: upper_inf(irange)

True also if I just select the irange. Is the filter the issue, here?

Best,

David





Re: [HACKERS] insert throw error when year field len 4 for timestamptz datatype

2013-10-01 Thread Bruce Momjian
On Fri, Sep 27, 2013 at 10:42:17AM +, Haribabu kommi wrote:
 If the changes are very high to deal all scenarios,
 
 I feel it is better do it only in scenarios where the use cases needs it, 
 until
 it is not confusing users.
 
 The rest can be documented.
 
 Any other opinions/suggestions welcome.

I have reviewed this patch and it is good.  The problem is guessing if a
number with 5+ digits is YMD, HMS, or a year.  I have created a modified
patch, attached, which assumes a 5-digit number is a year, because YMD and HMS
require at least six digits, and used your date/time test to control the
other cases.  I also added a few more regression tests.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/src/backend/utils/adt/datetime.c b/src/backend/utils/adt/datetime.c
new file mode 100644
index f39353f..48bf3db
*** a/src/backend/utils/adt/datetime.c
--- b/src/backend/utils/adt/datetime.c
*** DecodeDateTime(char **field, int *ftype,
*** 1161,1167 
  		if (dterr < 0)
  			return dterr;
  	}
! 	else if (flen > 4)
  	{
  		dterr = DecodeNumberField(flen, field[i], fmask,
    tmask, tm,
--- 1161,1177 
  		if (dterr < 0)
  			return dterr;
  	}
! 	/*
! 	 * Is this a YMD or HMS specification, or a year number?
! 	 * YMD and HMS are required to be six digits or more, so
! 	 * if it is 5 digits, it is a year.  If it is six or more
! 	 * digits, we assume it is YMD or HMS unless both a date
! 	 * and a time value have already been specified.  This
! 	 * forces 6+ digit years to be at the end of the string,
! 	 * or to use the ISO date specification.
! 	 */
! 	else if (flen >= 6 && (!(fmask & DTK_DATE_M) ||
! 			 !(fmask & DTK_TIME_M)))
  	{
  		dterr = DecodeNumberField(flen, field[i], fmask,
    tmask, tm,
*** DecodeNumberField(int len, char *str, in
*** 2647,2675 
  	/* No decimal point and no complete date yet? */
  	else if ((fmask & DTK_DATE_M) != DTK_DATE_M)
  	{
! 		/* mmdd? */
! 		if (len == 8)
! 		{
! 			*tmask = DTK_DATE_M;
! 
! 			tm->tm_mday = atoi(str + 6);
! 			*(str + 6) = '\0';
! 			tm->tm_mon = atoi(str + 4);
! 			*(str + 4) = '\0';
! 			tm->tm_year = atoi(str + 0);
! 
! 			return DTK_DATE;
! 		}
! 		/* yymmdd? */
! 		else if (len == 6)
  		{
  			*tmask = DTK_DATE_M;
! 			tm->tm_mday = atoi(str + 4);
! 			*(str + 4) = '\0';
! 			tm->tm_mon = atoi(str + 2);
! 			*(str + 2) = '\0';
! 			tm->tm_year = atoi(str + 0);
! 			*is2digits = TRUE;
  
  			return DTK_DATE;
  		}
--- 2657,2676 
  	/* No decimal point and no complete date yet? */
  	else if ((fmask & DTK_DATE_M) != DTK_DATE_M)
  	{
! 		if (len >= 6)
  		{
  			*tmask = DTK_DATE_M;
! 			/*
! 			 * Start from the end: the last two digits are the Day,
! 			 * the two before those the Month, and the rest the Year.
! 			 */
! 			tm->tm_mday = atoi(str + (len - 2));
! 			*(str + (len - 2)) = '\0';
! 			tm->tm_mon = atoi(str + (len - 4));
! 			*(str + (len - 4)) = '\0';
! 			tm->tm_year = atoi(str);
! 			if ((len - 4) == 2)
! *is2digits = TRUE;
  
  			return DTK_DATE;
  		}
*** DecodeNumberField(int len, char *str, in
*** 2686,2692 
  			*(str + 4) = '\0';
  			tm->tm_min = atoi(str + 2);
  			*(str + 2) = '\0';
! 			tm->tm_hour = atoi(str + 0);
  
  			return DTK_TIME;
  		}
--- 2687,2693 
  			*(str + 4) = '\0';
  			tm->tm_min = atoi(str + 2);
  			*(str + 2) = '\0';
! 			tm->tm_hour = atoi(str);
  
  			return DTK_TIME;
  		}
*** DecodeNumberField(int len, char *str, in
*** 2697,2703 
  			tm->tm_sec = 0;
  			tm->tm_min = atoi(str + 2);
  			*(str + 2) = '\0';
! 			tm->tm_hour = atoi(str + 0);
  
  			return DTK_TIME;
  		}
--- 2698,2704 
  			tm->tm_sec = 0;
  			tm->tm_min = atoi(str + 2);
  			*(str + 2) = '\0';
! 			tm->tm_hour = atoi(str);
  
  			return DTK_TIME;
  		}
diff --git a/src/test/regress/expected/timestamptz.out b/src/test/regress/expected/timestamptz.out
new file mode 100644
index 6581b5e..9f4f7a4
*** a/src/test/regress/expected/timestamptz.out
--- b/src/test/regress/expected/timestamptz.out
*** SELECT '' AS to_char_11, to_char(d1, 'FM
*** 1675,1677 
--- 1675,1699 
  | 2001 1 1 1 1 1 1
  (66 rows)
  
+ CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);
+ -- Test year field value with len > 4
+ INSERT INTO TIMESTAMPTZ_TST VALUES(1, 'Sat Mar 12 23:58:48 1000 IST');
+ INSERT INTO TIMESTAMPTZ_TST VALUES(2, 'Sat Mar 12 23:58:48 1 IST');
+ INSERT INTO TIMESTAMPTZ_TST VALUES(3, 'Sat Mar 12 23:58:48 10 IST');
+ INSERT INTO TIMESTAMPTZ_TST VALUES(3, '1 Mar 12 23:58:48 IST');
+ INSERT INTO TIMESTAMPTZ_TST VALUES(4, '10312 23:58:48 IST');
+ INSERT INTO TIMESTAMPTZ_TST VALUES(4, '100312 23:58:48 IST');
+ --Verify data
+ SELECT * FROM TIMESTAMPTZ_TST ORDER BY a;
+  a |   b
+ 

Re: [HACKERS] pg_stat_statements: calls under-estimation propagation

2013-10-01 Thread Peter Geoghegan
On Sun, Sep 29, 2013 at 10:58 PM, Daniel Farina dan...@fdr.io wrote:
 I remember hacking that out for testing sake.

 I can only justify it as a foot-gun to prevent someone from being
 stuck restarting the database to get a reasonable number in there.
 Let's CC Peter; maybe he can remember some thoughts about that.

On reflection I think it was entirely to do with the testing of the
patch.  I don't think that the minimum value of pg_stat_statements has
any real significance.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Completing PL support for Event Triggers

2013-10-01 Thread Peter Eisentraut
On Fri, 2013-09-13 at 22:40 +0200, Dimitri Fontaine wrote:
 Please find attached to this email three patches covering the missing PL
 support for Event Triggers: pltcl, plperl and plpython.

For plperl, the previous reviews mostly apply analogously.  In addition,
I have these specific points:

- In plperl_event_trigger_build_args(), the hv_ksplit() call is probably
unnecessary.

- plperl_call_perl_event_trigger_func and plperl_call_perl_trigger_func
as well as plperl_event_trigger_handler and plperl_trigger_handler have
a lot of overlap and could perhaps be combined or refactored differently.

- In plperl_event_trigger_handler(), a point is being made about setting
up current_call_data before SPI_connect.  Other handler functions don't
do this, though.  It's not quite clear to me on first reading why it's
done differently here.

- Like I pointed out for the pltcl patch, calling trigger functions as
event triggers and vice versa is not detected in
compile_plperl_function, but I guess this can only happen if you
manually butcher the function after you have set up the trigger.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Revive line type

2013-10-01 Thread Peter Eisentraut
On Wed, 2013-09-25 at 14:26 +0530, Jeevan Chalke wrote:
 So no issues from my side.
 
 However, do we still need this in close_pl() ?
 
 #ifdef NOT_USED
 if (FPeq(line-A, -1.0)  FPzero(line-B))
 {/* vertical */
 }
 #endif

No, that can be removed.

 Also close_sl, close_lb and dist_lb are NOT yet implemented. It will
 be good if we have those. But I don't think we should wait for those
 functions to be implemented.

Right, those are separate projects.





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] information schema parameter_default implementation

2013-10-01 Thread Peter Eisentraut
On Wed, 2013-09-18 at 20:13 +0530, Amit Khandekar wrote:
 What's the reason behind calling pg_has_role(proowner, 'USAGE') before
 calling pg_get_function_arg_default() ? : 
 
  CASE WHEN pg_has_role(proowner, 'USAGE')
 THEN pg_get_function_arg_default(p_oid, (ss.x).n)
 ELSE NULL END 
 
 There is already a pg_has_role() filter added while fetching the
 pg_proc entries   :   FROM pg_namespace n, pg_proc p
 WHERE n.oid = p.pronamespace AND
 (pg_has_role(p.proowner, 'USAGE') OR
  has_function_privilege(p.oid, 'EXECUTE'))) AS ss 
 
 So the proc oid  in pg_get_function_arg_default(p_oid, (ss.x).n)
 belongs to a procedure for which the current user has USAGE
 privilege. 

No, the pg_proc entry belongs to a function for which the current user
is the owner *or* has EXECUTE privilege.  The default, however, is only
shown to the owner.  This is per SQL standard.




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] pg_upgrade: Split off pg_fatal() from pg_log()

2013-10-01 Thread Peter Eisentraut
On Sun, 2013-09-15 at 18:27 +0200, Marko Tiikkaja wrote:
 I think the reasoning behind this patch is sound.  However, I would like 
 to raise a couple of small questions:
 
1) Is there a reason for the fmt string not being const char*?  You 
 changed it for pg_log_v(), but not for pg_log() and pg_fatal().

Good catch.  I think I just left the existing code alone.  I'll fix it.

2) Allowing PG_FATAL to be passed to pg_log() seems weird, but I 
 don't feel strongly about that.

Yeah, it's a bit weird.  It's just because it all ends up in pg_log_v().
We could have pg_log() error about it, but that seems a bit excessive.
It's not a public API or anything.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pluggable compression support

2013-10-01 Thread Daniel Farina
On Mon, Sep 30, 2013 at 1:49 PM, Huchev hugochevr...@gmail.com wrote:
 How come any compressor that could give pglz some competition is
 systematically pushed out of the field on the grounds of unverifiable
 legal risks?

Because pglz has been around for a while and has not caused patent
trouble. The risks have been accepted and the downsides have not
materialized. Were pglz being written and distributed starting
today, perhaps your reasoning would be more compelling, but as-is the
pglz ship has sailed for quite some time and empirically it has not
been a problem.

That said, I hope the findings are in favor of lz4 or snappy
integration.  It does seem lz4 has picked up a slight edge.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] relscan_details.h

2013-10-01 Thread Bruce Momjian
On Tue, Sep 17, 2013 at 05:54:04PM -0300, Alvaro Herrera wrote:
  I don't want to be too dogmatic in opposing this; I accept that we
  should, from time to time, refactor things.  If we don't, superfluous
  dependencies will probably proliferate over time.  But personally, I'd
  rather do these changes in bulk every third release or so and keep
  them to a minimum in between times, so that we're not forcing people
  to constantly decorate their extensions with new #if directives.
 
 We can batch things, sure, but I don't think doing it only once every
 three years is the right tradeoff.  There's not all that much of this
 refactoring, after all.

Agreed, we should go ahead and make the changes.  People who use our
code for plugins are already going to have a boat-load of stuff to
change in each major release, and doing this isn't going to add a lot of
burden, and it will give us cleaner code for the future.

If we had not made massive cleanup changes years ago, our code would not
be as good as it is today.  By avoiding cleanup to reduce the burden on
people who use our code, we are positioning our code on a slow decline
in clarity.

What I don't want to do is get into a mode where every code cleanup has
to be verified that it isn't going to excessively burden outside code
users.  Cleanup is hard enough, and adding another check to that process
makes cleanup even less likely.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] relscan_details.h

2013-10-01 Thread Peter Geoghegan
On Tue, Sep 17, 2013 at 12:59 PM, Robert Haas robertmh...@gmail.com wrote:
 Personally, I'm not particularly in favor of these kinds of changes.

+1. Experience has shown this kind of effort to be a tarpit. It turns
out that refactoring away compiler dependencies has this kind of
fractal complexity - the more you look at it, the less sense it makes.

I would be in favor of this if there were compelling gains in either
compile time (and like others, I have a pretty high bar for compelling
here), or if the refactoring effort remedied a clear modularity
violation. Though I think it has to happen every once in a while, I'd
suggest about every 5 years as the right interval.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers