Re: FW: [HACKERS] Allow replacement of bloated primary key indexes without foreign key rebuilds

2012-07-13 Thread Amit Kapila
 

 From: Gurjeet Singh [mailto:singh.gurj...@gmail.com] 
 Sent: Friday, July 13, 2012 4:24 AM
On Tue, Jul 10, 2012 at 2:40 AM, Amit Kapila amit.kap...@huawei.com
wrote:

 Having to drop foreign key constraints before this command, and recreate
them afterwards, makes this command useless to most database setups. I feel
sorry that no one brought this up when we were implementing the feature;
maybe we could've done something about it right then.

 

Will it impact the user by blocking operations or something similar, or is it
just a usability issue?


 Yes, it will have to take an exclusive lock on the index, and possibly the
table too, but the operation should be too quick to even be noticeable under
low-load conditions.

Which index are you referring to here? Is it the primary key index on the table?

According to what I have debugged, the locks are taken on the foreign key
table, the constraint object, and the dependent triggers.



 However, if the x-lock has to wait for some other long-running query to
finish, then the lock-queuing logic in Postgres will make new queries wait
for this x-lock to be taken and released before they can begin processing.
This is my recollection of the logic from an old conversation; others can
weigh in to confirm.

 

 Syntax options:

 ALTER TABLE tbl REPLACE [CONSTRAINT constr] {PRIMARY KEY | UNIQUE} USING
INDEX new_index;

 ALTER INDEX ind REPLACE WITH new_index;

With this new syntax there will be two ways for users to replace an index;
won't it confuse users as to which syntax to use?

 

 

 Yes, I forgot to mention this in the original post. This feature will be a
superset of the feature we introduced in ALTER TABLE. I don't see a way
around that, except for slowly deprecating the older feature.

 

With the new implementation, there will be no need to perform any operation on
tables with foreign keys, which also reduces the lock time for them.

However, once REINDEX CONCURRENTLY is implemented, this feature will also need
to be deprecated. That might not happen soon, but I still feel it should be
considered whether providing the new syntax and implementation is really
required by users.

 

 

With Regards,

Amit Kapila.



Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Hampus Wessman

Hi all,

Here are some (slightly too long) thoughts about this.

Shaun Thomas wrote on 2012-07-12 22:40:

On 07/12/2012 12:02 PM, Bruce Momjian wrote:


Well, the problem also exists if we add it as an internal database
feature --- how long do we wait to consider the standby dead, how do
we inform administrators, etc.


True. Though if there is no secondary connected, either because it's not
there yet, or because it disconnected, that's an easy check. It's the
network lag/stall detection that's tricky.


It is indeed tricky to detect this. If you don't get an (immediate) 
reply from the secondary (and you never do!), then all you can do is 
wait and *eventually* (after how long? 250ms? 10s?) assume that there is 
no connection between them. The conclusion may very well be wrong 
sometimes. A second problem is that we still don't know if this is 
caused by some kind of network problems or if it's caused by the 
secondary not running. It's perfectly possible that both servers are 
working, but just can't communicate at the moment.


The thing is that what we do next (at least if our data is important and 
why otherwise use synchronous replication of any kind...) depends on 
what *did* happen. Assume that we have two database servers. At any time 
we need at most one primary database to be running. Without that 
requirement our data can get messed up completely... If HA is important 
to us, we may choose to do a failover to the secondary (and live without 
replication for the moment) if the primary fails. With synchronous 
replication, we can do this without losing any data. If the secondary
also dies, then we do lose data (and we'll know it!), but it might be an 
acceptable risk. If the secondary isn't permanently damaged, then we 
might even be able to get the data back after some down time. Ok, so 
that's one way to reconfigure the database servers on a failure. If the 
secondary fails instead, then we can do similarly and remove it from the 
cluster (or in other words, disable synchronous replication to the 
secondary). Again, we don't lose any data by doing this. We're taking a 
certain risk, however. We can't safely do a failover to the secondary 
anymore... So if the primary fails now, then the only way not to lose 
data is to hope that we can get it back from the failed machine (the 
failure may be temporary).


There's also the third possibility, of course, that the two servers are 
both up and running, but they can't communicate over the network at the 
moment (this is, by the way, a difference from RAID, I guess). What do 
we do then? Well, we still need at most one primary database server. 
We'll have to (somehow, which doesn't matter as much) decide which 
database to keep and consider the other one down. Then we can just do 
as above (with all the same implications!). Is it always a good idea to 
keep the primary? No! What if you (as a stupid example) pull the network 
cable from the primary (or maybe turn off a switch so that it's isolated 
from most of the network)? In that case you probably want the secondary 
to take over instead. At least if you value service availability. At 
this point we can still do a safe failover too.


My point here is that if HA is important to you, then you may very well 
want to disable synchronous replication on a failure to avoid down time, 
but this has to be integrated with your overall failover / cluster 
management solution. Just having the primary automatically disable 
synchronous replication doesn't seem overly useful to me... If you're 
using synchronous replication to begin with, you probably want to *know* 
if you may have lost data or not. Otherwise, you will have to assume 
that you did and then you could frankly have been running async 
replication all along. If you do integrate it with your failover 
solution, then you can keep track of when it's safe to do a failover and 
when it's not, however, and decide how to handle each case.


How you decide what to do with the servers on failures isn't that 
important here, really. You can probably run e.g. Pacemaker on 3+ 
machines and have it check for quorums to accomplish this. That's a good 
approach at least. You can still have only 2 database servers (for cost 
reasons), if you want. PostgreSQL could have all this built-in, but I 
don't think it sounds overly useful to only be able to disable 
synchronous replication on the primary after a timeout. Then you can 
never safely do a failover to the secondary, because you can't be sure 
synchronous replication was active on the failed primary...


Regards,
Hampus

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
Hi all,

I am trying to install my FDW project on Windows. I did some research
and I believe I shall be requiring pre-compiled binaries (DLL files). I
tried cross-compiling using MinGW on my Ubuntu machine. I am using PGXS
for compiling and making my project. I tried overriding CC and setting
it to the MinGW C compiler, but I am getting "header file not found"
errors.

How can I compile binaries for windows with PGXS?

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Dave Page
On Fri, Jul 13, 2012 at 9:31 AM, Atri Sharma atri.j...@gmail.com wrote:
 Hi all,

 I am trying to install my FDW project on Windows. I did some research
 and I believe I shall be requiring pre-compiled binaries (DLL files). I
 tried cross-compiling using MinGW on my Ubuntu machine. I am using PGXS
 for compiling and making my project. I tried overriding CC and setting
 it to the MinGW C compiler, but I am getting "header file not found"
 errors.

 How can I compile binaries for windows with PGXS?

Are you trying to build them to use with the EDB distribution, or your
own builds? If the former, then you'll likely need to build with VC++
which means you cannot use PGXS. If it's your own build, then as long
as you used Mingw/Msys in the first place, you should be able to use
PGXS as you would on Linux.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 2:11 PM, Dave Page dp...@pgadmin.org wrote:
 On Fri, Jul 13, 2012 at 9:31 AM, Atri Sharma atri.j...@gmail.com wrote:
 Hi all,

 I am trying to install my FDW project on Windows. I did some research
 and I believe I shall be requiring pre-compiled binaries (DLL files). I
 tried cross-compiling using MinGW on my Ubuntu machine. I am using PGXS
 for compiling and making my project. I tried overriding CC and setting
 it to the MinGW C compiler, but I am getting "header file not found"
 errors.

 How can I compile binaries for windows with PGXS?

 Are you trying to build them to use with the EDB distribution, or your
 own builds? If the former, then you'll likely need to build with VC++
 which means you cannot use PGXS. If it's your own build, then as long
 as you used Mingw/Msys in the first place, you should be able to use
 PGXS as you would on Linux.


 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company

Hi Dave,

I am sorry, I am not too sure about EDB. Can I not cross-compile even for
the EnterpriseDB version?

I am not too good with Windows. I downloaded PostgreSQL 9.1 for Windows
from the PostgreSQL website.

What should I be doing?

Atri
-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Dave Page
On Fri, Jul 13, 2012 at 9:46 AM, Atri Sharma atri.j...@gmail.com wrote:

 Hi Dave,

 I am sorry, I am not too sure about EDB. Can I not cross-compile even for
 the EnterpriseDB version?

I doubt it. The EDB builds are compiled with VC++ 2008 (for 9.0/9.1).
If you cross-compile with GCC, you'll be using a different version of
the runtime library.

 I am not too good with Windows. I downloaded PostgreSQL 9.1 for Windows
 from the PostgreSQL website.

 What should I be doing?

You'll need to get a copy of VC++ 2008 (the express version is a free
download), and create a VC++ project file to compile your FDW. I'd
suggest you actually try building PostgreSQL first, as that will cause
a number of project files to be generated which you could use as a
reference.


-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma


Sent from my iPad

On 13-Jul-2012, at 2:23 PM, Dave Page dp...@pgadmin.org wrote:

 On Fri, Jul 13, 2012 at 9:46 AM, Atri Sharma atri.j...@gmail.com wrote:
 
 Hi Dave,
 
 I am sorry,I am not too sure about EDB.Cannot I cross compile even for
 EnterpriseDB version?
 
 I doubt it. The EDB builds are compiled with VC++ 2008 (for 9.0/9.1).
 If you cross-compile with GCC, you'll be using a different version of
 the runtime library.
 
 I am not too good with windows.I downloaded PostgreSQL 9.1 for windows
 from PostgreSQL website.
 
 What should I be doing?
 
 You'll need to get a copy of VC++ 2008 (the express version is a free
 download), and create a VC++ project file to compile your FDW. I'd
 suggest you actually try building PostgreSQL first, as that will cause
 a number of project files to be generated which you could use as a
 reference.
 
 
 -- 
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake
 
 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


Hi Dave,

Thanks for the advice. I shall build PostgreSQL from sources on Windows, then
try to get VC++ installed and configured?

Atri


Re: [HACKERS] BlockNumber initialized to InvalidBuffer?

2012-07-13 Thread Markus Wanner
On 07/11/2012 05:45 AM, Tom Lane wrote:
 I'm also inclined to think that the while (stack) coding of the rest
 of it is wrong, misleading, or both, on precisely the same grounds: if
 that loop ever did fall out at the test, the function would have failed
 to honor its contract.  The only correct exit points are the returns
 in the middle.

I came to the same conclusion, yes. Looks like the additional asserts in
the attached patch all hold true.

As another minor improvement, it doesn't seem necessary to repeatedly
set the rootBlkno.

Regards

Markus Wanner
#
# old_revision [d90761edf47c6543d4686a76baa0b4e2a7ed113b]
#
# patch src/backend/access/gin/ginbtree.c
#  from [2d3e63387737b4034fc25ca3cb128d9ac57f4f01]
#to [62efe71b8e3429fc1bf2d2742f8d2fa69f2f4de6]
#

*** src/backend/access/gin/ginbtree.c	2d3e63387737b4034fc25ca3cb128d9ac57f4f01
--- src/backend/access/gin/ginbtree.c	62efe71b8e3429fc1bf2d2742f8d2fa69f2f4de6
*** ginInsertValue(GinBtree btree, GinBtreeS
*** 276,294 
  ginInsertValue(GinBtree btree, GinBtreeStack *stack, GinStatsData *buildStats)
  {
  	GinBtreeStack *parent = stack;
! 	BlockNumber rootBlkno = InvalidBuffer;
  	Page		page,
  rpage,
  lpage;
  
  	/* remember root BlockNumber */
! 	while (parent)
  	{
  		rootBlkno = parent->blkno;
  		parent = parent->parent;
! 	}
  
! 	while (stack)
  	{
  		XLogRecData *rdata;
  		BlockNumber savedRightLink;
--- 276,298 
  ginInsertValue(GinBtree btree, GinBtreeStack *stack, GinStatsData *buildStats)
  {
  	GinBtreeStack *parent = stack;
! 	BlockNumber rootBlkno;
  	Page		page,
  rpage,
  lpage;
  
  	/* remember root BlockNumber */
! 	Assert(stack != NULL);
! 	parent = stack;
! 	do
  	{
  		rootBlkno = parent->blkno;
  		parent = parent->parent;
! 	} while (parent);
  
! 	Assert(BlockNumberIsValid(rootBlkno));
! 
! 	for (;;)
  	{
  		XLogRecData *rdata;
  		BlockNumber savedRightLink;
*** ginInsertValue(GinBtree btree, GinBtreeS
*** 469,474 
--- 473,479 
  
  		UnlockReleaseBuffer(stack->buffer);
  		pfree(stack);
+ 		Assert(parent != NULL);		/* parent == NULL case is handled above */
  		stack = parent;
  	}
  }



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Dave Page
On Fri, Jul 13, 2012 at 10:04 AM, Atri Sharma atri.j...@gmail.com wrote:

 Thanks for the advice. I shall build PostgreSQL from sources on Windows,
 then try to get VC++ installed and configured?

No, the other way round - that'll prove VC++ is setup correctly.

Note that you don't actually need to build PG from source (or
shouldn't do) - the reasons I suggest it are that it'll prove your
VC++ environment is setup well, and it'll give you some project files
to use as templates.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 3:21 PM, Dave Page dp...@pgadmin.org wrote:
 On Fri, Jul 13, 2012 at 10:04 AM, Atri Sharma atri.j...@gmail.com wrote:

 Thanks for the advice. I shall build PostgreSQL from sources on Windows,
 then try to get VC++ installed and configured?

 No, the other way round - that'll prove VC++ is setup correctly.

 Note that you don't actually need to build PG from source (or
 shouldn't do) - the reasons I suggest it are that it'll prove your
 VC++ environment is setup well, and it'll give you some project files
 to use as templates.

 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company

Okies, thanks.

Another thing: could you please help me with installing my FDW
project into packaged installations of PostgreSQL on Debian and Red
Hat? I did source-based installations.

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Dave Page
On Fri, Jul 13, 2012 at 11:03 AM, Atri Sharma atri.j...@gmail.com wrote:

 Okies, thanks.

 Another thing: could you please help me with installing my FDW
 project into packaged installations of PostgreSQL on Debian and Red
 Hat? I did source-based installations.

I'm not really involved with the native packages for those platforms
so cannot help I'm afraid. Building against the EDB installers for
Linux should be as simple as using PGXS though.

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 3:35 PM, Dave Page dp...@pgadmin.org wrote:
 On Fri, Jul 13, 2012 at 11:03 AM, Atri Sharma atri.j...@gmail.com wrote:

 Okies, thanks.

 Another thing: could you please help me with installing my FDW
 project into packaged installations of PostgreSQL on Debian and Red
 Hat? I did source-based installations.

 I'm not really involved with the native packages for those platforms
 so cannot help I'm afraid. Building against the EDB installers for
 Linux should be as simple as using PGXS though.

 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company

Ok, thanks a ton!

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] libpq compression

2012-07-13 Thread Magnus Hagander
On Mon, Jun 25, 2012 at 2:26 PM, Magnus Hagander mag...@hagander.net wrote:
 On Mon, Jun 25, 2012 at 4:04 AM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Jun 22, 2012 at 12:38 PM, Euler Taveira eu...@timbira.com wrote:
 On 20-06-2012 17:40, Marko Kreen wrote:
 On Wed, Jun 20, 2012 at 10:05 PM, Florian Pflug f...@phlo.org wrote:
 I'm starting to think that relying on SSL/TLS for compression of
 unencrypted connections might not be such a good idea after all. We'd
 be using the protocol in a way it quite clearly never was intended to
 be used...

 Maybe, but what is the argument that we should avoid
 on encryption+compression at the same time?

 AES is quite lightweight compared to compression, so should
 be no problem in situations where you care about compression.

 If we could solve the compression problem without AES, that would make
things easier. Compression only via encryption is a weird way to solve the
problem from the user's POV.

 RSA is noticeable, but only for short connections.
 Thus easily solvable with connection pooling.

 RSA overhead is not the main problem. SSL/TLS setup is.

 And for really special compression needs you can always
 create a UDF that does custom compression for you.

 You have to own the code to modify it; it is not always an option.

 So what exactly is the situation we need to solve
 with postgres-specific protocol compression?

 Compression only support. Why do I need to set up SSL/TLS just for 
 compression?

 IMHO SSL/TLS use is no different from relying on another library to handle
compression for the protocol and, what's more, it is compression-specific.
That way, we could implement other algorithms in such a library without
needing to modify libpq code. Using SSL/TLS you are bound by what SSL/TLS
software products decide to use as compression algorithms. I'll be happy to
maintain the code if it is postgres-specific or even as close as possible to
core.

 I guess my feeling on this is that, so far as I can see, supporting
 compression via OpenSSL involves work and trade-offs, and supporting
 it without depending on OpenSSL also involves work, and trade-offs.

 Nice summary :)

 So it's not real evident to me that we should prefer one to the other
 on general principle.  It seems to me that a lot might come down to
 performance.  If someone can demonstrate that using an external
 library gets significantly better compression, chews up
 significantly less CPU time, and/or is significantly less code than
 supporting this via OpenSSL, then maybe we ought to consider it.

 I think we should, yes. But as you say, we need to know first. It's
 also a question of if one of these compression schemes are trivial
 enough that we could embed the code rather than rely on it externally
 - I have no idea if that's even remotely possible, but that would move
 the goalposts a bit too.

A followup on this thread. I was researching something else, and
stumbled across
http://www.openssl.org/docs/ssl/SSL_COMP_add_compression_method.html,
which says:


The TLS standard (or SSLv3) allows the integration of compression
methods into the communication. The TLS RFC does however not specify
compression methods or their corresponding identifiers, so there is
currently no compatible way to integrate compression with unknown
peers. It is therefore currently not recommended to integrate
compression into applications. Applications for non-public use may
agree on certain compression methods. Using different compression
methods with the same identifier will lead to connection failure.


Which I think is another reason to not go down that path as our
official way to do compression.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-07-13 Thread Ryan Kelly
On Mon, Jul 09, 2012 at 05:35:15PM +0900, Shigeru HANADA wrote:
 Hi Ryan,
 
 On Mon, Jun 25, 2012 at 9:00 PM, Ryan Kelly rpkell...@gmail.com wrote:
  Connection attempt by \connect command could be also canceled by
  pressing Ctrl+C on psql prompt.
 
  In addition, I tried setting PGCONNECT_TIMEOUT to 0 (infinite), but
  psql gave up after few seconds, for both start-up and re-connect.
  Is this intentional behavior?
  A timeout of 0 (infinite) means to keep trying until we succeed or fail,
  not keep trying forever. As you mentioned above, your connection
  attempts error out after a few seconds. This is what is happening. In my
  environment no such error occurs and as a result psql continues to
  attempt to connect for as long as I let it.
 
 For handy testing, I wrote simple TCP server which accepts connection
 and blocks until client gives up connection attempt (or forever).  When
 I tested your patch with setting PGCONNECT_TIMEOUT=0, I found that
 psql's CPU usage goes up to almost 100% (10~ usr + 80~ sys) while a
 connection attempt by the \c command is in progress.  After reading the code
 for a while, I found that FD_ZERO was not called.  This makes select()
 return immediately every time through the loop and causes a busy loop.
 # Maybe you've forgotten adding FD_ZERO when you were merging Heikki's
 v2 patch.
Yes, I had lost them somewhere. My apologies.

  - Checking status by calling PQstatus just after
  PQconnectStartParams is necessary.
  Yes, I agree.
 
 Checked.
 
  - Copying only select() part of pqSocketPoll seems not enough,
  should we use poll(2) if it is supported?
  I did not think the additional complexity was worth it in this case.
  Unless you see some reason to use poll(2) that I do not.
 
 I checked where select() is used in PG, and noticed that poll is used in
 only a few places.  Agreed to use only select() here.
 
  - Don't we need to clear error message stored in PGconn after
  PQconnectPoll returns OK status, like connectDBComplete?
  I do not believe there is a client interface for clearing the error
  message. Additionally, the documentation states that PQerrorMessage
  Returns the error message most recently generated by an operation on
  the connection. This seems to indicate that the error message should be
  cleared as this behavior is part of the contract of PQerrorMessage.
 
 My comment was pointless.  Sorry for noise.
 
 Here is my additional comments for v5 patch:
 
 - Using static array for fixed-length connection parameters was
 suggested in comments for another CF item, using
 fallback_application_name for some command line tools, and IMO this
 approach would also suit this patch.
 
 http://archives.postgresql.org/message-id/CA+TgmoYZiayts=fjsytzqlg7rewlwkdkey5f+fhp5v5_nu_...@mail.gmail.com
I have done this as well.

 - Some comments go past 80 columns, and opening brace in line 1572
 should go on next line.  Please refer coding conventions written in
 PostgreSQL wiki.
 http://www.postgresql.org/docs/devel/static/source-format.html
I have corrected these issues.

 Once the issues above are fixed, IMO this patch can be marked as Ready
 for committer.
I have also added additional documentation reflecting Heikki's
suggestion that PQconnectTimeout be recommended for use by applications
using the async API.

Attached is v6 of the patch.

 Regards,
 -- 
 Shigeru Hanada
 

-Ryan Kelly
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 5c5dd68..654f5f5 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -436,8 +436,12 @@ switch(PQstatus(conn))
    The <literal>connect_timeout</literal> connection parameter is ignored
    when using <function>PQconnectPoll</function>; it is the application's
    responsibility to decide whether an excessive amount of time has elapsed.
-   Otherwise, <function>PQconnectStart</function> followed by a
-   <function>PQconnectPoll</function> loop is equivalent to
+   It is recommended to use <function>PQconnectTimeout</function> to get the
+   value of <literal>connect_timeout</literal> and use that as the timeout
+   in the application. That way the user gets the same timeout behavior
+   regardless of whether the application uses <function>PQconnectPoll</function>
+   or the blocking connection API. Otherwise, <function>PQconnectStart</function>
+   followed by a <function>PQconnectPoll</function> loop is equivalent to
    <function>PQconnectdb</function>.
   </para>
 
@@ -1496,6 +1500,24 @@ char *PQoptions(const PGconn *conn);
   </para>
  </listitem>
 </varlistentry>
+
+<varlistentry id="libpq-pqconnecttimeout">
+ <term>
+  <function>PQconnectTimeout</function>
+  <indexterm>
+   <primary>PQconnectTimeout</primary>
+  </indexterm>
+ </term>
+
+ <listitem>
+  <para>
+   Returns the connect_timeout property as given to libpq.
+<synopsis>
+char *PQconnectTimeout(const PGconn *conn);
+</synopsis>
+  </para>
+ 
+ 

Re: [HACKERS] Re: [COMMITTERS] pgsql: Fix mapping of PostgreSQL encodings to Python encodings.

2012-07-13 Thread Jan Urbański

On 12/07/12 11:08, Heikki Linnakangas wrote:

On 07.07.2012 00:12, Jan Urbański wrote:

On 06/07/12 22:47, Peter Eisentraut wrote:

On fre, 2012-07-06 at 18:53 +0300, Heikki Linnakangas wrote:

What shall we do about those? Ignore them? Document that if you're
sing
one of these encodings then PL/Python with Python 2 will be crippled
and
with Python 3 just won't work?


We could convert to UTF-8, and use the PostgreSQL functions to convert
from UTF-8 to the server encoding. Double conversion might be slow, but
I think it would be better than failing.


Actually, we already do the other direction that way
(PLyUnicode_FromStringAndSize), so maybe it would be more consistent to
always use this.

I would hesitate to use this as a kind of fallback, because then we
would sometimes be using PostgreSQL's recoding tables and sometimes
Python's recoding tables, which could become confusing.


So you're in favour of doing unicode -> bytes by encoding with UTF-8 and
then using the server's encoding functions?


Sounds reasonable to me. The extra conversion between UTF-8 and UCS-2
should be quite fast, and it would be good to be consistent in the way
we do conversions in both directions.



I'll implement that, then (sorry for not following up on this earlier).

J



Re: [HACKERS] BlockNumber initialized to InvalidBuffer?

2012-07-13 Thread Markus Wanner
On 07/13/2012 11:33 AM, Markus Wanner wrote:
 As another minor improvement, it doesn't seem necessary to repeatedly
 set the rootBlkno.

Sorry, my mail program delivered an older version of the patch, which
didn't reflect that change. Here's what I intended to send.

Regards

Markus Wanner
#
# Minor cleanup of ginInsertValue with additional asserts and
# a tighter loop for finding the root in the GinBtreeStack.
#

*** src/backend/access/gin/ginbtree.c	2d3e63387737b4034fc25ca3cb128d9ac57f4f01
--- src/backend/access/gin/ginbtree.c	f6dc88ae5716275e62a6f6715aa7204abd430089
*** ginInsertValue(GinBtree btree, GinBtreeS
*** 276,294 
  ginInsertValue(GinBtree btree, GinBtreeStack *stack, GinStatsData *buildStats)
  {
  	GinBtreeStack *parent = stack;
! 	BlockNumber rootBlkno = InvalidBuffer;
  	Page		page,
  rpage,
  lpage;
  
! 	/* remember root BlockNumber */
! 	while (parent)
! 	{
! 		rootBlkno = parent->blkno;
  		parent = parent->parent;
- 	}
  
! 	while (stack)
  	{
  		XLogRecData *rdata;
  		BlockNumber savedRightLink;
--- 276,296 
  ginInsertValue(GinBtree btree, GinBtreeStack *stack, GinStatsData *buildStats)
  {
  	GinBtreeStack *parent = stack;
! 	BlockNumber rootBlkno;
  	Page		page,
  rpage,
  lpage;
  
! 	/* extract root BlockNumber from stack */
! 	Assert(stack != NULL);
! 	parent = stack;
! 	while (parent->parent)
  		parent = parent->parent;
  
! 	rootBlkno = parent->blkno;
! 	Assert(BlockNumberIsValid(rootBlkno));
! 
! 	for (;;)
  	{
  		XLogRecData *rdata;
  		BlockNumber savedRightLink;
*** ginInsertValue(GinBtree btree, GinBtreeS
*** 469,474 
--- 471,477 
  
  		UnlockReleaseBuffer(stack->buffer);
  		pfree(stack);
+ 		Assert(parent != NULL);		/* parent == NULL case is handled above */
  		stack = parent;
  	}
  }



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Albe Laurenz
Atri Sharma wrote:
 On Fri, Jul 13, 2012 at 2:11 PM, Dave Page dp...@pgadmin.org wrote:
 On Fri, Jul 13, 2012 at 9:31 AM, Atri Sharma atri.j...@gmail.com
wrote:
 I am trying to install my FDW project on Windows. I did some research
 and I believe I shall be requiring pre-compiled binaries (DLL files). I
 tried cross-compiling using MinGW on my Ubuntu. I am using PGXS for
 compiling and making my project. I tried overriding CC and setting it
 to the MinGW C compiler, but I am getting "header file not found"
 errors.

 How can I compile binaries for windows with PGXS?

 Are you trying to build them to use with the EDB distribution, or
your
 own builds? If the former, then you'll likely need to build with VC++
 which means you cannot use PGXS. If it's your own build, then as long
 as you used Mingw/Msys in the first place, you should be able to use
 PGXS as you would on Linux.

 I am sorry, I am not too sure about EDB. Can't I cross-compile even for
 the EnterpriseDB version?
 
 I am not too good with Windows. I downloaded PostgreSQL 9.1 for Windows
 from the PostgreSQL website.
 
 What should I be doing?

I have never tried cross compiling PostgreSQL for Windows, and I
don't know if it works.  If you have a Windows machine, do it there,
if not, forget it since you cannot even test your binaries :^)

You can either use PGXS with MinGW, which should work just like
building on UNIX, or use Microsoft Visual C++ / Windows SDK.

If you use EnterpriseDB's binary downloads, which are built
with MSVC, you'll have to compile your extension by hand.

I have read a report that extensions built with MinGW are
compatible with EDB's binaries if you use --disable-float8-byval
(http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
apper-odbc_fdw-on-windows.html).

Yours,
Laurenz Albe



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 5:37 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 Atri Sharma wrote:
 On Fri, Jul 13, 2012 at 2:11 PM, Dave Page dp...@pgadmin.org wrote:
 On Fri, Jul 13, 2012 at 9:31 AM, Atri Sharma atri.j...@gmail.com
 wrote:
 I am trying to install my FDW project on windows.I did some research
 and I believe I shall be requiring pre compiled binaries(dll
 files).I
 tried cross compiling using MinGW on my Ubuntu.I am using PGXS for
 compiling and making my project.I tried overriding CC and setting it
 to the MinGW C compiler,but I am getting header file not found
 errors.

 How can I compile binaries for windows with PGXS?

 Are you trying to build them to use with the EDB distribution, or
 your
 own builds? If the former, then you'll likely need to build with VC++
 which means you cannot use PGXS. If it's your own build, then as long
 as you used Mingw/Msys in the first place, you should be able to use
 PGXS as you would on Linux.

 I am sorry,I am not too sure about EDB.Cannot I cross compile even for
 EnterpriseDB version?

 I am not too good with windows.I downloaded PostgreSQL 9.1 for windows
 from PostgreSQL website.

 What should I be doing?

 I have never tried cross compiling PostgreSQL for Windows, and I
 don't know if it works.  If you have a Windows machine, do it there,
 if not, forget it since you cannot even test your binaries :^)

 You can either use PGXS with MinGW, which should work just like
 building on UNIX, or use Microsoft Visual C++ / Windows SDK.

 If you use EnterpriseDB's binary downloads, which are built
 with MSVC, you'll have to compile your extension by hand.

 I have read a report that extensions built with MinGW are
 compatible with EDB's binaries if you use --disable-float8-byval
 (http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
 apper-odbc_fdw-on-windows.html).

 Yours,
 Laurenz Albe

Laurenz,

How do I use PGXS with MinGW?

Atri


-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Albe Laurenz
Atri Sharma wrote:
 How do I use PGXS with MinGW?

Just as described in
http://www.postgresql.org/docs/9.1/static/extend-pgxs.html

Build PostgreSQL with MinGW
(./configure && make && make install)
and use a Makefile as shown in the documentation.
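For reference, a minimal PGXS Makefile of the kind that page describes might look like the sketch below; "my_fdw" and the SQL script name are placeholders for your own extension's files.

```makefile
# Minimal PGXS Makefile sketch; my_fdw is a placeholder extension name.
MODULES = my_fdw                  # builds my_fdw.so (my_fdw.dll on Windows) from my_fdw.c
EXTENSION = my_fdw
DATA = my_fdw--1.0.sql

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

Running `make` and `make install` from the MSYS shell then works the same as on Linux, provided the pg_config found on the PATH belongs to the MinGW-built server.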

Yours,
Laurenz Albe



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Andrew Dunstan


On 07/13/2012 08:07 AM, Albe Laurenz wrote:
I have read a report that extensions built with MinGW are compatible 
with EDB's binaries if you use --disable-float8-byval 
(http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr 
apper-odbc_fdw-on-windows.html).



Unfortunately, not always.

I had to change how file_fixed_length_record_fdw did its IO before this 
would work.


cheers

andrew





Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 6:13 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 07/13/2012 08:07 AM, Albe Laurenz wrote:

 I have read a report that extensions built with MinGW are compatible with
 EDB's binaries if you use --disable-float8-byval
 (http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
 apper-odbc_fdw-on-windows.html).



 Unfortunately, not always.

 I had to change how file_fixed_length_record_fdw did its IO before this
 would work.

 cheers

 andrew



Andrew,

I am trying to make pre-compiled binaries so that, in case users of my
project (on Windows) do not have a build environment configured, they
can use the pre-compiled binaries to use my project. The best way to do
this seems to be to cross-compile.

I tried overriding CC in the makefile of my project and setting it to
the MinGW C compiler, but it gave errors (header files not found).

What changes should I make?

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Andrew Dunstan


On 07/13/2012 08:57 AM, Atri Sharma wrote:

On Fri, Jul 13, 2012 at 6:13 PM, Andrew Dunstan and...@dunslane.net wrote:

On 07/13/2012 08:07 AM, Albe Laurenz wrote:

I have read a report that extensions built with MinGW are compatible with
EDB's binaries if you use --disable-float8-byval
(http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
apper-odbc_fdw-on-windows.html).



Unfortunately, not always.

I had to change how file_fixed_length_record_fdw did its IO before this
would work.

cheers

andrew



Andrew,

I am trying to make pre compiled binaries so that,in case users of my
project(on windows) do not have a build environment configured,they
can use the pre compiled binaries to use my project.The best way to do
this seems to be cross compile.

I tried overriding CC in makefile of my project and setting the value
of CC to MinGW C compiler,but,it gave errors(header files not found).

What changes should I make?




Don't cross-compile. Build on Windows.

cheers

andrew






Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 6:40 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 07/13/2012 08:57 AM, Atri Sharma wrote:

 On Fri, Jul 13, 2012 at 6:13 PM, Andrew Dunstan and...@dunslane.net
 wrote:

 On 07/13/2012 08:07 AM, Albe Laurenz wrote:

 I have read a report that extensions built with MinGW are compatible
 with
 EDB's binaries if you use --disable-float8-byval
 (http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
 apper-odbc_fdw-on-windows.html).



 Unfortunately, not always.

 I had to change how file_fixed_length_record_fdw did its IO before this
 would work.

 cheers

 andrew


 Andrew,

 I am trying to make pre compiled binaries so that,in case users of my
 project(on windows) do not have a build environment configured,they
 can use the pre compiled binaries to use my project.The best way to do
 this seems to be cross compile.

 I tried overriding CC in makefile of my project and setting the value
 of CC to MinGW C compiler,but,it gave errors(header files not found).

 What changes should I make?



 Don't cross-compile. Build on Windows.

 cheers

 andrew




So, I should set up the build environment on Windows and then install my FDW?

Atri

-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Boszormenyi Zoltan

On 2012-07-11 21:47, Tom Lane wrote:

Boszormenyi Zoltan z...@cybertec.at writes:

Attached are the refreshed patches. InitializeTimeouts() can be called
twice and PGSemaphoreTimedLock() returns bool now. This saves
two calls to get_timeout_indicator().

I'm starting to look at this patch now.  There are a number of cosmetic
things I don't care for, the biggest one being the placement of
timeout.c under storage/lmgr/.  That seems an entirely random place,
since the functionality provided has got nothing to do with storage
let alone locks.  I'm inclined to think that utils/misc/ is about
the best option in the existing backend directory hierarchy.  Anybody
object to that, or have a better idea?


Good idea, storage/lmgr/timeout.c was chosen simply because
it was born out of files living there.


Another thing that needs some discussion is the handling of
InitializeTimeouts.  As designed, I think it's completely unsafe,
the reason being that if a process using timeouts forks off another
one, the child will inherit the parent's timeout reasons and be unable
to reset them.  Right now this might not be such a big problem because
the postmaster doesn't need any timeouts, but what if it does in the
future?  So I think we should drop the base_timeouts_initialized
protection, and that means we need a pretty consistent scheme for
where to call InitializeTimeouts.  But we already have the same issue
with respect to on_proc_exit callbacks, so we can just add
InitializeTimeouts calls in the same places as on_exit_reset().

Comments?

I'll work up a revised patch and post it.

regards, tom lane




--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Andrew Dunstan


On 07/13/2012 09:12 AM, Atri Sharma wrote:

On Fri, Jul 13, 2012 at 6:40 PM, Andrew Dunstan and...@dunslane.net wrote:

On 07/13/2012 08:57 AM, Atri Sharma wrote:

On Fri, Jul 13, 2012 at 6:13 PM, Andrew Dunstan and...@dunslane.net
wrote:

On 07/13/2012 08:07 AM, Albe Laurenz wrote:

I have read a report that extensions built with MinGW are compatible
with
EDB's binaries if you use --disable-float8-byval
(http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
apper-odbc_fdw-on-windows.html).



Unfortunately, not always.

I had to change how file_fixed_length_record_fdw did its IO before this
would work.

cheers

andrew



Andrew,

I am trying to make pre compiled binaries so that,in case users of my
project(on windows) do not have a build environment configured,they
can use the pre compiled binaries to use my project.The best way to do
this seems to be cross compile.

I tried overriding CC in makefile of my project and setting the value
of CC to MinGW C compiler,but,it gave errors(header files not found).

What changes should I make?



Don't cross-compile. Build on Windows.

cheers

andrew




So,I should set up the build environment on windows and then install my FDW?




Yes.


cheers

andrew




Re: [HACKERS] Regarding installation of FDW on Windows

2012-07-13 Thread Atri Sharma
On Fri, Jul 13, 2012 at 7:03 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 07/13/2012 09:12 AM, Atri Sharma wrote:

 On Fri, Jul 13, 2012 at 6:40 PM, Andrew Dunstan and...@dunslane.net
 wrote:

 On 07/13/2012 08:57 AM, Atri Sharma wrote:

 On Fri, Jul 13, 2012 at 6:13 PM, Andrew Dunstan and...@dunslane.net
 wrote:

 On 07/13/2012 08:07 AM, Albe Laurenz wrote:

 I have read a report that extensions built with MinGW are compatible
 with
 EDB's binaries if you use --disable-float8-byval

 (http://www.postgresonline.com/journal/archives/246-ODBC-Foreign-Data-wr
 apper-odbc_fdw-on-windows.html).



 Unfortunately, not always.

 I had to change how file_fixed_length_record_fdw did its IO before this
 would work.

 cheers

 andrew


 Andrew,

 I am trying to make pre compiled binaries so that,in case users of my
 project(on windows) do not have a build environment configured,they
 can use the pre compiled binaries to use my project.The best way to do
 this seems to be cross compile.

 I tried overriding CC in makefile of my project and setting the value
 of CC to MinGW C compiler,but,it gave errors(header files not found).

 What changes should I make?


 Don't cross-compile. Build on Windows.

 cheers

 andrew



 So,I should set up the build environment on windows and then install my
 FDW?



 Yes.


 cheers

 andrew


Thanks Andrew!


-- 
Regards,

Atri
l'apprenant



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Boszormenyi Zoltan

On 2012-07-11 22:59, Tom Lane wrote:

I wrote:

I'm starting to look at this patch now.

After reading this further, I think that the sched_next option is a
bad idea and we should get rid of it.  AFAICT, what it is meant to do
is (if !sched_next) automatically do disable_all_timeouts(true) if
the particular timeout happens to fire.  But there is no reason the
timeout's callback function couldn't do that; and doing it in the
callback is more flexible since you could have logic about whether to do
it or not, rather than freezing the decision at RegisterTimeout time.
Moreover, it does not seem to me to be a particularly good idea to
encourage timeouts to have such behavior, anyway.  Each time we add
another timeout we'd have to look to see if it's still sane for each
existing timeout to use !sched_next.  It would likely be better, in
most cases, for individual callbacks to explicitly disable any other
individual timeout reasons that should no longer be fired.


+1


I am also underwhelmed by the timeout_start callback function concept.


It was generalized out of static TimestampTz timeout_start_time;
in proc.c which is valid if deadlock_timeout is activated. It is used
in ProcSleep() for logging the time difference between the time when
the timeout was activated and now at several places in that function.


In the first place, that's broken enable_timeout, which incorrectly
assumes that the value it gets must be now (see its schedule_alarm
call).


You're right, it must be fixed.


   In the second place, it seems fairly likely that callers of
get_timeout_start would likewise want the clock time at which the
timeout was enabled, not the timeout_start reference time.  (If they
did want the latter, why couldn't they get it from wherever the callback
function had gotten it?)  I'm inclined to propose that we drop the
timeout_start concept and instead provide two functions for scheduling
interrupts:

enable_timeout_after(TimeoutName tn, int delay_ms);
enable_timeout_at(TimeoutName tn, TimestampTz fin_time);

where you use the former if you want the standard GetCurrentTimestamp +
n msec calculation, but if you want the stop time calculated in some
other way, you calculate it yourself and use the second function.

regards, tom lane




--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Bruce Momjian
On Fri, Jul 13, 2012 at 09:12:56AM +0200, Hampus Wessman wrote:
 How you decide what to do with the servers on failures isn't that
 important here, really. You can probably run e.g. Pacemaker on 3+
 machines and have it check for quorums to accomplish this. That's a
 good approach at least. You can still have only 2 database servers
 (for cost reasons), if you want. PostgreSQL could have all this
 built-in, but I don't think it sounds overly useful to only be able
 to disable synchronous replication on the primary after a timeout.
 Then you can never safely do a failover to the secondary, because
 you can't be sure synchronous replication was active on the failed
 primary...

So how about this for a Postgres TODO:

Add configuration variable to allow Postgres to disable synchronous
replication after a specified timeout, and add variable to alert
administrators of the change.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [PATCH] psql \n shortcut for set search_path =

2012-07-13 Thread Colin 't Hart
On 10 July 2012 18:00, Tom Lane t...@sss.pgh.pa.us wrote:

 Josh Kupershmidt schmi...@gmail.com writes:
  On Tue, Jul 10, 2012 at 2:09 AM, Colin 't Hart co...@sharpheart.org
 wrote:
  Attached please find a trivial patch for psql which adds a \n meta
 command
  as a shortcut for typing set search_path =.

  I think the use-case is a bit narrow: saving a few characters typing
  on a command not everyone uses very often (I don't), at the expense of
  adding yet another command to remember.

 Another point here is that we are running low on single-letter backslash
 command names in psql.  I'm not sure that SET SEARCH_PATH is so useful
 as to justify using up one of the ones that are left.

 ISTM there was some discussion awhile back about user-definable typing
 shortcuts in psql.  I don't recall any details, but being able to set
 up SET SEARCH_PATH as a user-definable shortcut if it's useful to you
 would eliminate the question about whether it's useful to everyone.


And these could be setup to be available on psql startup by adding them to
.psqlrc

While I like my \n idea (heck, I thought of it :-) ), this would be a very
good generic solution.

I did a quick search but couldn't find the relevant discussion: do you
remember roughly when it was?
If I find it I could have a go at trying to implement it, but it might
exceed my ability in C...

Cheers,

Colin


Re: [HACKERS] [PATCH] psql \n shortcut for set search_path =

2012-07-13 Thread Colin 't Hart
On 10 July 2012 18:24, David Fetter da...@fetter.org wrote:

 On Tue, Jul 10, 2012 at 12:00:06PM -0400, Tom Lane wrote:
  Josh Kupershmidt schmi...@gmail.com writes:
   On Tue, Jul 10, 2012 at 2:09 AM, Colin 't Hart co...@sharpheart.org
 wrote:
   Attached please find a trivial patch for psql which adds a \n
   meta command as a shortcut for typing set search_path =.
 
   I think the use-case is a bit narrow: saving a few characters
   typing on a command not everyone uses very often (I don't), at the
   expense of adding yet another command to remember.
 
  Another point here is that we are running low on single-letter
  backslash command names in psql.  I'm not sure that SET
  SEARCH_PATH is so useful as to justify using up one of the ones
  that are left.
 
  ISTM there was some discussion awhile back about user-definable
  typing shortcuts in psql.

 In some sense, we already have them:

 \set FOO 'SELECT * FROM pg_stat_activity;'
 ...
 :FOO

 Was there more to it?


Can I pass a parameter to :FOO ?

Cheers,

Colin


Re: [HACKERS] BlockNumber initialized to InvalidBuffer?

2012-07-13 Thread Tom Lane
Markus Wanner mar...@bluegap.ch writes:
 On 07/13/2012 11:33 AM, Markus Wanner wrote:
 As another minor improvement, it doesn't seem necessary to repeatedly
 set the rootBlkno.

 Sorry, my mail program delivered an older version of the patch, which
 didn't reflect that change. Here's what I intended to send.

Applied to HEAD with a little bit of further tweaking.

regards, tom lane



[HACKERS] Query planning, nested loops and FDW.

2012-07-13 Thread Ronan Dunklau
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello.

Thanks to the indications given by Tom Lane, I was able to make use of
parameterized path in our multicorn extension.

For example, given a (local) table real_table and another (foreign)
table foreign_table, having the same set of columns, if the foreign
table declares that a filter on col1 and col2 will, on average, return
100 rows (vs 1million for an unparameterized path), postgresql will
produce the following plan:

explain select real_table.*, foreign_table.*
from real_table inner join foreign_table
on real_table.col1 = foreign_table.col1 and
   real_table.col2 = foreign_table.col2;

   QUERY PLAN
-------------------------------------------------------------------------
 Nested Loop  (cost=10.00..19145165.27 rows=50558 width=64)
   ->  Seq Scan on real_table  (cost=0.00..38.27 rows=2127 width=32)
   ->  Foreign Scan on foreign_table  (cost=10.00..9000.00 rows=100 width=90)
         Filter: ((real_table.col1 = (col1)::text) AND (real_table.col2 = (col2)::text))

This is exactly what I wanted to achieve.

But there is still room for improvement here, because the inner
foreign scan will be executed for each row, and not for each distinct
couple of col1, col2.

Does this happen because:
 - there is no other way to proceed.
 - the planner thinks it is less costly to execute the inner scan
multiple times than to do whatever it takes to execute it only one
time for each distinct couple ?

Best regards,

- -- 
Ronan Dunklau

-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.19 (GNU/Linux)

iQEcBAEBAgAGBQJQAEbxAAoJECTYLCgFy323FTUH/j81AAT1ODBdizIdTV+yI7nX
KjCg+hBwTlKMs8l8KUuslEo0wp3Wc8Yem0PFCvO3+0IYZ26iGsi5jIoqflaZ86gZ
MAjRoUyXfn3Maz/vU3TIYYwYnWhMp1i4GwFf6bqXaVlCVYAaARetksxc5o52lZZT
cgN/D1wek6FQkKSSN916siuIwlkEIHiMkB3VF2up1veRtzPbOvvosmdAKyYaMAcH
auqOBu/PVMUkR/5g/HbqkK+DoN3PYXpUw7LWPfoAQHEYijCPdR9De9BnJGW4RZL2
j2xkixJSR0h8KYkgH6WIAXTfbz1/l9GFXe0mJFskfU42mGmpLO41YeqhbSHaNtw=
=SiMC
-END PGP SIGNATURE-



[HACKERS] Type modifier parameter of input function

2012-07-13 Thread Michael Schneider

Hi,

whenever pg calls my input function, the type modifier parameter is 
ALWAYS (-1). If I specify a type modifier like


SELECT 'Hello World!'::my_string_type(MODIFIER1,MODIFIER2);

then pg:
1. calls the my_string_type->typmod_in function, and gets the correct result
2. calls the my_string_type->input function with type modifier parameter (-1)
3. calls the CAST(my_string_type AS my_string_type) function with the 
correct type modifier returned by the typmod_in function and the 
my_string_type pointer returned by the input function.


How can I convince pg to call the input function with the correct type 
modifier?


Thanks in advance

Regards

Michael







Re: [HACKERS] Type modifier parameter of input function

2012-07-13 Thread Tom Lane
Michael Schneider mschn...@mpi-bremen.de writes:
 whenever pg calls my input function, the type modifier parameter is 
 ALWAYS (-1).
 ...
 How can I convince pg to call the input function with the correct type 
 modifier?

You can't.  Per the comment in coerce_type:

 * For most types we pass typmod -1 to the input routine, because
 * existing input routines follow implicit-coercion semantics for
 * length checks, which is not always what we want here.  Any length
 * constraint will be applied later by our caller.  An exception
 * however is the INTERVAL type, for which we *must* pass the typmod
 * or it won't be able to obey the bizarre SQL-spec input rules. (Ugly
 * as sin, but so is this part of the spec...)

If the SQL standard didn't have such bizarrely inconsistent rules for
the results of implicit vs explicit coercions to char(n) and varchar(n),
we'd have more flexibility here.
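The inconsistency Tom refers to is easy to see from SQL; in this sketch, t and v are placeholder names:

```sql
-- Explicit coercion to varchar(3) silently truncates:
SELECT 'abcdef'::varchar(3);     -- returns 'abc'

-- Implicit (assignment) coercion applies the length check instead:
CREATE TABLE t (v varchar(3));
INSERT INTO t VALUES ('abcdef');
-- ERROR:  value too long for type character varying(3)
```

Because both behaviors must be supported, the typmod cannot simply be pushed down into the input function.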

regards, tom lane



Re: [HACKERS] pl/perl and utf-8 in sql_ascii databases

2012-07-13 Thread Alvaro Herrera

Excerpts from Kyotaro HORIGUCHI's message of jue jul 12 00:09:19 -0400 2012:
 
 Hmm... Sorry for immature patch..

No need to apologize.

  ... and this story hasn't ended yet, because one of the new tests is
  failing.  See here:
  
  http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=magpiedt=2012-07-11%2010%3A00%3A04
  
  The interesting part of the diff is:
 ...
SELECT encode(perl_utf_inout(E'ab\xe5\xb1\xb1cd')::bytea, 'escape')
  ! ERROR:  character with byte sequence 0xe5 0xb7 0x9d in encoding UTF8 
  has no equivalent in encoding LATIN1
  ! CONTEXT:  PL/Perl function perl_utf_inout
  
  
  I am not sure what can we do here other than remove this function and
  query from the test.
 
 I've run the regress only for the environment capable to handle
 the character U+5ddd (Japanese character which means river)...
 
 The byte sequences which can be decoded and the result byte
 sequences of encoding from a unicode character vary among the
 encodings.

Right.  I only ran the test in C and UTF8, not Latin1, so I didn't see
it fail either.

 The problem itself which is the aim of this thread could be
 covered without the additional test. That confirms if
 encoding/decoding is done as expected on calling the language
 handler.

Right.

 I suppose that testing for the two cases and additional
 one case which runs pg_do_encoding_conversion(), say latin1,
 would be enough to confirm that encoding/decoding is properly
 done, since the concrete conversion scheme is not significant
 this case.
 
 So I recommend that we should add the test for latin1 and omit
 the test from other than sql_ascii, utf8 and latin1. This might
 be achieved by creating empty plperl_lc.sql and plperl_lc.out
 files for those encodings.
 
 What do you think about that?

I think that's probably too much engineering for something that doesn't
really warrant it.  A real solution to this problem could be to create
yet another new test file containing just this function definition and
the query that calls it, and have one expected file for each encoding;
but that's too much work and too many files, I'm afraid.

I can see us supporting tests that require a small number of expected
files.  No Make tricks with file copying, though.  If we can't get
some easy way to test this without that, I submit we should just remove
the test.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] initdb and fsync

2012-07-13 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On Tuesday, June 19, 2012 07:22:02 PM Jeff Davis wrote:
 Right now I'm inclined to leave the patch as-is.

 Fine with that, I wanted to bring it up and see it documented.

 I have marked it with ready for committer. That committer needs to decide on -
 N in the regression tests or not, but that shouldn't be much of a problem ;)

I'm picking up this patch now.  What I'm inclined to do about the -N
business is to commit without that, so that we get a round of testing
in the buildfarm and find out about any portability issues, but then
change to use -N after a week or so.  I agree that in the long run
we don't want regression tests to run with fsyncs by default.

regards, tom lane



Re: [HACKERS] initdb and fsync

2012-07-13 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 On Mon, 2012-06-18 at 20:57 +0200, Andres Freund wrote:
 Ok. Sensible reasons. I dislike that we know have two files using different 
 logic (copydir.c only using fadvise, initdb using sync_file_range if 
 available). Maybe we should just move the fadvise and sync_file_range calls 
 into its own common function?

 I don't see fadvise in copydir.c, it looks like it just uses fsync. It
 might speed it up to use a pre-sync call there, too -- database creation
 does take a while.

No, that's incorrect: the fadvise is there, inside pg_flush_data() which
is done during the writing phase.  It seems to me that if we think
sync_file_range is a win, we ought to be using it in pg_flush_data just
as much as in initdb.

However, I'm a bit confused because in
http://archives.postgresql.org/pgsql-hackers/2012-03/msg01098.php
you said

 So, it looks like fadvise is the right thing to do, but I expect we'll

Was that backwards?  If not, why are we bothering with taking any
portability risks by adding use of sync_file_range?

regards, tom lane



Re: [HACKERS] Support for XLogRecPtr in expand_fmt_string?

2012-07-13 Thread Peter Eisentraut
On tor, 2012-07-12 at 10:13 +0300, Heikki Linnakangas wrote:
 One idea would be to use a macro, like this:
 
 #define XLOGRECPTR_FMT_ARGS(recptr) \
 	(uint32) ((recptr) >> 32), (uint32) (recptr)
 
 elog(LOG, "current WAL location is %X/%X",
 	 XLOGRECPTR_FMT_ARGS(RecPtr));
 
I would rather get rid of this %X/%X notation.  I know we have all grown
to like it, but it's always been a workaround.  We're now making the
move to simplify this whole business by saying, the WAL location is an
unsigned 64-bit number -- which everyone can understand -- but then why
is it printed in some funny format?





Re: [HACKERS] Schema version management

2012-07-13 Thread Peter Eisentraut
On ons, 2012-07-11 at 17:20 -0400, Alvaro Herrera wrote:
 operator_!___numeric.sql (postfix, name does not need escape)
 operator_%7C%2F_integer__.sql (prefix)
 operator_%3C_bit_varying__bit_varying.sql (type name with spaces,
 changed to _)

I'm not sure if it makes things better to escape some operator names and
some not.  It could easily become confusing.
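The escaping scheme implied by the filenames above can be sketched as a tiny helper (a hypothetical illustration, not pg_dump's actual code): alphanumerics and underscore pass through, spaces in type names become `_`, and everything else becomes a `%XX` escape.

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the filename escaping shown in the thread:
 * keep [A-Za-z0-9_], turn spaces into '_', percent-escape the rest. */
static void
escape_for_filename(const char *in, char *out, size_t outsz)
{
    size_t o = 0;

    for (; *in && o + 4 < outsz; in++)
    {
        unsigned char c = (unsigned char) *in;

        if (isalnum(c) || c == '_')
            out[o++] = (char) c;
        else if (c == ' ')
            out[o++] = '_';     /* "bit varying" -> "bit_varying" */
        else
            o += (size_t) snprintf(out + o, outsz - o, "%%%02X",
                                   (unsigned) c);
    }
    out[o] = '\0';
}
```

With this, the operator `|/` maps to `%7C%2F` and `<` to `%3C`, matching the example filenames quoted above.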




Re: [HACKERS] Schema version management

2012-07-13 Thread Peter Eisentraut
On Thu, 2012-07-12 at 16:14 +0200, Joel Jacobson wrote:
 Could work. But I think it's more relevant and useful to keep all objects
 in a schema in its own directory.

Personally, I hate this proposed nested directory structure.  I would
like to have all objects in one directory.

But there is a lot of "personally" in this thread, of course.

 I think it's more common to want to "show all objects within schema X"
 than to "show all schemas of type X".

Or maybe it isn't ...





Re: [HACKERS] pgsql_fdw in contrib

2012-07-13 Thread Peter Eisentraut
On Thu, 2012-07-12 at 19:44 +0900, Shigeru HANADA wrote:
 Yes, I've proposed to rename existing postgresql_fdw_validator to
 dblink_fdw_validator and move it into contrib/dblink so that pgsql_fdw
 can use the name postgresql_fdw and postgresql_fdw_validator.

I was somehow under the impression that plproxy also used this, but I
see now that it doesn't.  So I agree with renaming the existing
postgresql_fdw and moving it into the dblink module.





Re: [HACKERS] Support for XLogRecPtr in expand_fmt_string?

2012-07-13 Thread Bruce Momjian
On Fri, Jul 13, 2012 at 10:34:35PM +0300, Peter Eisentraut wrote:
 On Thu, 2012-07-12 at 10:13 +0300, Heikki Linnakangas wrote:
  One idea would be to use a macro, like this:
  
  #define XLOGRECPTR_FMT_ARGS(recptr) (uint32) ((recptr) >> 32), (uint32) (recptr)
  
  elog(LOG, "current WAL location is %X/%X",
  XLOGRECPTR_FMT_ARGS(RecPtr));
  
 I would rather get rid of this %X/%X notation.  I know we have all grown
 to like it, but it's always been a workaround.  We're now making the
 move to simplify this whole business by saying, the WAL location is an
 unsigned 64-bit number -- which everyone can understand -- but then why
 is it printed in some funny format?

+1

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] Support for XLogRecPtr in expand_fmt_string?

2012-07-13 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Fri, Jul 13, 2012 at 10:34:35PM +0300, Peter Eisentraut wrote:
 I would rather get rid of this %X/%X notation.

 +1

I'm for it if we can find a less messy way of dealing with the
platform-specific-format-code issue.  I don't want to be plugging
UINT64_FORMAT into string literals in a pile of places.

Personally I think that a function returning a static string
buffer is probably good enough for this.  If there are places
where we need to print more than one XLogRecPtr value in a message,
we could have two of them.  (Yeah, it's ugly, but less so than
dealing with platform-specific format codes everywhere.)
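A minimal sketch of that idea (hypothetical names, not committed code): a formatter that rotates between two static buffers so that a single message can print two WAL locations.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t XLogRecPtr;    /* the 64-bit WAL location discussed above */

/* Format an LSN as the familiar high/low pair.  Rotating between two
 * static buffers lets one elog() call print two LSNs. */
static const char *
lsn_out(XLogRecPtr ptr)
{
    static char bufs[2][32];
    static int  which = 0;

    which ^= 1;                 /* alternate buffers on each call */
    snprintf(bufs[which], sizeof(bufs[which]), "%X/%X",
             (unsigned) (ptr >> 32), (unsigned) ptr);
    return bufs[which];
}
```

A message such as `elog(LOG, "recovered from %s to %s", lsn_out(a), lsn_out(b))` works because the two calls land in different buffers; a third LSN in the same message would clobber the first, which is the ugliness Tom acknowledges.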

regards, tom lane



[HACKERS] isolation check takes a long time

2012-07-13 Thread Andrew Dunstan
Why does the isolation check take such a long time? On some of my slower 
buildfarm members I am thinking of disabling it because it takes so 
long. This single test typically takes longer than a full serial 
standard regression test. Is there any way we could make it faster?


cheers

andrew





Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Boszormenyi Zoltan

On 2012-07-12 19:05, Tom Lane wrote:

Here is a revised version of the timeout-infrastructure patch.
I whacked it around quite a bit, notably:

* I decided that the most convenient way to handle the initialization
issue was to combine establishment of the signal handler with resetting
of the per-process variables.  So handle_sig_alarm is no longer global,
and InitializeTimeouts is called at the places where we used to do
"pqsignal(SIGALRM, handle_sig_alarm);".  I believe this is correct
because any subprocess that was intending to use SIGALRM must have
called that before establishing any timeouts.


OK.


* BTW, doing that exposed the fact that walsender processes were failing
to establish a SIGALRM signal handler, which is a pre-existing bug,
because they run a normal authentication transaction during InitPostgres
and hence need to be able to cope with deadlock and statement timeouts.
I will do something about back-patching a fix for that.


Wow, my work uncovered a pre-existing bug in PostgreSQL. :-)


* I ended up putting the RegisterTimeout calls for DEADLOCK_TIMEOUT
and STATEMENT_TIMEOUT into InitPostgres, ensuring that they'd get
done in walsender and autovacuum processes.  I'm not totally satisfied
with that, but for sure it didn't work to only establish them in
regular backends.

* I didn't like the TimeoutName nomenclature, because to me name
suggests that the value is a string, not just an enum.  So I renamed
that to TimeoutId.


OK.


* I whacked around the logic in timeout.c a fair amount, because it
had race conditions if SIGALRM happened while enabling or disabling
a timeout.  I believe the attached coding is safe, but I'm not totally
happy with it from a performance standpoint, because it will do two
setitimer calls (a disable and then a re-enable) in many cases where
the old code did only one.


Disabling the deadlock timeout, a.k.a. disable_sig_alarm(false) in
the original code, called setitimer() twice if a statement timeout
was still in effect; that was done in CheckStatementTimeout().
Considering this, I think there is no performance problem with
the new code you came up with.


I think what we ought to do is go ahead and apply this, so that we
can have the API nailed down, and then we can revisit the internals
of timeout.c to see if we can't get the performance back up.
It's clearly a much cleaner design than the old spaghetti logic in
proc.c, so I think we ought to go ahead with this independently of
whether the second patch gets accepted.


There is one tiny bit you might have broken. You wrote previously:


I am also underwhelmed by the timeout_start callback function concept.
 In the first place, that has broken enable_timeout, which incorrectly
 assumes that the value it gets must be "now" (see its schedule_alarm
 call).  In the second place, it seems fairly likely that callers of
get_timeout_start would likewise want the clock time at which the
timeout was enabled, not the timeout_start reference time.  (If they
did want the latter, why couldn't they get it from wherever the callback
function had gotten it?)  I'm inclined to propose that we drop the
timeout_start concept and instead provide two functions for scheduling
interrupts:

enable_timeout_after(TimeoutName tn, int delay_ms);
enable_timeout_at(TimeoutName tn, TimestampTz fin_time);

where you use the former if you want the standard GetCurrentTimestamp +
n msec calculation, but if you want the stop time calculated in some
other way, you calculate it yourself and use the second function.
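The relationship between the two proposed entry points can be sketched in a self-contained way (the typedefs, the stub clock, and enable_timeout_at()'s body below are stand-ins, not the backend's real definitions):

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t TimestampTz;                 /* microseconds, as in the backend */
typedef enum { STATEMENT_TIMEOUT, DEADLOCK_TIMEOUT } TimeoutId;

#define TimestampTzPlusMilliseconds(tz, ms) ((tz) + (ms) * (TimestampTz) 1000)

static TimestampTz scheduled_fin_time;       /* records the last request */

static TimestampTz GetCurrentTimestamp(void) { return 1000000; } /* stub clock */

/* The explicit form: arm the timer for an absolute stop time.
 * A real implementation would schedule a SIGALRM via setitimer(). */
static void
enable_timeout_at(TimeoutId id, TimestampTz fin_time)
{
    (void) id;
    scheduled_fin_time = fin_time;
}

/* The convenience form: "now plus n milliseconds". */
static void
enable_timeout_after(TimeoutId id, int delay_ms)
{
    enable_timeout_at(id,
        TimestampTzPlusMilliseconds(GetCurrentTimestamp(), delay_ms));
}
```

A caller wanting a stop time computed some other way (say, from the statement start timestamp) would compute it itself and call enable_timeout_at() directly.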


It's okay, but you haven't really followed it with STATEMENT_TIMEOUT:

-8--8--8--8--8-
*** 2396,2404 
/* Set statement timeout running, if any */
/* NB: this mustn't be enabled until we are within an xact */
if (StatementTimeout > 0)
!   enable_sig_alarm(StatementTimeout, true);
else
!   cancel_from_timeout = false;

xact_started = true;
}
--- 2397,2405 
/* Set statement timeout running, if any */
/* NB: this mustn't be enabled until we are within an xact */
if (StatementTimeout > 0)
!   enable_timeout_after(STATEMENT_TIMEOUT, 
StatementTimeout);
else
!   disable_timeout(STATEMENT_TIMEOUT, false);

xact_started = true;
}
-8--8--8--8--8-

It means that StatementTimeout loses its precision. It would trigger
in the future counting from "now" instead of counting from
GetCurrentStatementStartTimestamp(). It should be:

enable_timeout_at(STATEMENT_TIMEOUT,
TimestampTzPlusMilliseconds(GetCurrentStatementStartTimestamp(), 
StatementTimeout));


I haven't really looked at the second patch yet, but at minimum that
will need some rebasing to match the API tweaks here.


Yes, I will do that.

Thanks for your 

Re: [HACKERS] initdb and fsync

2012-07-13 Thread Tom Lane
I wrote:
 I'm picking up this patch now.  What I'm inclined to do about the -N
 business is to commit without that, so that we get a round of testing
 in the buildfarm and find out about any portability issues, but then
 change to use -N after a week or so.  I agree that in the long run
 we don't want regression tests to run with fsyncs by default.

Committed without the -N in pg_regress (for now).  I also stuck
sync_file_range into the backend's pg_flush_data --- it would be
interesting to hear measurements of whether that makes a noticeable
difference for CREATE DATABASE.

regards, tom lane



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Boszormenyi Zoltan

On 2012-07-13 22:32, Boszormenyi Zoltan wrote:

On 2012-07-12 19:05, Tom Lane wrote:


I haven't really looked at the second patch yet, but at minimum that
will need some rebasing to match the API tweaks here.


Yes, I will do that.


While doing it, I discovered another bug you introduced.
enable_timeout_after(..., 0); would set an alarm instead of ignoring it.
Try SET deadlock_timeout = 0;

Same for enable_timeout_at(..., fin_time): if fin_time points to the past,
it enables a huge timeout that wouldn't possibly trigger for short
transactions but it's a bug nevertheless.



Thanks for your review and work.

Best regards,
Zoltán Böszörményi




--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] initdb and fsync

2012-07-13 Thread Jeff Davis
On Fri, 2012-07-13 at 15:21 -0400, Tom Lane wrote:
 No, that's incorrect: the fadvise is there, inside pg_flush_data() which
 is done during the writing phase.

Yeah, Andres pointed that out, also.

   It seems to me that if we think
 sync_file_range is a win, we ought to be using it in pg_flush_data just
 as much as in initdb.

It was pretty clearly a win for initdb, but I wasn't convinced it was a
good idea for other use cases. 

The mechanism is outlined in the email you linked below. To paraphrase,
fadvise tries to put the data in the io scheduler queue (still in the
kernel before going to the device), and gives up if there is no room;
sync_file_range waits for there to be room if none is available.

For the case of initdb, the data needing to be fsync'd is effectively
constant, and it's a lot of small files. If the requests don't make it
to the io scheduler queue before fsync, the kernel doesn't have an
opportunity to schedule them properly.

But for larger amounts of data copying (like ALTER DATABASE SET
TABLESPACE), it seemed like there was more risk that sync_file_range
would starve out other processes by continuously filling up the io
scheduler queue (I'm not sure if there are protections against that or
not). Also, if the files are larger, a single fsync represents more
data, and the kernel may be able to schedule it well enough anyway.

I'm not an authority in this area though, so if you are comfortable
extending sync_file_range to copydir() as well, that's fine with me.
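A minimal sketch of the writeback hint discussed here, assuming Linux for sync_file_range() and falling back to posix_fadvise() elsewhere (an illustration of the trade-off, not the backend's actual pg_flush_data()):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Ask the kernel to start writeback for a byte range we just wrote.
 * SYNC_FILE_RANGE_WRITE submits the pages and can block for queue
 * room, which is what makes it more reliable than the fadvise hint. */
static int
flush_data_hint(int fd, off_t offset, off_t nbytes)
{
#if defined(__linux__)
    return sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE);
#else
    /* best-effort: may be a no-op if the request queue is full */
    return posix_fadvise(fd, offset, nbytes, POSIX_FADV_DONTNEED);
#endif
}
```

The design point matches Jeff's description: the fadvise path never waits, so under pressure its hint is simply dropped, while the sync_file_range path queues the pages even if it has to block briefly.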

 However, I'm a bit confused because in
 http://archives.postgresql.org/pgsql-hackers/2012-03/msg01098.php
 you said
 
  So, it looks like fadvise is the right thing to do, but I expect we'll
 
 Was that backwards?  If not, why are we bothering with taking any
 portability risks by adding use of sync_file_range?

That part of the email was less than conclusive, and I can't remember
exactly what point I was trying to make. sync_file_range is a big
practical win for the reasons I mentioned above (and also avoids some
unpleasant noises coming from the disk drive).

Regards,
Jeff Davis




Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Tom Lane
Boszormenyi Zoltan z...@cybertec.at writes:
 It means that StatementTimeout loses its precision. It would trigger
 in the future counting from "now" instead of counting from
 GetCurrentStatementStartTimestamp().

That is, in fact, better not worse.  Note the comment in the existing
code:

 * Begin statement-level timeout
 *
 * Note that we compute statement_fin_time with reference to the
 * statement_timestamp, but apply the specified delay without any
 * correction; that is, we ignore whatever time has elapsed since
 * statement_timestamp was set.  In the normal case only a small
 * interval will have elapsed and so this doesn't matter, but there
 * are corner cases (involving multi-statement query strings with
 * embedded COMMIT or ROLLBACK) where we might re-initialize the
 * statement timeout long after initial receipt of the message. In
 * such cases the enforcement of the statement timeout will be a bit
 * inconsistent.  This annoyance is judged not worth the cost of
 * performing an additional gettimeofday() here.

That is, measuring from GetCurrentStatementStartTimestamp is a hack to
save one gettimeofday call, at the cost of making the timeout less
accurate, sometimes significantly so.  In the new model there isn't any
good way to duplicate this kluge (in particular, there's no point in
using enable_timeout_at, because that will still make a gettimeofday
call).  That doesn't bother me too much.  I'd rather try to buy back
whatever performance was lost by seeing if we can reduce the number of
setitimer calls.

regards, tom lane



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Tom Lane
Boszormenyi Zoltan z...@cybertec.at writes:
 While doing it, I discovered another bug you introduced.
 enable_timeout_after(..., 0); would set an alarm instead of ignoring it.
 Try SET deadlock_timeout = 0;

Hm.  I don't think it's a bug for enable_timeout_after(..., 0) to cause
a timeout ... but we'll have to change the calling code.  Thanks for
catching that.

 Same for enable_timeout_at(..., fin_time): if fin_time points to the past,
 it enables a huge timeout

No, it should cause an immediate interrupt, or at least after 1
microsecond.  Look at TimestampDifference.

regards, tom lane



Re: [HACKERS] initdb and fsync

2012-07-13 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 For the case of initdb, the data needing to be fsync'd is effectively
 constant, and it's a lot of small files. If the requests don't make it
 to the io scheduler queue before fsync, the kernel doesn't have an
 opportunity to schedule them properly.

 But for larger amounts of data copying (like ALTER DATABASE SET
 TABLESPACE), it seemed like there was more risk that sync_file_range
 would starve out other processes by continuously filling up the io
 scheduler queue (I'm not sure if there are protections against that or
 not). Also, if the files are larger, a single fsync represents more
 data, and the kernel may be able to schedule it well enough anyway.

 I'm not an authority in this area though, so if you are comfortable
 extending sync_file_range to copydir() as well, that's fine with me.

It could use some performance testing, which I don't have the time
for (or proper equipment).  Anyone?

Also, I note that copy_file is set up so that it does sync_file_range or
fadvise for each 64K chunk of data, which seems mighty small.  I wonder
if it'd be better to take that out of the loop and do one whole-file
advise at the end of the copy loop.  Or at least use some larger
granularity for those calls.
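The effect of raising the hint granularity can be sketched with simple arithmetic (illustrative only; the real copy_file() does actual I/O, and the 1MB figure is an assumption):

```c
#include <assert.h>
#include <stddef.h>

/* Count how many flush hints would be issued when per-64K writes are
 * aggregated into 1MB flush regions, per Tom's suggestion.  Pure
 * bookkeeping sketch; no I/O is performed. */
#define COPY_BUF_SIZE     (64 * 1024)
#define FLUSH_GRANULARITY (1024 * 1024)

static int
flush_calls_for(size_t file_size)
{
    size_t offset = 0, pending = 0;
    int    calls = 0;

    while (offset < file_size)
    {
        size_t chunk = file_size - offset;

        if (chunk > COPY_BUF_SIZE)
            chunk = COPY_BUF_SIZE;
        offset += chunk;
        pending += chunk;
        if (pending >= FLUSH_GRANULARITY)   /* one hint per 1MB region */
        {
            calls++;
            pending = 0;
        }
    }
    if (pending > 0)        /* final partial region */
        calls++;
    return calls;
}
```

For a 10MB file this issues 10 hints instead of the 160 that per-64K hinting would.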

regards, tom lane



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Boszormenyi Zoltan

On 2012-07-13 23:51, Tom Lane wrote:

Boszormenyi Zoltan z...@cybertec.at writes:

While doing it, I discovered another bug you introduced.
enable_timeout_after(..., 0); would set an alarm instead of ignoring it.
Try SET deadlock_timeout = 0;

Hm.  I don't think it's a bug for enable_timeout_after(..., 0) to cause
a timeout ... but we'll have to change the calling code.  Thanks for
catching that.


You're welcome. This caused a segfault in my second patch;
the code didn't expect enable_timeout_after(..., 0)
to set up a timer.

So, the calling code should check for the value and not call
enable_timeout_*() when it shouldn't, right? It's making the code
more obvious for the casual reader, I agree it's better that way.

Will you post a new version with callers checking their *Timeout settings
or commit it with this change? I can then post a new second patch.

Regarding the lock_timeout functionality: the patch can be reduced to
about half of its current size and it would be a lot less intrusive if the
LockAcquire() callers don't need to report the individual object types
and names or OIDs. Do you prefer the verbose ereport()s or a
generic one about "lock timeout triggered" in ProcSleep()?


Same for enable_timeout_at(..., fin_time): if fin_time points to the past,
it enables a huge timeout

No, it should cause an immediate interrupt, or at least after 1
microsecond.  Look at TimestampDifference.


Okay.

--
--
Zoltán Böszörményi
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Alvaro Herrera

Excerpts from Boszormenyi Zoltan's message of Fri Jul 13 18:11:27 -0400 2012:

 Regarding the lock_timeout functionality: the patch can be reduced to
 about half of its current size and it would be a lot less intrusive if the
 LockAcquire() callers don't need to report the individual object types
 and names or OIDs. Do you prefer the verbose ereport()s or a
 generic one about lock timeout triggered in ProcSleep()?

For what it's worth, I would appreciate it if you would post the lock
timeout patch for the upcoming commitfest.  This one is already almost a
month long now.  That way we can close this CF item soon and concentrate
on the remaining patches.  This one has received its fair share of
committer attention already, ISTM.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Tom Lane
Boszormenyi Zoltan z...@cybertec.at writes:
 Try SET deadlock_timeout = 0;

Actually, when I try that I get

ERROR:  0 is outside the valid range for parameter "deadlock_timeout" (1 .. 2147483647)

So I don't see any bug here.  The places that are unconditionally doing
enable_timeout_after(..., DeadlockTimeout); are perfectly fine.  The
only place that might need an if-test has already got one:

  if (StatementTimeout > 0)
! enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);
  else
! disable_timeout(STATEMENT_TIMEOUT, false);


regards, tom lane



Re: [HACKERS] isolation check takes a long time

2012-07-13 Thread Alvaro Herrera

Excerpts from Andrew Dunstan's message of Fri Jul 13 16:05:37 -0400 2012:
 Why does the isolation check take such a long time? On some of my slower 
 buildfarm members I am thinking of disabling it because it takes so 
 long. This single test typically takes longer than a full serial 
 standard regression test. Is there any way we could make it faster?

I think the prepared transactions test is the one that takes the
longest.  Which is a shame when prepared xacts are not enabled, because
all it does is throw millions of "prepared transactions are not enabled"
errors.  There is one other test that takes very long because it commits
a large number of transactions.  I found it to be much faster if run
with fsync disabled.

Maybe it'd be a good idea to disable fsync on buildfarm runs, if we
don't already do so?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [PATCH] lock_timeout and common SIGALRM framework

2012-07-13 Thread Tom Lane
Alvaro Herrera alvhe...@commandprompt.com writes:
 For what it's worth, I would appreciate it if you would post the lock
 timeout patch for the upcoming commitfest.

+1.  I think it's reasonable to get the infrastructure patch in now,
but we are running out of time in this commitfest (and there's still
a lot of patches that haven't been reviewed at all).

regards, tom lane



Re: [HACKERS] initdb and fsync

2012-07-13 Thread Jeff Davis
On Fri, 2012-07-13 at 17:35 -0400, Tom Lane wrote:
 I wrote:
  I'm picking up this patch now.  What I'm inclined to do about the -N
  business is to commit without that, so that we get a round of testing
  in the buildfarm and find out about any portability issues, but then
  change to use -N after a week or so.  I agree that in the long run
  we don't want regression tests to run with fsyncs by default.
 
 Committed without the -N in pg_regress (for now).  I also stuck
 sync_file_range into the backend's pg_flush_data --- it would be
 interesting to hear measurements of whether that makes a noticeable
 difference for CREATE DATABASE.

Thank you.

One point about the commit message: fadvise does not block to go into
the request queue, sync_file_range does. The problem with fadvise is
that, when the request queue is small, it fills up so fast that most of
the requests never make it in, and fadvise is essentially a no-op.
sync_file_range waits for room in the queue, which is (based on my
tests) enough to improve the scheduling a lot.

Regards,
Jeff Davis




Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Jose Ildefonso Camargo Tolosa
On Fri, Jul 13, 2012 at 12:25 AM, Amit Kapila amit.kap...@huawei.com wrote:

 From: pgsql-hackers-ow...@postgresql.org
 [mailto:pgsql-hackers-ow...@postgresql.org]
 On Behalf Of Jose Ildefonso Camargo Tolosa
On Thu, Jul 12, 2012 at 9:28 AM, Aidan Van Dyk ai...@highrise.ca wrote:
 On Thu, Jul 12, 2012 at 9:21 AM, Shaun Thomas stho...@optionshouse.com
 wrote:


 As currently is, the point of: freezing the master because standby
 dies is not good for all cases (and I dare say: for most cases), and
 having to wait for pacemaker or other monitoring to note that, change
 master config and reload... it will cause a service disruption! (for
 several seconds, usually, ~30 seconds).

 Yes, it is true that it can cause service disruption, but the same will be
 true even if the master detects that internally by having a timeout.
 By keeping this external, the current behavior of PostgreSQL can be
 maintained: if there is no standby in sync mode, it will wait, and the
 purpose is still served because a message can be sent to the master
 externally.


How does PostgreSQL currently detect that its main synchronous
standby went away, and switch to another synchronous standby from the
synchronous_standby_names config parameter?

The same logic could be applied to "no more synchronous standbys": go
into standalone mode (optionally).

--
Ildefonso Camargo
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579



Re: [HACKERS] initdb and fsync

2012-07-13 Thread Tom Lane
Jeff Davis pg...@j-davis.com writes:
 One point about the commit message: fadvise does not block to go into
 the request queue, sync_file_range does. The problem with fadvise is
 that, when the request queue is small, it fills up so fast that most of
 the requests never make it in, and fadvise is essentially a no-op.
 sync_file_range waits for room in the queue, which is (based on my
 tests) enough to improve the scheduling a lot.

I see.  I misunderstood your previous message.  In that case, it seems
quite likely that it might be helpful if copy_file were to aggregate
the fadvise/sync_file_range calls over larger pieces of the file.
(I'm assuming that the request queue isn't bright enough to aggregate
by itself, though that might be wrong.)

regards, tom lane



Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Jose Ildefonso Camargo Tolosa
Hi Hampus,

On Fri, Jul 13, 2012 at 2:42 AM, Hampus Wessman ham...@hampuswessman.se wrote:
 Hi all,

 Here are some (slightly too long) thoughts about this.

Nah, not that long.


 On 2012-07-12 22:40, Shaun Thomas wrote:

 On 07/12/2012 12:02 PM, Bruce Momjian wrote:

 Well, the problem also exists if add it as an internal database
 feature --- how long do we wait to consider the standby dead, how do
 we inform administrators, etc.


 True. Though if there is no secondary connected, either because it's not
 there yet, or because it disconnected, that's an easy check. It's the
 network lag/stall detection that's tricky.


 It is indeed tricky to detect this. If you don't get an (immediate) reply
 from the secondary (and you never do!), then all you can do is wait and
 *eventually* (after how long? 250ms? 10s?) assume that there is no
 connection between them. The conclusion may very well be wrong sometimes. A
 second problem is that we still don't know if this is caused by some kind of
 network problems or if it's caused by the secondary not running. It's
 perfectly possible that both servers are working, but just can't communicate
 at the moment.

How about: the same logic it currently uses to detect when the
designated synchronous standby is no longer there and move on to
the next one in synchronous_standby_names?

The rule to *know* that a standby went away is already there.


 The thing is that what we do next (at least if our data is important and why
 otherwise use synchronous replication of any kind...) depends on what *did*
 happen. Assume that we have two database servers. At any time we need at
 most one primary database to be running. Without that requirement our data
 can get messed up completely... If HA is important to us, we may choose to

Not necessarily, but true: that's why you kill the (failing?)
node on promotion of the standby, just in case.

 do a failover to the secondary (and live without replication for the moment)
 if the primary fails. With synchronous repliction, we can do this without
 losing any data. If the secondary also dies, then we do lose data (and we'll
 know it!), but it might be an acceptable risk. If the secondary isn't
 permanently damaged, then we might even be able to get the data back after
 some down time. Ok, so that's one way to reconfigure the database servers on
 a failure. If the secondary fails instead, then we can do similarly and
 remove it from the cluster (or in other words, disable synchronous
 replication to the secondary). Again, we don't lose any data by doing this.

Right, but you have to monitor the standby too! I.e., more work on the
pacemaker side, and non-trivial work: for example, just blowing
away the standby won't do any good here. As for the master, you can
just power it off, promote the standby, and be done with it! If the
standby fails, you have to modify the master's config and reload configs
there... more code: more chances of failure.

 We're taking a certain risk, however. We can't safely do a failover to the
 secondary anymore... So if the primary fails now, then the only way not to
 lose data is to hope that we can get it back from the failed machine (the
 failure may be temporary).

 There's also the third possibility, of course, that the two servers are both
 up and running, but they can't communicate over the network at the moment
 (this is, by the way, a difference from RAID, I guess). What do we do then?

Kill the failing node, just in case. In this case, without the
extra work of monitoring the standby, you would just make the standby
kill the master before promoting itself.

 Well, we still need at most one primary database server. We'll have to
 (somehow, which doesn't matter as much) decide which database to keep and
 consider the other one down. Then we can just do as above (with all the

This is arbitrary; we usually just assume the master is failing
when the standby is healthy (from the standby's point of view).

 same implications!). Is it always a good idea to keep the primary? No! What
 if you (as a stupid example) pull the network cable from the primary (or
 maybe turn off a switch so that it's isolated from most of the network)? In

That means you failed to have redundant connectivity to the
standby (that is a must in clusters), yes, a redundant switch too: with
smart switches in the US$100 range now, there is not much excuse for
not having 2 switches connecting your cluster (and if you have just 2
nodes, you just need 2 network interfaces and 2 network cables).

 that case you probably want the secondary to take over instead. At least if
 you value service availability. At this point we can still do a safe
 failover too.

 My point here is that if HA is important to you, then you may very well want
 to disable synchronous replication on a failure to avoid down time, but this
 has to be integrated with your overall failover / cluster management
 solution. Just having the primary automatically disable 

Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Jose Ildefonso Camargo Tolosa
On Fri, Jul 13, 2012 at 10:22 AM, Bruce Momjian br...@momjian.us wrote:
 On Fri, Jul 13, 2012 at 09:12:56AM +0200, Hampus Wessman wrote:
 How you decide what to do with the servers on failures isn't that
 important here, really. You can probably run e.g. Pacemaker on 3+
 machines and have it check for quorums to accomplish this. That's a
 good approach at least. You can still have only 2 database servers
 (for cost reasons), if you want. PostgreSQL could have all this
 built-in, but I don't think it sounds overly useful to only be able
 to disable synchronous replication on the primary after a timeout.
 Then you can never safely do a failover to the secondary, because
 you can't be sure synchronous replication was active on the failed
 primary...

 So how about this for a Postgres TODO:

 Add configuration variable to allow Postgres to disable synchronous
 replication after a specified timeout, and add variable to alert
 administrators of the change.

I agree we need a TODO for this, but... I think timeout-only is not
the best choice. There should be a maximum timeout (as a last
resort: the maximum time we are willing to wait for the standby; this
has to have the option of forever), but PostgreSQL certainly has
to detect the *complete* disconnection of the standby (or of all
standbys in synchronous_standby_names). If it detects that no standbys
are eligible as sync standbys AND the option to fall back to async is
enabled => it will go into standalone mode (as if
synchronous_standby_names were empty); otherwise (if the option is
disabled) it will just continue to wait forever (the last-resort
timeout is ignored if the fallback option is disabled). I would
call this soft_synchronous_standby and
soft_synchronous_standby_timeout (in seconds, 0=forever; a sane
value would be ~5 seconds) or something like that (I'm quite bad at
picking names :( ).
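
To make the proposal concrete, here is what it might look like in postgresql.conf. Note that soft_synchronous_standby and soft_synchronous_standby_timeout are hypothetical parameters sketched from the names suggested above; they do not exist in PostgreSQL:

```
# synchronous_standby_names is an existing GUC: which standbys may be sync.
synchronous_standby_names = 'standby1'

# Proposed (hypothetical) parameters -- not actual PostgreSQL GUCs:
soft_synchronous_standby = on          # allow fallback to standalone/async mode
soft_synchronous_standby_timeout = 5   # seconds to wait for a standby before
                                       # falling back; 0 = wait forever
```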

--
Ildefonso Camargo
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Amit kapila
From: pgsql-hackers-ow...@postgresql.org [pgsql-hackers-ow...@postgresql.org] 
on behalf of Jose Ildefonso Camargo Tolosa [ildefonso.cama...@gmail.com]
Sent: Saturday, July 14, 2012 6:08 AM
On Fri, Jul 13, 2012 at 10:22 AM, Bruce Momjian br...@momjian.us wrote:
 On Fri, Jul 13, 2012 at 09:12:56AM +0200, Hampus Wessman wrote:

 So how about this for a Postgres TODO:

 Add configuration variable to allow Postgres to disable synchronous
 replication after a specified timeout, and add variable to alert
 administrators of the change.

 I agree we need a TODO for this, but... I think timeout-only is not
 the best choice. There should be a maximum timeout (as a last
 resort: the maximum time we are willing to wait for the standby; this
 has to have the option of forever), but PostgreSQL certainly has
 to detect the *complete* disconnection of the standby (or of all
 standbys in synchronous_standby_names). If it detects that no standbys
 are eligible as sync standbys AND the option to fall back to async is
 enabled => it will go into standalone mode (as if
 synchronous_standby_names were empty); otherwise (if the option is
 disabled) it will just continue to wait forever (the last-resort
 timeout is ignored if the fallback option is disabled). I would
 call this soft_synchronous_standby and
 soft_synchronous_standby_timeout (in seconds, 0=forever; a sane
 value would be ~5 seconds) or something like that (I'm quite bad at
 picking names :( ).

After it has gone into standalone mode, if the standby comes back, will it be
able to return to sync mode with it?
If not, then won't it break the current behavior? Currently, I think, in
freeze mode, if the standby comes back, sync replication
can start again.

With Regards,
Amit Kapila. 


Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Jose Ildefonso Camargo Tolosa
On Fri, Jul 13, 2012 at 11:12 PM, Amit kapila amit.kap...@huawei.com wrote:
 From: pgsql-hackers-ow...@postgresql.org [pgsql-hackers-ow...@postgresql.org] 
 on behalf of Jose Ildefonso Camargo Tolosa [ildefonso.cama...@gmail.com]
 Sent: Saturday, July 14, 2012 6:08 AM
 On Fri, Jul 13, 2012 at 10:22 AM, Bruce Momjian br...@momjian.us wrote:
 On Fri, Jul 13, 2012 at 09:12:56AM +0200, Hampus Wessman wrote:

 So how about this for a Postgres TODO:

 Add configuration variable to allow Postgres to disable synchronous
 replication after a specified timeout, and add variable to alert
 administrators of the change.

 I agree we need a TODO for this, but... I think timeout-only is not
 the best choice. There should be a maximum timeout (as a last
 resort: the maximum time we are willing to wait for the standby; this
 has to have the option of forever), but PostgreSQL certainly has
 to detect the *complete* disconnection of the standby (or of all
 standbys in synchronous_standby_names). If it detects that no standbys
 are eligible as sync standbys AND the option to fall back to async is
 enabled => it will go into standalone mode (as if
 synchronous_standby_names were empty); otherwise (if the option is
 disabled) it will just continue to wait forever (the last-resort
 timeout is ignored if the fallback option is disabled). I would
 call this soft_synchronous_standby and
 soft_synchronous_standby_timeout (in seconds, 0=forever; a sane
 value would be ~5 seconds) or something like that (I'm quite bad at
 picking names :( ).

 After it has gone into standalone mode, if the standby comes back, will it be
 able to return to sync mode with it?

That's the idea, yes: after the standby comes back, the master would
act as if the sync standby had connected for the first time, first going
through the catchup mode, and once the lag between standby and
primary reaches zero (...) we move to real-time streaming state
(from the 9.1 docs); at that point, normal sync behavior is restored.
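
The transitions being discussed can be sketched as a toy state machine. This is only an illustration of the proposal, not PostgreSQL code; the state names and the Master class are inventions for the sketch:

```python
# Toy sketch of the proposed master-side behavior, based on the thread:
# SYNC -> STANDALONE when all sync standbys are lost and fallback is enabled,
# then CATCHUP -> SYNC when a standby reconnects and its lag reaches zero.

SYNC, STANDALONE, CATCHUP = "sync", "standalone", "catchup"

class Master:
    def __init__(self, fallback_enabled, timeout_s=5):
        self.fallback_enabled = fallback_enabled
        self.timeout_s = timeout_s  # 0 would mean "wait forever"
        self.state = SYNC

    def standby_lost(self, waited_s):
        # All standbys in synchronous_standby_names have disconnected.
        if self.fallback_enabled and waited_s >= self.timeout_s:
            # Act as if synchronous_standby_names were empty.
            self.state = STANDALONE
        # Otherwise: commits keep waiting; state stays SYNC.

    def standby_reconnected(self):
        if self.state == STANDALONE:
            self.state = CATCHUP  # stream WAL until the lag reaches zero

    def lag_reached_zero(self):
        if self.state == CATCHUP:
            self.state = SYNC  # normal synchronous behavior restored

m = Master(fallback_enabled=True, timeout_s=5)
m.standby_lost(waited_s=6)
print(m.state)  # standalone
m.standby_reconnected()
m.lag_reached_zero()
print(m.state)  # sync
```

With fallback disabled, standby_lost never leaves SYNC, which matches the "wait forever" behavior described above.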

--
Ildefonso Camargo
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579



Re: [HACKERS] Synchronous Standalone Master Redoux

2012-07-13 Thread Amit kapila
 From: Jose Ildefonso Camargo Tolosa [ildefonso.cama...@gmail.com]
 Sent: Saturday, July 14, 2012 9:36 AM
On Fri, Jul 13, 2012 at 11:12 PM, Amit kapila amit.kap...@huawei.com wrote:
 From: pgsql-hackers-ow...@postgresql.org [pgsql-hackers-ow...@postgresql.org] 
 on behalf of Jose Ildefonso Camargo Tolosa [ildefonso.cama...@gmail.com]
 Sent: Saturday, July 14, 2012 6:08 AM
 On Fri, Jul 13, 2012 at 10:22 AM, Bruce Momjian br...@momjian.us wrote:
 On Fri, Jul 13, 2012 at 09:12:56AM +0200, Hampus Wessman wrote:

 So how about this for a Postgres TODO:

 Add configuration variable to allow Postgres to disable synchronous
 replication after a specified timeout, and add variable to alert
 administrators of the change.

 I agree we need a TODO for this, but... I think timeout-only is not
 the best choice. There should be a maximum timeout (as a last
 resort: the maximum time we are willing to wait for the standby; this
 has to have the option of forever), but PostgreSQL certainly has
 to detect the *complete* disconnection of the standby (or of all
 standbys in synchronous_standby_names). If it detects that no standbys
 are eligible as sync standbys AND the option to fall back to async is
 enabled => it will go into standalone mode (as if
 synchronous_standby_names were empty); otherwise (if the option is
 disabled) it will just continue to wait forever (the last-resort
 timeout is ignored if the fallback option is disabled). I would
 call this soft_synchronous_standby and
 soft_synchronous_standby_timeout (in seconds, 0=forever; a sane
 value would be ~5 seconds) or something like that (I'm quite bad at
 picking names :( ).

 After it has gone into standalone mode, if the standby comes back, will it be
 able to return to sync mode with it?

 That's the idea, yes: after the standby comes back, the master would
 act as if the sync standby had connected for the first time, first going
 through the catchup mode, and once the lag between standby and
 primary reaches zero (...) we move to real-time streaming state
 (from the 9.1 docs); at that point, normal sync behavior is restored.

Idea-wise, it looks okay, but are you sure that the current code/design
can handle it the way you are suggesting?
I am not sure it can work, because it might be the case that, due to network
instability, the master has gone into standalone mode,
and now, after the standby is able to communicate back, it might be expecting
to get more data rather than go into catchup mode.
I believe some person who is an expert in this code area can comment here to
make it more concrete.

With Regards,
Amit Kapila.