Re: [HACKERS] Has anybody been able to install Pg 9.3 beta .debs on Ubuntu 12.04

2013-08-07 Thread Ryan Kelly
On Wed, Aug 07, 2013 at 07:29:32PM -0400, Hannu Krosing wrote:
> Has anybody been able to install Pg 9.3 beta .debs on Ubuntu 12.04
Yes.

> when I try to select postgresql-9.3 (9.3~beta2-2.pgdg12.4+1) for
> installation I get error saying
> 
> ...
> The following packages have unmet dependencies:
>  postgresql-9.3 : Depends: postgresql-client-9.3 but it is not going to
> be installed
> E: Unable to correct problems, you have held broken packages.
I seem to recall that not adding "9.3" explicitly to the sources.list line
causes this problem. Is this your sources.list line:

deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main 9.3

This is recommended by:
https://wiki.postgresql.org/wiki/Apt/FAQ#I_want_to_try_the_beta_version_of_the_next_PostgreSQL_release

-Ryan




Re: [HACKERS] RETURNING syntax for COPY

2013-05-08 Thread Ryan Kelly
On Wed, May 08, 2013 at 10:55:40AM -0700, David Fetter wrote:
> On Wed, May 08, 2013 at 01:16:14PM -0400, Tom Lane wrote:
> > Heikki Linnakangas  writes:
> > > On 08.05.2013 19:44, Tom Lane wrote:
> > >> No there isn't; what you suggest would require FE/BE protocol
> > >> extensions, making it several orders of magnitude more work than the
> > >> other thing.
> > 
> > > I'd imagine that the flow would go something like this:
> > 
> > > BE    FE
> > 
> > > CopyInResponse
> > >   CopyData
> > >   CopyData
> > >   ...
> > >   CopyDone
> > > RowDescription
> > > DataRow
> > > DataRow
> > > CommandComplete
> > 
> > That would require the backend to buffer the entire query response,
> > which isn't a great idea.  I would expect that such an operation would
> > need to interleave CopyData to the backend with DataRow responses.  Such
> > a thing could possibly be built on COPY_BOTH mode, but it would be a lot
> > of work (at both ends) for extremely debatable value.
> > 
> > The general idea of COPY is to load data as fast as possible,
> 
> With utmost respect, that is one of several use cases, and any change
> would need to keep that use case unburdened.  A sometimes overlapping
> set of use cases move data in and out of the database in a simple
> manner.  In some of these, people might wish to trade some performance
> for the feature.

99% of my uses of COPY at work are as a general data import and export
facility. I often find myself loading CSV files into our database for
analysis and further cleanup, and then using COPY to output query results
as CSV files for consumption by other members of the business.
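
For context, a typical day looks something like this (a sketch; the table,
column, and file names are made up):

CREATE TABLE import_raw (id text, amount text, created text);

-- import: load a vendor file for analysis and cleanup
COPY import_raw FROM '/tmp/vendor_feed.csv' WITH (FORMAT csv, HEADER);

-- export: hand query results back to the business as CSV
COPY (SELECT id, sum(amount::numeric) AS total
        FROM import_raw
       GROUP BY id)
  TO '/tmp/totals.csv' WITH (FORMAT csv, HEADER);

(In practice psql's client-side \copy does the same job when I do not have
filesystem access on the server.)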

The recent work on (PRE|POST)PROCESSOR options for COPY indicates that
users are not merely using COPY to "load data as fast as possible".

Other discussions around a COMPRESSED option are about more than just a
performance enhancement, in my view: I often receive files compressed, and
decompressing the data is just another step standing between me and getting
the data into the database.
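
To illustrate the kind of workflow this would streamline, here is roughly
what it looks like with COPY's FROM PROGRAM option (new in 9.3); a sketch
reusing the made-up import_raw table from above:

COPY import_raw FROM PROGRAM 'gzip -dc /tmp/vendor_feed.csv.gz'
    WITH (FORMAT csv, HEADER);

A COMPRESSED option would make the decompression part of COPY itself rather
than something I have to remember to bolt on.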

Additionally, once I have the data imported, I often take many steps to
clean up and format the data before applying actual typing to a table
(which invariably fails due to invalid dates and other nonsense).

COPY ... RETURNING would certainly be useful to apply additional
transformations to the data before finally sending it to its ultimate
destination.
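
Concretely, today the cleanup-then-type step looks something like this (a
sketch reusing the made-up import_raw table from above; the payments table
and the cleanup rules are also made up). COPY ... RETURNING would let the
final statement consume rows straight from the COPY instead of going
through a staging table first:

-- keep only rows with a numeric id, apply real types, and keep the ids
-- for the next step of the load
INSERT INTO payments (id, created, amount)
SELECT id::integer,
       nullif(created, '')::date,
       amount::numeric
  FROM import_raw
 WHERE id ~ '^[0-9]+$'
RETURNING id;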

> A particular example would be one where there are several tables to be
> loaded, some with generated columns that the future ones would depend
> on.  Yes, it's possible (kinda) to do this with the FDW machinery, but
> the burden is much higher as it requires DDL permission in general
> each time.

I find using the FDW machinery to perform many queries to be much slower
than importing the data once and then running my queries. There is also
no ability to use indexes.
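
My usual workaround is therefore to materialize the foreign table into a
plain table once and index that copy (a sketch; remote_events and event_id
are made-up names):

CREATE TABLE local_events AS SELECT * FROM remote_events;
CREATE INDEX local_events_event_id_idx ON local_events (event_id);

All subsequent queries then run against local_events with index support.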

> > so weighing it down with processing options seems like a pretty
> > dubious idea even if the implementation were easy.
> 
> Totally agreed that the "fast load/unload" code path must not be
> affected by any such changes.

Agreed here as well.

-Ryan P. Kelly





Re: [HACKERS] RETURNING syntax for COPY

2013-05-08 Thread Ryan Kelly
On Wed, May 08, 2013 at 03:38:10PM -0400, Andrew Dunstan wrote:
> 
> On 05/08/2013 03:23 PM, Jim Nasby wrote:
> >>WITH new_data AS (
> >>COPY FROM ...
> >>RETURNING id, field_to_check
> >>)
> >
> 
> Why is this better than this, which you can do today?
> 
>WITH new_data AS (
> INSERT into ... FROM foreign_table_with_file_fdw RETURNING ...
>)
> 
> 
> The whole reason I abandoned trying to do this sort of thing with
> COPY was that I realized the FDW would provide what I wanted.

You need to first CREATE EXTENSION file_fdw. Then you need to CREATE
SERVER. Then CREATE FOREIGN TABLE. All of that requires the appropriate
permissions, and it certainly has no hope of working on the client side.
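
Spelled out, the dance looks something like this (a sketch with made-up
names and paths), and every one of those statements needs the corresponding
privilege, which is exactly the problem:

CREATE EXTENSION file_fdw;

CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE incoming_data (id integer, name text)
    SERVER csv_files
    OPTIONS (filename '/tmp/incoming.csv', format 'csv', header 'true');

WITH new_data AS (
    INSERT INTO target_table SELECT * FROM incoming_data RETURNING id
)
SELECT count(*) FROM new_data;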

-Ryan P. Kelly





Re: [HACKERS] psql sets up cancel handler very early

2013-05-14 Thread Ryan Kelly
I submitted essentially this same patch over a year ago and Tom vetoed
it: http://www.postgresql.org/message-id/3741.1325731...@sss.pgh.pa.us

The thread moved to -hackers at some point and I made some further
enhancements: 
http://www.postgresql.org/message-id/20120108201802.ga31...@llserver.lakeliving.com

Never went anywhere though.

-Ryan P. Kelly

On Tue, May 14, 2013 at 11:35:36AM -0400, Peter Eisentraut wrote:
> Sometimes, the psql startup hangs when it cannot resolve or connect to a
> host.  Intuitively, I would like to press Ctrl+C and correct the
> connection string or investigate.  But that doesn't work because Ctrl+C
> is already bound to the query cancel handler by that time.
> 
> It seems to me that there is no point in setting up the cancel handler
> before the database connection is established.  Example patch attached.
>  Comments?

> diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
> index 5d7fe6e..65a1cde 100644
> --- a/src/bin/psql/startup.c
> +++ b/src/bin/psql/startup.c
> @@ -111,8 +111,6 @@ static void parse_psql_options(int argc, char *argv[],
>   setvbuf(stderr, NULL, _IONBF, 0);
>  #endif
>  
> - setup_cancel_handler();
> -
>   pset.progname = get_progname(argv[0]);
>  
>   pset.db = NULL;
> @@ -249,6 +247,8 @@ static void parse_psql_options(int argc, char *argv[],
>   exit(EXIT_BADCONN);
>   }
>  
> + setup_cancel_handler();
> +
>   PQsetNoticeProcessor(pset.db, NoticeProcessor, NULL);
>  
>   SyncVariables();






Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-04-30 Thread Ryan Kelly
On Sun, Apr 29, 2012 at 10:12:40PM -0400, Alvaro Herrera wrote:
> 
> Excerpts from Ryan Kelly's message of Sat Jan 14 16:22:21 -0300 2012:
> 
> > I have attached a new patch which handles the connect_timeout option by
> > adding a PQconnectTimeout(conn) function to access the connect_timeout
> > which I then use to retrieve the existing value from the old connection.
> 
> Was this patch dropped entirely?  If not and it hasn't been committed
> yet, I think it belongs in the open CF here:
> https://commitfest.postgresql.org/action/commitfest_view?id=14
Needs some freshening if anyone still wants it. Update against latest
HEAD attached.

> 
> -- 
> Álvaro Herrera 
> The PostgreSQL Company - Command Prompt, Inc.
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-Ryan Kelly
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 8a820ac..bf4d110 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1479,6 +1479,24 @@ char *PQoptions(const PGconn *conn);
   
  
 
+
+
+ 
+  PQconnectTimeout
+  
+   PQconnectTimeout
+  
+ 
+
+ 
+  
+   Returns the connect_timeout property as given to libpq.
+
+char *PQconnectTimeout(const PGconn *conn);
+
+  
+ 
+

   
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index dd59aa1..90dfe13 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1504,7 +1504,7 @@ static bool
 do_connect(char *dbname, char *user, char *host, char *port)
 {
 	PGconn	   *o_conn = pset.db,
-			   *n_conn;
+			   *n_conn = NULL;
 	char	   *password = NULL;
 
 	if (!dbname)
@@ -1537,7 +1537,7 @@ do_connect(char *dbname, char *user, char *host, char *port)
 
 	while (true)
 	{
-#define PARAMS_ARRAY_SIZE	8
+#define PARAMS_ARRAY_SIZE	9
 		const char **keywords = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*keywords));
 		const char **values = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*values));
 
@@ -1555,17 +1555,120 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		values[5] = pset.progname;
 		keywords[6] = "client_encoding";
 		values[6] = (pset.notty || getenv("PGCLIENTENCODING")) ? NULL : "auto";
-		keywords[7] = NULL;
-		values[7] = NULL;
+		keywords[7] = "connect_timeout";
+		values[7] = PQconnectTimeout(o_conn);
+		keywords[8] = NULL;
+		values[8] = NULL;
 
-		n_conn = PQconnectdbParams(keywords, values, true);
+		/* attempt connection asynchronously */
+		n_conn = PQconnectStartParams(keywords, values, true);
+
+		if (sigsetjmp(sigint_interrupt_jmp, 1) != 0)
+		{
+			/* interrupted during connection attempt */
+			PQfinish(n_conn);
+			n_conn = NULL;
+		}
+		else
+		{
+			time_t		end_time = -1;
+
+			/*
+			 * maybe use a connection timeout. this code essentially stolen
+			 * from src/interfaces/libpq/fe-connect.c connectDBComplete
+			 */
+			if (PQconnectTimeout(n_conn) != NULL)
+			{
+int			timeout = atoi(PQconnectTimeout(n_conn));
+if (timeout > 0)
+{
+	/*
+	 * Rounding could cause connection to fail; need at least 2 secs
+	 */
+	if (timeout < 2)
+		timeout = 2;
+	/* calculate the finish time based on start + timeout */
+	end_time = time(NULL) + timeout;
+}
+			}
+
+			while(end_time < 0 || time(NULL) < end_time)
+			{
+int			poll_res;
+int			rc;
+fd_set		read_mask,
+			write_mask;
+struct timeval timeout;
+struct timeval *ptr_timeout;
+
+poll_res = PQconnectPoll(n_conn);
+if (poll_res == PGRES_POLLING_OK ||
+	poll_res == PGRES_POLLING_FAILED)
+{
+	break;
+}
+
+if (poll_res == PGRES_POLLING_READING)
+	FD_SET(PQsocket(n_conn), &read_mask);
+if (poll_res == PGRES_POLLING_WRITING)
+	FD_SET(PQsocket(n_conn), &write_mask);
+
+/*
+ * Compute appropriate timeout interval. essentially stolen
+ * from src/interfaces/libpq/fe-misc.c pqSocketPoll. Maybe
+ * that function could be made public? we could then replace
+ * the whole inside of this while loop, assuming it is safe
+ * to longjmp out from there.
+ */
+if (end_time == ((time_t) -1))
+	ptr_timeout = NULL;
+else
+{
+	time_t  now = time(NULL);
+
+	if (end_time > now)
+		timeout.tv_sec = end_time - now;
+	else
+		timeout.tv_sec = 0;
+	timeout.tv_usec = 0;
+	ptr_timeout = &timeout;
+}
+
+sigint_interrupt_enabled = true;
+if (cancel_pressed)
+{
+	PQfinish(n_conn);
+	n_conn = NULL;
+	sigint_interrupt_enabled = false;
+	break;
+}
+rc = select(PQsocket(n_conn) + 1,
+			&read_mask, &write_mask, NULL,
+			ptr_timeout);
+sigint_interrupt_enabled = false;
+
+if (rc < 0 && errno != EINTR)
+	break;
+			}
+
+			if (PQstatus(n_conn) != CONNECTION_OK &&
+end_time > 0 && time(NULL) >= end_time)
+			

Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-04-30 Thread Ryan Kelly
On Mon, Apr 30, 2012 at 09:02:33AM -0400, Alvaro Herrera wrote:
> 
> Excerpts from Ryan Kelly's message of Mon Apr 30 07:10:14 -0400 2012:
> > On Sun, Apr 29, 2012 at 10:12:40PM -0400, Alvaro Herrera wrote:
> > > 
> > > Excerpts from Ryan Kelly's message of Sat Jan 14 16:22:21 -0300 2012:
> > > 
> > > > I have attached a new patch which handles the connect_timeout option by
> > > > adding a PQconnectTimeout(conn) function to access the connect_timeout
> > > > which I then use to retrieve the existing value from the old connection.
> > > 
> > > Was this patch dropped entirely?  If not and it hasn't been committed
> > > yet, I think it belongs in the open CF here:
> > > https://commitfest.postgresql.org/action/commitfest_view?id=14
> > Needs some freshening if anyone still wants it. Update against latest
> > HEAD attached.
> 
> Well, do *you* want it?
Of course. That way I can stop patching my psql and go back to using the
one that came with my release :)

-Ryan Kelly



Re: [HACKERS] libpq compression

2012-06-15 Thread Ryan Kelly
On Fri, Jun 15, 2012 at 11:28:48PM +0800, Magnus Hagander wrote:
> On Fri, Jun 15, 2012 at 11:24 PM, Heikki Linnakangas
>  wrote:
> > On 15.06.2012 17:58, Magnus Hagander wrote:
> >>
> >> On Fri, Jun 15, 2012 at 10:56 PM, Heikki Linnakangas
> >>   wrote:
> >>>
> >>> On 15.06.2012 17:39, Magnus Hagander wrote:
> >>>>
> >>>>
> >>>> On Fri, Jun 15, 2012 at 6:48 PM, Florian Pflug    wrote:
> >>>>>
> >>>>>
> >>>>> The way I see it, if we use SSL-based compression then non-libpq
> >>>>> clients
> >>>>>
> >>>>> there's at least a chance of those clients being able to use it easily
> >>>>> (if their SSL implementation supports it). If we go with a third-party
> >>>>> compression method, they *all* need to add yet another dependency, or
> >>>>> may
> >>>>> even need to re-implement the compression method in their
> >>>>> implementation
> >>>>> language of choice.
> >>>>
> >>>>
> >>>> I only partially agree. If there *is* no third party SSL libary that
> >>>> does support it, then they're stuck reimplementing an *entire SSL
> >>>> library*, which is surely many orders of magnitude more work, and
> >>>> suddenly steps into writing encryption code which is a lot more
> >>>> sensitive.
> >>>
> >>>
> >>> You could write a dummy SSL implementation that only does compression,
> >>> not
> >>> encryption. Ie. only support the 'null' encryption method. That should be
> >>> about the same amount of work as writing an implementation of compression
> >>> using whatever protocol we would decide to use for negotiating the
> >>> compression.
> >>
> >>
> >> Sure, but then what do you do if you actually want both?
> >
> >
> > Umm, then you use a real SSL libray, not the dummy one?
> 
> But (in this scenario, and so far nobody has proven it to be wrong)
> there exists no real SSL library that does support compression.
gnutls and openssl both support compression:

http://www.gnu.org/software/gnutls/manual/html_node/Compression-algorithms-used-in-the-record-layer.html
http://www.openssl.org/docs/apps/enc.html

-Ryan Kelly

> 
> -- 
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/
> 



[HACKERS] [PATCH] Add some more documentation for array indexes/operators

2012-06-20 Thread Ryan Kelly
I had trouble finding which operators arrays support, which of those have
index support, or even whether arrays can be indexed at all, from the array
data type's documentation. So, patch.
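
For the record, what I was looking for boils down to something like this (a
sketch using the sal_emp example table from the array documentation; GIN is
the built-in index type whose array operator class covers these operators):

CREATE INDEX sal_emp_pay_idx ON sal_emp USING gin (pay_by_quarter);

-- can use the index to find employees with a quarterly salary of 10000
SELECT name FROM sal_emp WHERE pay_by_quarter @> ARRAY[10000];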

-Ryan Kelly
diff --git a/doc/src/sgml/array.sgml b/doc/src/sgml/array.sgml
index 3508ba3..51d996d 100644
--- a/doc/src/sgml/array.sgml
+++ b/doc/src/sgml/array.sgml
@@ -573,6 +573,33 @@ SELECT * FROM
   This function is described in .
  
 
+ 
+  You can also search with arrays using the @>,
+  <@, and && operators which check
+  whether the left operand contains, is contained by, or overlaps with
+  the right operand, respectively. For instance,
+
+
+SELECT * FROM sal_emp WHERE pay_by_quarter && ARRAY[1];
+
+
+  Will find all rows where the pay_by_quarter
+  overlaps with an array containing 1, that is, any
+  employee with a quarterly salary of 1 will be matched.
+ 
+
+ 
+  See  for more details about array operator
+  behavior.
+ 
+
+ 
+  Additionally, a number of operators support indexed operations. This means
+  that the example above could utilize an index to efficiently locate those 
+  employees with the desired quarterly salary. See 
+  for more details about which operators support indexed operations.
+ 
+
  
   
Arrays are not sets; searching for specific array elements
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index cd374ac..2cef8e4 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -10282,7 +10282,8 @@ SELECT NULLIF(value, '(none)') ...
 
   
See  for more details about array operator
-   behavior.
+   behavior. See  for more details about
+   which operators support indexed operations.
   
 
   



Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-06-25 Thread Ryan Kelly
Shigeru:

Thank you very much for your review. Comments are inline below, and a
new patch is attached.

On Fri, Jun 22, 2012 at 10:06:53AM +0900, Shigeru HANADA wrote:
> (2012/05/01 0:30), Ryan Kelly wrote:
> >On Mon, Apr 30, 2012 at 09:02:33AM -0400, Alvaro Herrera wrote:
> >>Well, do *you* want it?
> >Of course. That way I can stop patching my psql and go back to using the
> >one that came with my release :)
> 
> Here is result of my review of v4 patch.
> 
> Submission
> --
> - The patch is in context diff format
> - It needs rebase for HEAD, but patch command takes care automatically.
> - Make finishes cleanly, and all regression tests pass
> 
> Functionality
> -
> After applying this patch, I could cancel connection attempt at
> start-up by pressing Ctrl+C on terminal just after invoking psql
> command with an unused IP address.  Without this patch, such attempt
> ends up with error such as "No route to host" after uninterruptable
> few seconds (maybe the duration until error would depend on
> environment).
> 
> Connection attempt by \connect command could be also canceled by
> pressing Ctrl+C on psql prompt.
> 
> In addition, I tried setting PGCONNECT_TIMEOUT to 0 (infinite), but
> psql gave up after few seconds, for both start-up and re-connect.
> Is this intentional behavior?
A timeout of 0 (infinite) means we keep waiting until the connection
attempt succeeds or fails on its own, not that we retry forever. As you
mention above, your connection attempts error out after a few seconds;
that is what is happening. In my environment no such error occurs, and as
a result psql continues to attempt to connect for as long as I let it.

> Detail of changes
> -
> Basically this patch consists of three changes:
> 
> 1) defer setup of cancel handler until start-up connection has established
> 2) new libpq API PQconnectTimeout which returns connect_timeout
> value of current session
> 3) use asynchronous connection API at \connect psql command, this
> requires API added by 2)
> 
> These seem reasonable to achieve canceling connection attempt at
> start-up and \connect, but, as Ryan says, codes added to do_connect
> might need more simplification.
> 
> I have some random comments.
> 
> - Checking status by calling PQstatus just after
> PQconnectStartParams is necessary.
Yes, I agree.

> - Copying only "select()" part of pqSocketPoll seems not enough,
> should we use poll(2) if it is supported?
I did not think the additional complexity was worth it in this case.
Unless you see some reason to use poll(2) that I do not.

> - Don't we need to clear error message stored in PGconn after
> PQconnectPoll returns OK status, like connectDBComplete?
I do not believe there is a client interface for clearing the error
message. Additionally, the documentation states that PQerrorMessage
"Returns the error message most recently generated by an operation on
the connection." That seems to indicate the message should already be
cleared by libpq, since that behavior is part of the contract of
PQerrorMessage.

> 
> Regards,
> -- 
> Shigeru HANADA

-Ryan Kelly
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 5c5dd68..79f4bc0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1496,6 +1496,24 @@ char *PQoptions(const PGconn *conn);
   
  
 
+
+
+ 
+  PQconnectTimeout
+  
+   PQconnectTimeout
+  
+ 
+
+ 
+  
+   Returns the connect_timeout property as given to libpq.
+
+char *PQconnectTimeout(const PGconn *conn);
+
+  
+ 
+

   
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 0926786..23b0106 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1506,7 +1506,7 @@ static bool
 do_connect(char *dbname, char *user, char *host, char *port)
 {
 	PGconn	   *o_conn = pset.db,
-			   *n_conn;
+			   *n_conn = NULL;
 	char	   *password = NULL;
 
 	if (!dbname)
@@ -1539,7 +1539,7 @@ do_connect(char *dbname, char *user, char *host, char *port)
 
 	while (true)
 	{
-#define PARAMS_ARRAY_SIZE	8
+#define PARAMS_ARRAY_SIZE	9
 		const char **keywords = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*keywords));
 		const char **values = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*values));
 
@@ -1557,30 +1557,140 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		values[5] = pset.progname;
 		keywords[6] = "client_encoding";
 		values[6] = (pset.notty || getenv("PGCLIENTENCODING")) ? NULL : "auto";
-		keywords[7] = NULL;
-		values[7] = NULL;
+		keywords[7] = "connect_timeout";
+		values[7] = PQconnectTimeout(o_conn);
+		keywords[8] = NULL;
+		values[8] = NULL;
 
-		n_conn = PQconnectdbParams(keywords, values, true);
-
-		free(keywords);
-		free(values);
-
-		/

Re: [HACKERS] warning handling in Perl scripts

2012-06-25 Thread Ryan Kelly
On Mon, Jun 25, 2012 at 12:07:55PM -0400, Tom Lane wrote:
> "David E. Wheeler"  writes:
> > Hrm, I think that `use warnings 'FATAL';` might only work for core 
> > warnings. Which is annoying. I missed what was warning up-thread, but the 
> > most foolproof way to make all warnings fatal is the originally suggested
> 
> >   local $SIG{__WARN__} = sub { die shift };
> 
> Sigh, let's do it that way then.
> 
> > A *bit* cleaner is to use Carp::croak:
> 
> > use Carp;
> > local $SIG{__WARN__} = \&croak;
> 
> Just as soon not add a new module dependency if we don't have to.
Carp is core since Perl 5 [1994-10-17].

> In this case, since we're not really expecting the warnings to get
> thrown, it seems like there'd be little value added by doing so.
> 
>   regards, tom lane
> 

-Ryan Kelly



Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-07-13 Thread Ryan Kelly
On Mon, Jul 09, 2012 at 05:35:15PM +0900, Shigeru HANADA wrote:
> Hi Ryan,
> 
> On Mon, Jun 25, 2012 at 9:00 PM, Ryan Kelly  wrote:
> >> Connection attempt by \connect command could be also canceled by
> >> pressing Ctrl+C on psql prompt.
> >>
> >> In addition, I tried setting PGCONNECT_TIMEOUT to 0 (infinite), but
> >> psql gave up after few seconds, for both start-up and re-connect.
> >> Is this intentional behavior?
> > A timeout of 0 (infinite) means to keep trying until we succeed or fail,
> > not keep trying forever. As you mentioned above, your connection
> > attempts error out after a few seconds. This is what is happening. In my
> > environment no such error occurs and as a result psql continues to
> > attempt to connect for as long as I let it.
> 
> For handy testing, I wrote simple TCP server which accepts connection
> and blocks until client gives up connection attempt (or forever).  When
> I tested your patch with setting PGCONNECT_TIMEOUT=0, I found that
> psql's CPU usage goes up to almost 100% (10~ usr + 80~ sys) while
> connection attempt by \c command is undergoing.  After reading code for
> a while, I found that FD_ZERO was not called.  This makes select() to
> return immediately in the loop every time and caused busy loop.
> # Maybe you've forgotten adding FD_ZERO when you were merging Heikki's
> v2 patch.
Yes, I had lost them somewhere. My apologies.

> >> - Checking status by calling PQstatus just after
> >> PQconnectStartParams is necessary.
> > Yes, I agree.
> 
> Checked.
> 
> >> - Copying only "select()" part of pqSocketPoll seems not enough,
> >> should we use poll(2) if it is supported?
> > I did not think the additional complexity was worth it in this case.
> > Unless you see some reason to use poll(2) that I do not.
> 
> I checked where select() is used in PG, and noticed that poll is used at
> only few places.  Agreed to use only select() here.
> 
> >> - Don't we need to clear error message stored in PGconn after
> >> PQconnectPoll returns OK status, like connectDBComplete?
> > I do not believe there is a client interface for clearing the error
> > message. Additionally, the documentation states that PQerrorMessage
> > "Returns the error message most recently generated by an operation on
> > the connection." This seems to indicate that the error message should be
> > cleared as this behavior is part of the contract of PQerrorMessage.
> 
> My comment was pointless.  Sorry for noise.
> 
> Here is my additional comments for v5 patch:
> 
> - Using static array for fixed-length connection parameters was
> suggested in comments for another CF item, using
> fallback_application_name for some command line tools, and IMO this
> approach would also suit for this patch.
> 
> http://archives.postgresql.org/message-id/CA+TgmoYZiayts=fjsytzqlg7rewlwkdkey5f+fhp5v5_nu_...@mail.gmail.com
I have done this as well.

> - Some comments go past 80 columns, and opening brace in line 1572
> should go on next line.  Please refer coding conventions written in
> PostgreSQL wiki.
> http://www.postgresql.org/docs/devel/static/source-format.html
I have corrected these issues.

> Once the issues above are fixed, IMO this patch can be marked as "Ready
> for committer".
I have also added additional documentation reflecting Heikki's
suggestion that PQconnectTimeout be recommended for use by applications
using the async API.

Attached is v6 of the patch.

> Regards,
> -- 
> Shigeru Hanada
> 

-Ryan Kelly
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 5c5dd68..654f5f5 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -436,8 +436,12 @@ switch(PQstatus(conn))
The connect_timeout connection parameter is ignored
when using PQconnectPoll; it is the application's
responsibility to decide whether an excessive amount of time has elapsed.
-   Otherwise, PQconnectStart followed by a
-   PQconnectPoll loop is equivalent to
+   It is recommended to use PQconnectTimeout to get the
+   value of connect_timeout and use that as the timeout
+   in the application. That way the user gets the same timeout behavior
+   regardless of whether the application uses PQconnectPoll
+   or the blocking connection API. Otherwise, PQconnectStart
+   followed by a PQconnectPoll loop is equivalent to
PQconnectdb.
   
 
@@ -1496,6 +1500,24 @@ char *PQoptions(const PGconn *conn);
   
  
 
+
+
+ 
+  PQconnectTimeout
+  
+   PQconnectTimeout
+  
+ 

[HACKERS] Add LINE: hint when schemaname.typename is a non-existent schema

2015-02-02 Thread Ryan Kelly
The attached patch adds a LINE: ... hint when schemaname.typename
results in a schema which does not exist. I came across this when a
missing comma in a SELECT list resulted in an error without a location
in a query a few thousand lines long.

Before:

(postgres@[local]:5432 14:41:25) [postgres]> select test.id 'all' as
example from test;
ERROR:  3F000: schema "test" does not exist
LOCATION:  get_namespace_oid, namespace.c:2826

After:

(postgres@[local]:5433 14:42:32) [postgres]> select test.id 'all' as
example from test;
ERROR:  3F000: schema "test" does not exist
LINE 1: select test.id 'all' as example from test;
       ^
LOCATION:  LookupTypeName, parse_type.c:171

-Ryan Kelly


missing_type_schema_hint.patch
Description: Binary data



Re: [HACKERS] Add LINE: hint when schemaname.typename is a non-existent schema

2015-02-08 Thread Ryan Kelly
On Tue, Feb 3, 2015 at 11:10 AM, Tom Lane  wrote:
> Ryan Kelly  writes:
>> The attached patch adds a LINE: ... hint when schemaname.typename
>> results in a schema which does not exist. I came across this when a
>> missing comma in a SELECT list resulted in an error without a location
>> in a query a few thousand lines long.
>
> I think this is a good problem to attack, but the proposed patch seems
> a bit narrow and brute-force.  Surely this isn't the only place where
> we're missing an errposition pointer on a "no such schema" error?

I managed to find four code paths that failed to report an error position if
the schema did not exist:
- Schema-qualified types (e.g. select schemaname.typename 'foo')
- Schema-qualified operators (e.g. select 1 operator(schemaname.+) 1)
- Schema-qualified table names in CREATE TABLE (e.g. create table
schemaname.tablename (colname integer))
- Schema-qualified function calls (e.g. select schemaname.funcname())

> It's probably not reasonable to add a pstate argument to
> LookupExplicitNamespace itself, given that namespace.c isn't part
> of the parser.  But you could imagine adding an interface function
> in the parser that calls LookupExplicitNamespace and then throws a
> position-aware error on failure, and then making sure that all schema
> lookups in the parser go through that.

While searching for other instances of this problem, I found that using
a schema that does not exist in conjunction with a collation did report the
position. That code uses setup_parser_errposition_callback, and the
documentation for that function seems to indicate that my usage of it in
the newly attached patch is consistent. I do not know, however, whether
this is the cleanest approach or whether I should instead create a function
like validate_explicit_namespace to handle this (and I'm also not sure
where such a function should live, if I do create it).

-Ryan Kelly


missing_type_schema_hint_v2.patch
Description: Binary data



[HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-01-08 Thread Ryan Kelly
When attempting to connect to a non-existent host with psql, the
connection attempt hangs and ^C will not break out of it. This affects two
places: startup, and interactive mode with \c. During startup, the new
behavior is to exit psql; in interactive mode, the new behavior is to
return to the interactive shell.
---
 src/bin/psql/command.c |   17 +
 src/bin/psql/startup.c |5 ++---
 2 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 69fac83..74e406b 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1516,7 +1516,7 @@ static bool
 do_connect(char *dbname, char *user, char *host, char *port)
 {
PGconn *o_conn = pset.db,
-  *n_conn;
+  *n_conn = NULL;
char   *password = NULL;
 
if (!dbname)
@@ -1570,7 +1570,13 @@ do_connect(char *dbname, char *user, char *host, char *port)
keywords[7] = NULL;
values[7] = NULL;
 
-   n_conn = PQconnectdbParams(keywords, values, true);
+   if (sigsetjmp(sigint_interrupt_jmp, 1) != 0) {
+   /* got here with longjmp */
+   } else {
+   sigint_interrupt_enabled = true;
+   n_conn = PQconnectdbParams(keywords, values, true);
+   sigint_interrupt_enabled = false;
+   }
 
free(keywords);
free(values);
@@ -1600,7 +1606,8 @@ do_connect(char *dbname, char *user, char *host, char *port)
 */
if (pset.cur_cmd_interactive)
{
-   psql_error("%s", PQerrorMessage(n_conn));
+   if (n_conn)
+   psql_error("%s", PQerrorMessage(n_conn));
 
/* pset.db is left unmodified */
if (o_conn)
@@ -1608,7 +1615,9 @@ do_connect(char *dbname, char *user, char *host, char *port)
}
else
{
-   psql_error("\\connect: %s", PQerrorMessage(n_conn));
+   if (n_conn)
+   psql_error("\\connect: %s", PQerrorMessage(n_conn));
+
if (o_conn)
{
PQfinish(o_conn);
diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
index 8b1864c..e53d84c 100644
--- a/src/bin/psql/startup.c
+++ b/src/bin/psql/startup.c
@@ -111,8 +111,6 @@ main(int argc, char *argv[])
setvbuf(stderr, NULL, _IONBF, 0);
 #endif
 
-   setup_cancel_handler();
-
pset.progname = get_progname(argv[0]);
 
pset.db = NULL;
@@ -245,8 +243,9 @@ main(int argc, char *argv[])
}
 
/*
-* Now find something to do
+* Now find something to do (and handle cancellation, if applicable)
 */
+   setup_cancel_handler();
 
/*
 * process file given by -f
-- 
1.7.7.4

Previously, these changes were submitted to -bugs:
http://archives.postgresql.org/pgsql-bugs/2012-01/msg00030.php
http://archives.postgresql.org/pgsql-bugs/2012-01/msg00036.php

Though, I think that might not have been the correct forum? I still could be
wrong with -hackers, so just let me know where to go with this if I'm still
off-base.

-Ryan Kelly




Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-01-09 Thread Ryan Kelly
On Mon, Jan 09, 2012 at 10:35:50AM +0200, Heikki Linnakangas wrote:
> On 08.01.2012 22:18, Ryan Kelly wrote:
> >@@ -1570,7 +1570,13 @@ do_connect(char *dbname, char *user, char *host, char *port)
> > keywords[7] = NULL;
> > values[7] = NULL;
> >
> >-n_conn = PQconnectdbParams(keywords, values, true);
> >+if (sigsetjmp(sigint_interrupt_jmp, 1) != 0) {
> >+/* got here with longjmp */
> >+} else {
> >+sigint_interrupt_enabled = true;
> >+n_conn = PQconnectdbParams(keywords, values, true);
> >+sigint_interrupt_enabled = false;
> >+}
> >
> > free(keywords);
> > free(values);
> 
> That assumes that it's safe to longjmp out of PQconnectdbParams at
> any instant. It's not.
I'm guessing because it could result in a resource leak?

> I think you'd need to use the asynchronous connection functions
> PQconnectStartParams() and PQconnectPoll(), and select().
New patch attached.

> 
> -- 
>   Heikki Linnakangas
>   EnterpriseDB   http://www.enterprisedb.com

-Ryan Kelly

diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 69fac83..7c31891 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1516,7 +1516,7 @@ static bool
 do_connect(char *dbname, char *user, char *host, char *port)
 {
 	PGconn	   *o_conn = pset.db,
-			   *n_conn;
+			   *n_conn = NULL;
 	char	   *password = NULL;
 
 	if (!dbname)
@@ -1570,14 +1570,69 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		keywords[7] = NULL;
 		values[7] = NULL;
 
-		n_conn = PQconnectdbParams(keywords, values, true);
+		/* attempt connection asynchronously */
+		n_conn = PQconnectStartParams(keywords, values, true);
+
+		if (sigsetjmp(sigint_interrupt_jmp, 1) != 0)
+		{
+			/* interrupted during connection attempt */
+			PQfinish(n_conn);
+			n_conn = NULL;
+		} else {
+			int waiting = true, select_ret;
+			struct timeval timeout;
+			fd_set input_mask;
+
+			FD_ZERO(&input_mask);
+			FD_SET(PQsocket(n_conn), &input_mask);
+
+			timeout.tv_sec = 0;
+			timeout.tv_usec = 50; /* 0.5 sec */
+
+			sigint_interrupt_enabled = true;
+			select_ret = select(PQsocket(n_conn) + 1, &input_mask, NULL, NULL, &timeout);
+			sigint_interrupt_enabled = false;
+
+			/* don't even bother waiting, something wrong with the socket */
+			if (select_ret < 0 && errno != EINTR)
+			{
+waiting = false;
+			}
+
+			while(waiting)
+			{
+switch(PQconnectPoll(n_conn))
+{
+	case PGRES_POLLING_READING:
+	case PGRES_POLLING_WRITING:
+		timeout.tv_sec = 0;
+		timeout.tv_usec = 50;
+
+		sigint_interrupt_enabled = true;
+		select_ret = select(PQsocket(n_conn) + 1, &input_mask, NULL, NULL, &timeout);
+		sigint_interrupt_enabled = false;
+
+		if (select_ret < 0 && errno != EINTR)
+		{
+			waiting = false;
+		}
+		break;
+	default:
+		waiting = false;
+		break;
+}
+			}
+		}
 
 		free(keywords);
 		free(values);
 
 		/* We can immediately discard the password -- no longer needed */
 		if (password)
+		{
 			free(password);
+			password = NULL;
+		}
 
 		if (PQstatus(n_conn) == CONNECTION_OK)
 			break;
@@ -1586,7 +1641,7 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		 * Connection attempt failed; either retry the connection attempt with
 		 * a new password, or give up.
 		 */
-		if (!password && PQconnectionNeedsPassword(n_conn) && pset.getPassword != TRI_NO)
+		if (PQconnectionNeedsPassword(n_conn) && pset.getPassword != TRI_NO)
 		{
 			PQfinish(n_conn);
 			password = prompt_for_password(user);
@@ -1600,7 +1655,8 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		 */
 		if (pset.cur_cmd_interactive)
 		{
-			psql_error("%s", PQerrorMessage(n_conn));
+			if (n_conn)
+psql_error("%s", PQerrorMessage(n_conn));
 
 			/* pset.db is left unmodified */
 			if (o_conn)
@@ -1608,7 +1664,9 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		}
 		else
 		{
-			psql_error("\\connect: %s", PQerrorMessage(n_conn));
+			if (n_conn)
+psql_error("\\connect: %s", PQerrorMessage(n_conn));
+
 			if (o_conn)
 			{
 PQfinish(o_conn);
diff --git a/src/bin/psql/startup.c b/src/bin/psql/startup.c
index 8b1864c..e53d84c 100644
--- a/src/bin/psql/startup.c
+++ b/src/bin/psql/startup.c
@@ -111,8 +111,6 @@ main(int argc, char *argv[])
 	setvbuf(stderr, NULL, _IONBF, 0);
 #endif
 
-	setup_cancel_handler();
-
 	pset.progname = get_progname(argv[0]);
 
 	pset.db = NULL;
@@ -245,8 +243,9 @@ main(int argc, char *argv[])
 	}
 
 	/*
-	 * Now find something to do
+	 * Now find something to do (and handle cancellation, if applicable)
 	 */
+	setup_cancel_handler();
 
 	/*
 	 * process file given by -f



Re: [HACKERS] [PATCH] Allow breaking out of hung connection attempts

2012-01-14 Thread Ryan Kelly
On Tue, Jan 10, 2012 at 11:29:58AM +0200, Heikki Linnakangas wrote:
> On 09.01.2012 15:49, Ryan Kelly wrote:
> >On Mon, Jan 09, 2012 at 10:35:50AM +0200, Heikki Linnakangas wrote:
> >>That assumes that it's safe to longjmp out of PQconnectdbParams at
> >>any instant. It's not.
> >I'm guessing because it could result in a resource leak?
> 
> Yes, and other unfinished business, too.
> 
> >>I think you'd need to use the asynchronous connection functions
> >>PQconnectStartParams() and PQconnectPoll(), and select().
> >New patch attached.
> 
> Thanks, some comments:
> 
> * Why do you need the timeout?
> 
> * If a SIGINT arrives before you set sigint_interrupt_enabled, it
> just sets cancel_pressed but doesn't jump out of the connection
> attempt. You need to explicitly check cancel_pressed after setting
> sigint_interrupt_enabled to close that race condition.
> 
> * You have to reinitialize the fd mask with FD_ZERO/SET before each
> call to select(). select() modifies the mask.
> 
> * In case of PGRES_POLLING_WRITING, you have to wait until the
> socket becomes writable, not readable.
> 
> Attached is a new version that fixes those.

Yup, I'm an idiot.

> 
> There's one caveat in the libpq docs about PQconnectStart/PQconnectPoll:
> 
> >The connect_timeout connection parameter is ignored when using 
> >PQconnectPoll; it is the application's responsibility to decide whether an 
> >excessive amount of time has elapsed. Otherwise, PQconnectStart followed by 
> >a PQconnectPoll loop is equivalent to PQconnectdb.
> 
> So after this patch, connect_timeout will be ignored in \connect.
> That probably needs to be fixed. You could incorporate a timeout
> fairly easily into the select() calls, but unfortunately there's no
> easy way to get the connect_timeout value. You could to parse the
> connection string the user gave with PQconninfoParse(), but the
> effective timeout setting could come from a configuration file, too.
> 
> Not sure what to do about that. If there was a
> PQconnectTimeout(conn) function, similar to PQuser(conn),
> PQhost(conn) et al, you could use that. Maybe we should add that, or
> even better, a generic function that could be used to return not
> just connect_timeout, but all the connection options in effect in a
> connection.

I have attached a new patch which handles the connect_timeout option by
adding a PQconnectTimeout(conn) function to access the connect_timeout
which I then use to retrieve the existing value from the old connection.

I also "borrowed" some code from other places:

* connectDBComplete had some logic surrounding handling the timeout
  value (src/interfaces/libpq/fe-connect.c:1402).
* The timeout code is lifted from pqSocketPoll which, if made public,
  could essentially replace all the select/timeout code in that loop
  (src/interfaces/libpq/fe-misc.c:1034).

Before I get flamed for duplicating code, yes, I know it's a bad thing
to do. If anyone has any guidance I'd be glad to implement their
suggestions.

-Ryan Kelly

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 72c9384..0ee2df1 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1375,6 +1375,24 @@ char *PQoptions(const PGconn *conn);
   
  
 
+
+
+ 
+  PQconnectTimeout
+  
+   PQconnectTimeout
+  
+ 
+
+ 
+  
+   Returns the connect_timeout property as given to libpq.
+
+char *PQconnectTimeout(const PGconn *conn);
+
+  
+ 
+

   
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 69fac83..7ba22d0 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1516,7 +1516,7 @@ static bool
 do_connect(char *dbname, char *user, char *host, char *port)
 {
 	PGconn	   *o_conn = pset.db,
-			   *n_conn;
+			   *n_conn = NULL;
 	char	   *password = NULL;
 
 	if (!dbname)
@@ -1549,7 +1549,7 @@ do_connect(char *dbname, char *user, char *host, char *port)
 
 	while (true)
 	{
-#define PARAMS_ARRAY_SIZE	8
+#define PARAMS_ARRAY_SIZE	9
 		const char **keywords = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*keywords));
 		const char **values = pg_malloc(PARAMS_ARRAY_SIZE * sizeof(*values));
 
@@ -1567,17 +1567,120 @@ do_connect(char *dbname, char *user, char *host, char *port)
 		values[5] = pset.progname;
 		keywords[6] = "client_encoding";
 		values[6] = (pset.notty || getenv("PGCLIENTENCODING")) ? NULL : "auto";
-		keywords[7] = NULL;
-		values[7] = NULL;
+		keywords[7] = "connect_timeout";
+		values[7] = PQconnectTimeout(o_conn);
+		keywords[8] = NULL;
+		values[8] = NULL;
 
-		n_conn = PQconnectdbParams(keywords, values, true);
+		/* attempt connection asynchronously */
+		n_conn = PQconnectStartParams(keywords,