Re: [HACKERS] Client Connection redirection support for PostgreSQL

2017-11-03 Thread Satyanarayana Narlapuram
> pg_hba.conf is "host based access [control]" . I'm not sure it's really the 
> right place.
I am open to having another configuration file, say routing_list.conf, to define 
the routing rules, but felt it is easier to extend the hba conf file.

> But we now have a session-intent stuff though. So we could possibly do it at 
> session level.
Session intent can be used as an obvious hint for the routing to kick in. A rule 
in the routing list could route read-intent sessions round robin across multiple 
secondary replicas.

> Backends used just for a redirect would be pretty expensive though.
It is somewhat expensive, as a new process fork has to happen for each new 
connection. The advantage is that it makes proxies optional (if the middle tier 
can do connection management), and all the routing configuration can live within 
the server.
It also benefits latency-sensitive applications by letting them skip the proxy.




Re: [HACKERS] Client Connection redirection support for PostgreSQL

2017-11-03 Thread Satyanarayana Narlapuram

> What advantages do you see in doing this in the backend over the current 
> system where the concerns are separated, i.e. people use connection poolers 
> like pgbouncer to do the routing?
IMHO a connection pooler is not great for latency-sensitive applications, and for 
small deployments a proxy is an overhead. For example, in a cloud environment the 
proxy has to sit in one data center / region while serving client requests that 
originate in other data centers.

> Would it make sense also to include an optional routing algorithm or pointer 
> to a routing function for each RoutingList, or do you see this as entirely 
> the client's responsibility?
This is a great point; I haven't put much thought into this beyond round robin / 
random shuffling. Providing a priority list of endpoints from the server would 
allow client connections to be balanced accordingly. However, it is up to the 
client implementation to honor the list.

> How does this work with SSL?
The protocol doesn't change much with SSL: after the handshake, the client sends 
the startup message to the server, and the new message flow kicks in on the 
server based on the routing list.

> >   1.  Bumping the protocol version - old server instances may not understand 
> > the new client protocol
> This sounds more attractive, assuming that the feature is.
I agree, bumping the protocol version makes things simple.

> > 3.  The current proposal - to keep it in the hba.conf and let the
> > server admin deal with the configuration by taking conscious
> > choice on the configuration of routing list based on the clients
> > connecting to the server instance.

> How would clients identify themselves as eligible without a protocol version 
> bump?
Either an optional parameter or controlled configuration by the server 
administrator are the only choices.
A protocol bump seems to me the better idea here.

> So to DoS the server, what's required is a flock of old clients?  I presume 
> there's a good reason to reroute rather than serve these requests.
Possible, but I would say the server admin understands where the requests are 
coming from (old / new client) and does the capacity planning accordingly.

> Comments and feedback have begun.
Thank you :)

Thanks,
Satya




[HACKERS] Client Connection redirection support for PostgreSQL

2017-11-02 Thread Satyanarayana Narlapuram
Proposal:
Add the ability for a PostgreSQL server instance to route traffic to a different 
server instance based on rules defined in the server's pg_hba.conf configuration 
file. At a high level this enables offloading user requests to another server 
instance. Interesting scenarios this enables include, but are not limited to, 
rerouting traffic based on the client host, user, or database specified, 
redirecting read-only query traffic to hot standby replicas, and multi-master 
scenarios.
The rules to route the traffic will be provided in the pg_hba.conf file. The 
proposal is to add a new optional field, 'RoutingList', to the record format. The 
RoutingList contains a comma-separated list of one or more servers to which the 
traffic can be routed. In the absence of this new field there is no change to 
the current login code path for either the server or the client. The RoutingList 
can be updated for each new connection to balance the load across multiple server 
instances.
RoutingList format:
server_address1:port, server_address2:port...
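For illustration only (the exact hba syntax is not settled, and the host names 
are placeholders), an entry carrying the proposed optional field could look like:

  # TYPE  DATABASE  USER  ADDRESS      METHOD  ROUTINGLIST (proposed optional field)
  host    mydb      all   10.0.0.0/8   md5     standby1.example.com:5432, standby2.example.com:5432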
The message flow:

  1.  Client connects to the server, and the server accepts the connection.
  2.  Client sends the startup message.
  3.  Server looks at the rules configured in the pg_hba.conf file and
      *  If the rule matches redirection:
            i.  Server sends a special message with the RoutingList described above.
           ii.  Server disconnects.
      *  If the rule doesn't have a RoutingList defined:
            i.  Server proceeds in the existing code path and sends the auth request.
  4.  Client gets the list of addresses and attempts to connect to each server in 
      the list provided until the first successful connection is established or 
      the list is exhausted. If the client can't connect to any server instance 
      on the RoutingList, the client reports the login failure message.
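To make step 4 concrete, here is a minimal client-side sketch in C of walking the 
RoutingList and stopping at the first endpoint that accepts the connection. It is 
illustrative only: try_connect() is a hypothetical helper standing in for whatever 
the driver uses to open a connection, not an existing libpq call.

  #include <stdbool.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical helper: returns true if a connection to host:port succeeds. */
  extern bool try_connect(const char *host, int port);

  /*
   * Walk a "host1:port1, host2:port2, ..." RoutingList and stop at the first
   * endpoint that accepts the connection.  Returns false when the list is
   * exhausted, in which case the client reports the login failure.
   */
  static bool
  connect_via_routing_list(const char *routing_list)
  {
      char   *copy = strdup(routing_list);
      char   *saveptr = NULL;
      bool    connected = false;

      for (char *entry = strtok_r(copy, ",", &saveptr);
           entry != NULL && !connected;
           entry = strtok_r(NULL, ",", &saveptr))
      {
          char   *colon;
          int     port = 5432;            /* default port if none is given */

          while (*entry == ' ')           /* trim leading spaces */
              entry++;

          colon = strrchr(entry, ':');
          if (colon)
          {
              *colon = '\0';
              port = atoi(colon + 1);
          }

          connected = try_connect(entry, port);
      }

      free(copy);
      return connected;                   /* false => report login failure */
  }

For example, connect_via_routing_list("standby1.example.com:5432, 
standby2.example.com:5432") returning false corresponds to the login-failure case 
in step 4.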

Backward compatibility:
There are a few ways to provide the backward compatibility, and each approach 
has their own advantages and disadvantage and are listed below

  1.  Bumping the protocol version - old server instances may not understand 
the new client protocol.
  2.  Adding an additional optional parameter, routing_enabled, without bumping 
the protocol version. In this approach, old Postgres server instances may not 
understand it and would fail the connections.
  3.  The current proposal - keep it in pg_hba.conf and let the server admin 
deal with the configuration, making a conscious choice about the routing list 
based on the clients connecting to the server instance.
Backward compatibility scenarios:

  *   The feature is not usable for existing clients, and new servers shouldn't 
set the routing list if they expect any connections from legacy clients. We 
should do either (1) or (2) in the above list to achieve this; otherwise we need 
to rely on the admin to take care of the settings.
  *   For a new client connecting to an old server, there is no change in the 
message flow.
  *   For new clients connecting to a new server, the message flow will be based 
on the routing list field in the configuration.
This proposal is at a very early stage; comments and feedback are very much 
appreciated.
Thanks,
Satya



Re: [HACKERS] Supporting Windows SChannel as OpenSSL replacement

2017-10-19 Thread Satyanarayana Narlapuram
Tom, Robert, Microsoft is interested in supporting Windows SChannel for 
Postgres. Please let us know how we can help take this forward. We would love to 
contribute, either by enhancing the original patch provided by Heikki or by 
testing the changes on Windows.

Thanks,
Satya

-Original Message-
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Wednesday, October 18, 2017 11:51 AM
To: Robert Haas 
Cc: hlinnaka ; Jeff Janes ; Andreas 
Karlsson ; Martijn van Oosterhout ; 
Magnus Hagander ; PostgreSQL-development 

Subject: Re: [HACKERS] Supporting Windows SChannel as OpenSSL replacement

Robert Haas  writes:
> Heikki, do you have any plans to work more on this?
> Or does anyone else?

FWIW, I have some interest in the Apple Secure Transport patch that is in the 
CF queue, and will probably pick that up at some point if no one beats me to it 
(but it's not real high on my to-do list).
I won't be touching the Windows version though.  I suspect that the folk who 
might be competent to review the Windows code may have correspondingly little 
interest in the macOS patch.  This is a bit of a problem, since it would be 
good for someone to look at both of them, with an eye to whether there are any 
places in our SSL abstraction API that ought to be rethought now that we have 
actual non-OpenSSL implementations to compare to.

regards, tom lane




Re: [HACKERS] Reading backup label file for checkpoint and redo location during crash recovery

2017-09-25 Thread Satyanarayana Narlapuram
Thank you! Got it. 

-Original Message-
From: Stephen Frost [mailto:sfr...@snowman.net] 
Sent: Monday, September 25, 2017 10:57 AM
To: Magnus Hagander <mag...@hagander.net>
Cc: Satyanarayana Narlapuram <satyanarayana.narlapu...@microsoft.com>; 
PostgreSQL-development <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Reading backup label file for checkpoint and redo 
location during crash recovery

* Magnus Hagander (mag...@hagander.net) wrote:
> On Mon, Sep 25, 2017 at 7:43 PM, Stephen Frost <sfr...@snowman.net> wrote:
> > * Satyanarayana Narlapuram (satyanarayana.narlapu...@microsoft.com) wrote:
> > > During crash recovery, last checkpoint record information is obtained
> > > from the backup label if present, instead of getting it from the
> > > control file. This behavior is causing PostgreSQL database cluster
> > > not to come up until the backup label file is deleted (as the error
> > > message says).
> > >
> > > if (checkPoint.redo < checkPointLoc)
> > > {
> > >     if (!ReadRecord(xlogreader, checkPoint.redo, LOG, false))
> > >         ereport(FATAL,
> > >                 (errmsg("could not find redo location referenced by checkpoint record"),
> > >                  errhint("If you are not restoring from a backup, try removing the file \"%s/backup_label\".", DataDir)));
> > > }
> > >
> > > If we are recovering from a dump file, reading from the backup label
> > > files makes sense as the control file could be archived after a few
> > > checkpoints. But this is not the case for crash recovery, and is
> > > always safe to read the checkpoint record information from the control file.
> > > Is this behavior kept this way as there is no clear way to distinguish
> > > between the recovery from the dump and the regular crash recovery?
> >
> > This is why the exclusive backup method has been deprecated in PG10 
> > in favor of the non-exclusive backup method, which avoids this by 
> > not creating a backup label file (it's up to the backup software to 
> > store the necessary information and create the file for use during 
> > recovery).
> 
> Actally, it was deprecated already in 9.6, not just 10.

Whoops, right.  Thanks for the clarification. :)

Stephen




[HACKERS] Reading backup label file for checkpoint and redo location during crash recovery

2017-09-25 Thread Satyanarayana Narlapuram
Hi there,

During crash recovery, the last checkpoint record information is obtained from 
the backup label if present, instead of getting it from the control file. This 
behavior causes the PostgreSQL database cluster not to come up until the backup 
label file is deleted (as the error message says).

if (checkPoint.redo < checkPointLoc)
{
    if (!ReadRecord(xlogreader, checkPoint.redo, LOG, false))
        ereport(FATAL,
                (errmsg("could not find redo location referenced by checkpoint record"),
                 errhint("If you are not restoring from a backup, try removing the file \"%s/backup_label\".", DataDir)));
}

If we are recovering from a dump file, reading from the backup label file makes 
sense, as the control file could be archived after a few checkpoints. But this is 
not the case for crash recovery, where it is always safe to read the checkpoint 
record information from the control file.
Is this behavior kept this way because there is no clear way to distinguish 
between recovery from a dump and regular crash recovery?


Thanks,
Satya

Re: protocol version negotiation (Re: [HACKERS] Libpq PGRES_COPY_BOTH - version compatibility)

2017-06-29 Thread Satyanarayana Narlapuram
-Original Message-
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Robert Haas
Sent: Thursday, June 29, 2017 5:18 AM
To: Tom Lane 
Cc: Craig Ringer ; Peter Eisentraut ; 
Magnus Hagander ; PostgreSQL-development 

Subject: Re: protocol version negotiation (Re: [HACKERS] Libpq PGRES_COPY_BOTH 
- version compatibility)

> 1. The client sends a StartupMessage 3.x for version 3.x.  We could bump the 
> version explicitly, or perhaps we should just coin a version of libpq for 
> every server release; e.g. whatever PostgreSQL 11 ships is version 3.11, etc. 
> It includes any protocol options that don't exist today as 
> pg_protocol. in the startup packet.

+1 on this. Happy to read this conversation. I am hopeful that this provides us 
a path to include the parameters needed for the Azure Database for PostgreSQL 
service (host name and connection id in the startup message). For anyone 
wondering what these are, please see the threads below.

https://www.postgresql.org/message-id/DM2PR03MB416343FC02D6E977FEF2EB191C00%40DM2PR03MB416.namprd03.prod.outlook.com
https://www.postgresql.org/message-id/DM2PR03MB4168F3C796B2965FDC4CF9991C00%40DM2PR03MB416.namprd03.prod.outlook.com
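For readers not familiar with the startup packet layout: a v3 StartupMessage is a 
length word, a 32-bit version number (major in the high 16 bits, minor in the low 
16 bits; 3.0 is 0x00030000), followed by NUL-terminated parameter name/value 
pairs. Under the proposal quoted above, a 3.11 client might send something like 
the following (illustrative only; the pg_protocol.* option name is a placeholder):

    Int32   length
    Int32   0x0003000B                        protocol version 3.11
    "user" "satya"   "database" "mydb"
    "pg_protocol.some_new_option" "on"        ignored by servers that don't know it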

> 2. If the client version is anything other than 3.0, the server responds with a 
> ServerProtocolVersion indicating the highest version it supports, and ignores 
> any pg_protocol. options not known to it as being either 
> third-party extensions or something from a future version.  If the initial 
> response to the startup message is anything other than a 
> ServerProtocolVersion message, the client should assume it's talking to a 3.0 
> server.  (To make this work, we would back-patch a change into existing 
> releases to allow any 3.x protocol version and ignore any 
> pg_protocol. options that were specified.)

We can avoid one round trip if the server accepts the startup message as is 
(including understanding all the parameters supplied by the client). In the cases 
where the server couldn't accept the startup message or requires negotiation, it 
should send a ServerProtocolVersion message that contains both the MIN and MAX 
versions it can support. Providing the min version helps the server enforce a 
minimum client protocol version, and provides a path to deprecate older versions. 
Thoughts?
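As a sketch of what I mean (the field layout here is invented for illustration, 
not taken from any patch), the negotiation response could carry both bounds:

    ServerProtocolVersion
        Int32   minimum protocol version supported, e.g. 0x00030000 (3.0)
        Int32   maximum protocol version supported, e.g. 0x0003000B (3.11)

The client then either picks a version in that range and continues, or 
disconnects if the ranges don't overlap.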


> If either the client or the server is unhappy about the age of the other, 
> then it can disconnect; e.g. if the server is configured to require the use 
> of whizzbang-2 security, and the client protocol version indicates that at 
> most whizzbang-1.9 is available, then the server can close the 
> connection with a suitable complaint; conversely, if the connection string 
> had require_whizzbang=2, and the server is too old to support that, then the 
> client can decide to bail out when it receives the ServerProtocolVersion 
> message.

Does the proposal also allow the client to negotiate the protocol version on the 
same connection rather than going through the connection setup process again? The 
state machine may not be simple with this approach, but it helps bring down the 
total time taken for the login.
The client / server can disconnect at any time if they think the negotiation 
failed.


Thanks,
Satya



Re: [HACKERS] Making server name part of the startup message

2017-06-21 Thread Satyanarayana Narlapuram
PgBouncer, for example, assumes that the database names are unique across the 
database clusters it is serving. Our front-end gateways can serve tens of 
thousands of Postgres servers spanning multiple customers and organizations, and 
enforcing unique database names is not possible for the users of the service.

> For the original idea in this thread, using something like dbname@server 
> seems a more logical choice than username@server.


We considered this option, but connecting to the database from GUI tools is then 
not very intuitive, or not possible. Also, the \c command in psql requires 
including the full cluster_name every time the user connects to a different 
database.


Thanks,
Satya
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: Thursday, June 15, 2017 9:24 AM
To: Tom Lane <t...@sss.pgh.pa.us>
Cc: Alvaro Herrera <alvhe...@2ndquadrant.com>; Satyanarayana Narlapuram 
<satyanarayana.narlapu...@microsoft.com>; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Making server name part of the startup message

On Thu, Jun 15, 2017 at 5:57 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> Tom Lane wrote:
>> This makes no sense at all.  The client is telling the server what the
>> server's name is?

> I think for instance you could have one pgbouncer instance (or whatever
> pooler) pointing to several different servers.  So the client connects
> to the pooler and indicates which of the servers to connect to.

I should think that in such cases, the end client is exactly not what
you want to be choosing which server it gets redirected to.  You'd
be wanting to base that on policies defined at the pooler.  There are
already plenty of client-supplied attributes you could use as inputs
for such policies (user name and application name, for instance).
Why do we need to incur a protocol break to add another one?

The normal one to use for pgbouncer today is, well, "database name". You can 
then have pgbouncer map different databases to different backend servers. It's 
fairly common in my experience to have things like "dbname" and "dbname-ro" 
(for example) as different database names with one mapping to the master and 
one mapping to a load-balanced set of standbys, and things like that. ISTM that 
using the database name is a good choice for that.

For the original idea in this thread, using something like dbname@server seems 
a more logical choice than username@server.

TBH, so maybe I'm misunderstanding the original issue?

--
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/


Re: [HACKERS] Making server name part of the startup message

2017-06-21 Thread Satyanarayana Narlapuram
> I should think that in such cases, the end client is exactly not what you 
> want to be choosing which server it gets redirected to.  You'd be wanting to 
> base that on policies defined at the pooler.  There are already plenty of 
> client-supplied attributes you could use as inputs for such policies (user 
> name and application name, for instance).
The pooler would be the end client for the Postgres database cluster, and any 
connection string changes are required only at the pooler. There is no change in 
the connection string format for the end clients in such cases.

> Why do we need to incur a protocol break to add another one?
This is optional and is not a protocol break; it doesn't make the cluster name 
field mandatory in the startup message. The server name is included in the 
startup message only if the client specifies the extra parameter in the 
connection string; otherwise it is not. In a proxy scenario, the end client's 
startup message doesn't need to include the server name, and it is optional for 
the proxy to include this field when sending the startup message to the server. 
For the Azure PostgreSQL service, setting this field is preferred over appending 
the cluster name to the user name.

Proposed LibPQ connection string format would be:

host=localhost port=5432 dbname=mydb connect_timeout=10 
include_cluster_name=true 

include_cluster_name is an optional parameter; setting it to true includes 
cluster_name in the startup message, and it is not included otherwise.

Thanks,
Satya

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, June 15, 2017 8:58 AM
To: Alvaro Herrera <alvhe...@2ndquadrant.com>
Cc: Satyanarayana Narlapuram <satyanarayana.narlapu...@microsoft.com>; 
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Making server name part of the startup message

Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> Tom Lane wrote:
>> This makes no sense at all.  The client is telling the server what 
>> the server's name is?

> I think for instance you could have one pgbouncer instance (or 
> whatever
> pooler) pointing to several different servers.  So the client connects 
> to the pooler and indicates which of the servers to connect to.

I should think that in such cases, the end client is exactly not what you want 
to be choosing which server it gets redirected to.  You'd be wanting to base 
that on policies defined at the pooler.  There are already plenty of 
client-supplied attributes you could use as inputs for such policies (user name 
and application name, for instance).
Why do we need to incur a protocol break to add another one?

regards, tom lane




Re: [HACKERS] Optional message to user when terminating/cancelling backend

2017-06-20 Thread Satyanarayana Narlapuram
Agree with David on the general usefulness of this channel. Not that Azure has 
this implementation or proposal today, but as a managed service this channel of 
communication is worthwhile. For example, the DBA / service can set a policy that 
if a certain session exceeds its resource usage the DBA can kill it and 
communicate why: for example, too many locks, a lot of IO activity, large open 
transactions, etc. The messages will help application developers tune their 
workloads appropriately. 

Thanks,
Satya

-Original Message-
From: David G. Johnston [mailto:david.g.johns...@gmail.com] 
Sent: Tuesday, June 20, 2017 12:44 PM
To: Alvaro Herrera <alvhe...@2ndquadrant.com>
Cc: Satyanarayana Narlapuram <satyanarayana.narlapu...@microsoft.com>; Pavel 
Stehule <pavel.steh...@gmail.com>; Daniel Gustafsson <dan...@yesql.se>; 
PostgreSQL mailing lists <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Optional message to user when terminating/cancelling 
backend

On Tue, Jun 20, 2017 at 11:54 AM, Alvaro Herrera <alvhe...@2ndquadrant.com> 
wrote:
> Satyanarayana Narlapuram wrote:

> Unless you have a lot of users running psql manually, I don't see how 
> this is actually very useful or actionable.  What would the user do 
> with the information?  Hopefully your users already trust that you'd 
> keep the downtime to the minimum possible.
>

Why wouldn't this be useful in application logs?  Spurious dropped connections 
during application execution would be alarming.  Seeing a message from the DBA 
when looking into those would be a painless and quick way to alleviate stress.

pg_cancel_backend(, 'please try not to leave sessions in an "idle in 
transaction" state...') would also seem like a useful message to communicate; 
to user or application.  Sure, some of this can, and maybe would also need to, 
be done out-of-band but this communication channel seems worthy enough to at 
least evaluate the provided implementation.

David J.



Re: [HACKERS] Optional message to user when terminating/cancelling backend

2017-06-19 Thread Satyanarayana Narlapuram
+1.

This really helps the Azure PostgreSQL service as well. When we are doing 
upgrades to the service, instead of abruptly terminating the sessions we can 
provide this message.

Thanks,
Satya

From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Pavel Stehule
Sent: Monday, June 19, 2017 11:41 AM
To: Daniel Gustafsson 
Cc: PostgreSQL mailing lists 
Subject: Re: [HACKERS] Optional message to user when terminating/cancelling 
backend



2017-06-19 20:24 GMT+02:00 Daniel Gustafsson 
>:
When terminating, or cancelling, a backend it’s currently not possible to let
the signalled session know *why* it was dropped.  This has nagged me in the
past and now it happened to come up again, so I took a stab at this.  The
attached patch implements the ability to pass an optional text message to the
signalled session which is included in the error message:

  SELECT pg_terminate_backend( [, message]);
  SELECT pg_cancel_backend( [, message]);

Right now the message is simply appended on the error message, not sure if
errdetail or errhint would be better? Calling:

  select pg_terminate_backend(, 'server rebooting');

..leads to:

  FATAL:  terminating connection due to administrator command: "server 
rebooting"

Omitting the message invokes the command just like today.

The message is stored in a new shmem area which is checked when the session is
aborted.  To keep things simple a small buffer is kept per backend for the
message.  If deemed too costly, keeping a central buffer from which slabs are
allocated can be done (but seemed rather complicated for little gain compared
to the quite moderate memory spend.)

cheers ./daniel

+1

very good idea

Pavel







Re: [HACKERS] Making server name part of the startup message

2017-06-19 Thread Satyanarayana Narlapuram


-Original Message-
From: Andres Freund [mailto:and...@anarazel.de] 
Sent: Friday, June 16, 2017 10:48 AM
To: Tom Lane <t...@sss.pgh.pa.us>
Cc: Satyanarayana Narlapuram <satyanarayana.narlapu...@microsoft.com>; 
pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Making server name part of the startup message

On 2017-06-15 09:43:13 -0400, Tom Lane wrote:
> Satyanarayana Narlapuram <satyanarayana.narlapu...@microsoft.com> writes:
> > As a cloud service, Azure Database for PostgreSQL uses a gateway proxy to 
> > route connections to a node hosting the actual server. To do that, the 
> > proxy needs to know the name of the server it tries to locate. As a 
> > work-around we currently overload the username parameter to pass in the 
> > server name using username@servername convention. It is purely a convention 
> > that our customers need to follow and understand. We would like to extend 
> > the PgSQL connection protocol to add an optional parameter for the server 
> > name to help with this scenario.
> 
> We don't actually have any concept of a server name at the moment, and 
> it isn't very clear what introducing that concept would buy.
> Please explain.

cluster_name could be what's meant?

- Andres

Andres, thank you! It is the database cluster name, as you mentioned.




[HACKERS] Adding connection id in the startup message

2017-06-15 Thread Satyanarayana Narlapuram
As a cloud service, Azure Database for PostgreSQL uses a gateway proxy to route 
connections to a node hosting the actual server. Potentially there could be 
multiple hops between the client and the server (for example: the client, an 
optional proxy at the client like pgbouncer for connection pooling, the Azure 
gateway proxy, and the backend server). For various reasons (client firewall 
rules, network issues, etc.), the connection can be dropped before it is fully 
authenticated at one of these hops, and it becomes extremely difficult to say 
where and why the connection was dropped.
The proposal is to tweak the connectivity wire protocol and add a connection id 
(GUID) field in the startup message. We can trace the connection using this GUID 
and investigate further where the connection failed.
The client adds a connection id to the startup message and sends it to the server 
it is trying to connect to. The proxy logs the connection id in its logs and 
passes it on to the server. The server logs the connection id in the server log 
and sets it in a GUC variable (ConnectionId).

When an attempt to connect to the server fails, the connection-failed message 
must include the connection id. This id can be used to trace the connection end 
to end.
Customers can provide this id to the support team, along with the server 
information, to investigate connectivity issues to the server.

This can be an optional field driven by the connection parameters for psql 
(-C or --include-clientId).
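For example (illustrative only; the switch comes from the proposal above, and the 
parameter name and GUID value are made up):

    psql "host=myserver.example.com dbname=mydb user=satya" --include-clientId

    StartupMessage parameters sent by the client then include:
        user           satya
        database       mydb
        connection_id  7f6b1f9e-3c2a-4d55-9f1e-8a2b6c4d9e10

Each hop (client-side proxy, gateway, server) logs the same GUID, so a dropped 
connection can be traced end to end.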

P.S: I am looking for initial feedback on this idea and can provide more design 
details if needed.

Thanks,
Satya



[HACKERS] Making server name part of the startup message

2017-06-15 Thread Satyanarayana Narlapuram
As a cloud service, Azure Database for PostgreSQL uses a gateway proxy to route 
connections to a node hosting the actual server. To do that, the proxy needs to 
know the name of the server it tries to locate. As a work-around we currently 
overload the username parameter to pass in the server name using 
username@servername convention. It is purely a convention that our customers 
need to follow and understand. We would like to extend the PgSQL connection 
protocol to add an optional parameter for the server name to help with this 
scenario.

Proposed changes:

Change the Postgres wire protocol to include the server name in the startup 
message. This can be an optional field driven by the connection parameters for 
psql (-N, --servername).
We need this extra parameter for backward compatibility.
Make the PostgreSQL server aware of the new field and accept a startup message 
containing it. Though the server itself doesn't need this field, this change 
helps make the server name included in the startup message by default in the 
future.
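As a sketch of how this would look from the client side (the -N switch and the 
servername parameter are the proposed additions; all names are placeholders), 
instead of overloading the user name:

    Today:     psql "host=gateway.example.com dbname=mydb user=satya@myserver"
    Proposed:  psql "host=gateway.example.com dbname=mydb user=satya" -N myserver

    With the proposal, the StartupMessage parameters include:
        user=satya   database=mydb   servername=myserver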

P.S: I would like to get some initial feedback on this idea and will provide 
more design details if required. Any feedback in this regard is really 
appreciated.

Thanks,
Satya