Peter Eisentraut <pete...@gmx.net> writes:
> On Mon, 2012-06-11 at 18:07 -0400, Tom Lane wrote:
>> Peter Eisentraut <pete...@gmx.net> writes:
>> So you do need to create M*N sockets.
> I don't really see a problem with that.

I do: first, it's a lotta sockets, and second, it's not real hard to foresee
On Thu, Jun 14, 2012 at 12:18 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Peter Eisentraut <pete...@gmx.net> writes:
>> On Mon, 2012-06-11 at 18:07 -0400, Tom Lane wrote:
>>> Peter Eisentraut <pete...@gmx.net> writes:
>>> So you do need to create M*N sockets.
>> I don't really see a problem with that.
> I do:
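The M*N arithmetic above can be sketched as follows. The addresses and ports are made-up example values; the loopback binding at the end just demonstrates that each (address, port) pair really consumes its own descriptor.

```python
import itertools
import socket

# Sketch of the combinatorics: with M listen addresses and N ports,
# one listening socket is needed per (address, port) pair.
addresses = ["192.168.0.10", "10.0.0.10", "fe80::1"]  # M = 3 (made-up)
ports = [5432, 5433]                                  # N = 2 (made-up)

pairs = list(itertools.product(addresses, ports))
print(len(pairs))  # 6 sockets, i.e. M * N

# Each bind really does consume a descriptor; shown here on the
# loopback address with ephemeral ports (port 0) so it runs anywhere.
socks = []
for _ in range(2):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))  # kernel assigns a distinct free port
    s.listen()
    socks.append(s)
assert len({s.getsockname()[1] for s in socks}) == 2
for s in socks:
    s.close()
```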
Hi,

There was already some discussion about compressing libpq data [1][2][3].
Recently, I faced a scenario that would become less problematic if we had
compression support. The scenario is frequent data load (aka COPY) over
slow/unstable links. It should be executed in a few hundreds of
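A rough feel for how much a compressible COPY stream shrinks can be had with zlib; the row content below is made up, but bulk-load data with repeated labels and timestamps behaves similarly.

```python
import zlib

# Made-up sample of COPY text-format rows (tab-separated columns);
# repetitive bulk-load data compresses very well.
row = "12345\tsome repeated label\t2012-06-14 00:00:00\n"
payload = (row * 10000).encode()

compressed = zlib.compress(payload, level=6)
print(len(payload), "->", len(compressed), "bytes")
assert zlib.decompress(compressed) == payload  # lossless round trip
```

On a slow link, transfer time scales with the compressed size, at the cost of some CPU on both ends.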
Robert Haas <robertmh...@gmail.com> writes:
> Maybe:
>
> listen_addresses = { host | host:port | * | *:port } [, ...]
> unix_socket_directory = { directory | directory:port } [, ...]

...except that colon is a valid character in a directory name.  Not
sure what to do about that.

Do we need to do
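The ambiguity can be seen with a naive split on the last colon; the helper below is hypothetical (not the actual parser), but it shows that a numeric final path component is indistinguishable from a port.

```python
# Hypothetical helper showing why a trailing ":port" is ambiguous:
# splitting on the last colon misfires when the directory name itself
# contains a colon followed by digits (bare IPv6 literals break it too).
def split_host_port(item, default_port=5432):
    head, sep, tail = item.rpartition(":")
    if sep and tail.isdigit():
        return head, int(tail)
    return item, default_port

print(split_host_port("example.com:5433"))  # ('example.com', 5433)
print(split_host_port("*"))                 # ('*', 5432)
print(split_host_port("/tmp/my:dir"))       # ('/tmp/my:dir', 5432) -- ok
print(split_host_port("/tmp/run:123"))      # ('/tmp/run', 123) -- wrong!
```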
Euler Taveira <eu...@timbira.com> writes:
> There was already some discussion about compressing libpq data [1][2][3].
> Recently, I faced a scenario that would become less problematic if we had
> compression support. The scenario is frequent data load (aka COPY) over
> slow/unstable links. It