Re: [HACKERS] DBD::Pg, schema support

2003-07-23 Thread Arguile
On Wed, 2003-07-23 at 18:24, Richard Schilling wrote:
 Can you give an example of how to execute that command?  I've been
 wondering about that too but haven't had time to read the documentation.

SET search_path TO foo,'$user',public;

http://developer.postgresql.org/docs/postgres/runtime-config.html
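
For example, in a psql session (a sketch, assuming a database mydb that
already has a schema named foo):

mydb=# SET search_path TO foo,'$user',public;
SET
mydb=# SHOW search_path;

SHOW confirms the new value. From DBD::Pg you'd send the same SET
statement through $dbh->do().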




Re: [HACKERS] Updating psql for features of new FE/BE protocol

2003-06-25 Thread Arguile
On Wed, 2003-06-25 at 13:49, Tom Lane wrote:
 There are a number of things that need to be done in psql before feature
 freeze.  Any comments on the following points?
 
 * We need a client-side autocommit-off implementation to substitute for
 the one removed from the server.  I am inclined to create a new psql
 backslash command:
   \autocommit on   traditional handling of transactions
   \autocommit off  force BEGIN before any user command
                    that's not already in a transaction
   \autocommit      with no arg, show current state
 An alternative to creating a new command is to define a special variable
 in psql, whereupon the above three would instead be rendered
   \set AUTOCOMMIT on
   \set AUTOCOMMIT off
   \echo :AUTOCOMMIT
 The first choice seems less verbose to me, but if anyone wants to make a
 case for the second, I'm open to it.  Note that either of these could be
 put in ~/.psqlrc if someone wants autocommit off as their default.

A case for the latter is that it closely resembles environment
variables, a well-known mechanism.

The main advantage I see -- other than the shell similarities -- is the
ability to call \set with no arguments and get a listing of all the
options. That listing is much shorter than the already overburdened \?
screen and would concentrate all psql preference settings in one
location.
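
A sketch of how that would look (variable name as in Tom's proposal):

-- in ~/.psqlrc, to run with autocommit off by default
\set AUTOCOMMIT off

-- a bare \set lists every variable, so all preferences
-- are visible in one place
\set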





Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Arguile

Bruce Momjian wrote:

 Greg Copeland wrote:
  Well, it occurred to me that if a large result set were to be identified
  before transport between a client and server, a significant amount of
  bandwidth may be saved by using a moderate level of compression.
  Especially with something like result sets, which I tend to believe
  may lend themselves well to compression.

 I should have said compressing the HTTP protocol, not FTP.

  This may be of value for users with low bandwidth connectivity to their
  servers or where bandwidth may already be at a premium.

 But don't slow links do the compression themselves, like PPP over a
 modem?

Yes, but that's packet-level compression. You'll never get close to the
results you can achieve by compressing the set as a whole.

Speaking of HTTP, it's fairly common for web servers (Apache has mod_gzip)
to gzip content before sending it to the client, which decompresses it
transparently -- especially when dealing with somewhat static content, so
it can be cached in compressed form. This can provide great bandwidth
savings.
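
You can watch that negotiation against any gzip-capable server (a sketch;
the URL is hypothetical, and curl without --compressed leaves the body
gzipped, so wc counts the on-the-wire size):

$ curl -s http://www.example.com/page.html | wc -c
$ curl -s -H 'Accept-Encoding: gzip' http://www.example.com/page.html | wc -c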

I'm sceptical of the benefit such compression would provide in this
setting, though. We're dealing with sets that would have to be compressed
every time (no caching), which might be a bit expensive on a database
server. Having it as a default-off option for psql might be nice, but I
wonder if it's worth the time, effort, and CPU cycles.
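
A cheap way to gauge what whole-set compression could buy (a sketch;
bigtable and mydb are hypothetical stand-ins):

$ psql -c 'SELECT * FROM bigtable' mydb | wc -c
$ psql -c 'SELECT * FROM bigtable' mydb | gzip | wc -c

Repetitive text output tends to compress very well, but every byte saved
is paid for in CPU on whichever side does the gzip.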


