Hi All,
I would like to write the output of the \d command on all tables in a database to an output file. There are more than 200 tables in the database. I am aware of the \o command to write the output to a file. But it will be tough to do the \d for each table manually and write the output to a
...and on Wed, Dec 15, 2004 at 06:38:22AM -0800, sarlav kumar used the keyboard:
Hi All,
I would like to write the output of the \d command on all tables in a
database to an output file. There are more than 200 tables in the database. I
am aware of \o command to write the output to a
Geoffrey [EMAIL PROTECTED] writes:
sarlav kumar wrote:
I would like to write the output of the \d command on all tables in a
database to an output file.
What is the OS? On any UNIX variant you can do:
echo '\d' | psql > outputfile
Or use \o:
regression=# \o zzz1
regression=# \d
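To cover all 200-plus tables in one pass, a loop along these lines is a minimal sketch (not from the thread): it asks the catalog for the table names and runs \d on each, collecting everything in one file. The function and the database/output names are placeholders.

```shell
# Sketch: describe every table in the public schema and write the
# combined output to one file. "$1" is the database, "$2" the output file.
describe_all_tables() {
  db=$1
  out=$2
  # -A = unaligned, -t = tuples only: one bare table name per line.
  psql -At -c "SELECT tablename FROM pg_tables WHERE schemaname = 'public'" "$db" |
  while read -r t; do
    psql -c "\\d $t" "$db"
  done > "$out"
}

# Usage: describe_all_tables mydb tables.out
```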
sarlav kumar wrote:
Hi all,
Can someone please help me optimize this query? Is there a better way to
write this query? I am generating a report of transactions ordered by
time and with details of the sender and receiver etc.
SELECT distinct a.time::date
On Wed, 2004-12-15 at 11:50 -0500, Tom Lane wrote:
Geoffrey [EMAIL PROTECTED] writes:
sarlav kumar wrote:
I would like to write the output of the \d command on all tables in a
database to an output file.
What is the OS? On any UNIX variant you can do:
echo '\d' | psql > outputfile
Stacy,
Thanks again for the reply. So it sounds like the answer to my original
question is that it's expected that the pseudo-partitioning would introduce
a fairly significant amount of overhead. Correct?
Correct. For that matter, Oracle table partitioning introduces significant
Greg,
Well Oracle has lots of partitioning intelligence pushed up to the planner
to avoid overhead.
If you have a query with something like WHERE date = '2004-01-01' and
date is your partition key (even if it's a range) then Oracle will figure
out which partition it will need at planning
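For context, the "pseudo-partitioning" being discussed was typically built from table inheritance plus CHECK constraints on the partition key; every query against the parent then goes through an Append over all children unless the planner can prove some children irrelevant. A minimal sketch (table and column names hypothetical, not from the thread):

```sql
-- Hypothetical pseudo-partitioned table: a parent plus one child per
-- month, each child carrying a CHECK constraint on the partition key.
CREATE TABLE sales (sale_date date, amount numeric);
CREATE TABLE sales_2004_01 (
    CHECK (sale_date >= '2004-01-01' AND sale_date < '2004-02-01')
) INHERITS (sales);
CREATE TABLE sales_2004_02 (
    CHECK (sale_date >= '2004-02-01' AND sale_date < '2004-03-01')
) INHERITS (sales);

-- Scanning the parent visits every child via an Append node; a planner
-- that understands the CHECK constraints could skip children whose
-- constraint contradicts the WHERE clause, as Oracle does at plan time.
SELECT sum(amount) FROM sales WHERE sale_date = '2004-01-15';
```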
Josh Berkus [EMAIL PROTECTED] writes:
Stacy,
Thanks again for the reply. So it sounds like the answer to my original
question is that it's expected that the pseudo-partitioning would introduce
a fairly significant amount of overhead. Correct?
Correct. For that matter, Oracle table
Josh Berkus [EMAIL PROTECTED] writes:
But I'm a bit puzzled. Why would Append have any significant cost? It's
just taking the tuples from one plan node and returning them until they run
out, then taking the tuples from another plan node. It should have no i/o
cost and hardly any cpu
Theo Galanakis wrote:
I have written a program that parses a syslog file, reading all the Postgres
transactions. I would like to know if there is a way for Postgres to also log
the specific database the SQL statement originated from.
The only options available in the postgresql.conf are:
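For what it's worth, later PostgreSQL releases address this with the log_line_prefix setting, whose %d escape emits the database name on each log line. A sketch of the relevant postgresql.conf lines, assuming a server version that supports them:

```
log_line_prefix = '%d %u '   # %d = database name, %u = user name
log_statement = 'all'        # log every SQL statement
```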
Greg Stark [EMAIL PROTECTED] writes:
But I'm a bit puzzled. Why would Append have any significant cost? It's just
taking the tuples from one plan node and returning them until they run out,
then taking the tuples from another plan node. It should have no i/o cost and
hardly any cpu cost. Where
sarlav kumar wrote:
Hi All,
I would like to write the output of the \d command on all tables in a
database to an output file. There are more than 200 tables in the
database. I am aware of the \o command to write the output to a file.
But, it will be tough to do the \d for each table manually and write
Title: Identifying the database in a Postgres log file.
I have written a program that parses a syslog file, reading all the Postgres transactions. I would like to know if there is a way for Postgres to also log the specific database the SQL statement originated from.
The only options
The world rejoiced as [EMAIL PROTECTED] (Josh Berkus) wrote:
Hasnul,
My question is if there is a query design that would query multiple
server simultaneously.. would that improve the performance?
Not without vast amounts of infrastructure coding. You're
basically talking about what