Richard Huxton wrote:
Gauri Kanekar wrote:
Hi,
Can anybody help me get the following info about all the tables in a
database?
1. Have you read up on the information schema and system catalogues?
http://www.postgresql.org/docs/8.2/static/catalogs.html
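As a starting point, a query along these lines against the system catalogs lists each user table with its estimated row count and on-disk size (pg_total_relation_size() and pg_size_pretty() exist from 8.1 on):

```sql
-- List user tables with estimated row count and total on-disk size
SELECT n.nspname AS schema,
       c.relname AS table,
       c.reltuples::bigint AS est_rows,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC;
```

Note that reltuples is only an estimate, refreshed by VACUUM and ANALYZE.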
Christian Rengstl wrote:
My OS is Windows 2003 with 4GB Ram and Xeon Duo with 3.2 GHz;
shared_buffers is set to 32MB (as I read it should be fairly low on
Windows) and work_mem is set to 2500MB, but nevertheless the query takes
about 38 seconds to finish. The table table1 contains approx. 3
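One caveat worth flagging here: work_mem is allocated per sort or hash operation, per backend, so 2500MB can exhaust RAM under concurrency. A hedged sketch of more conventional 8.2-era settings on Windows (illustrative values, not a recommendation):

```
# postgresql.conf sketch (illustrative, not tuned for this workload)
shared_buffers = 32MB     # commonly kept low on Windows
work_mem = 16MB           # per sort/hash operation, per backend
```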
Dave North wrote:
[...]
I'd suggest re-tuning as follows:
1) Increase shared_buffers to 10,000, test. Things should be
a bit faster.
2) Increase checkpoint_segments to 30, test. What you want
to watch for here is whether there are periods where the server
seems to freeze for a couple of
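The two tuning steps above can be sketched as a postgresql.conf fragment (pre-9.5 settings; checkpoint_segments was later replaced by max_wal_size):

```
# postgresql.conf sketch for the steps above
shared_buffers = 10000        # in 8 KB pages here, i.e. roughly 80 MB
checkpoint_segments = 30      # WAL segments between checkpoints
```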
Greg Smith wrote:
[...]
- When, or in what case is a new clog file produced?
Every 32K transactions.
Are you sure about this?
My clog files get up to 262144 bytes, which means 1M transactions'
status: 262144 bytes are 2 Mbits (megabits), so if a status is 2 bits, it
holds 1M
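The arithmetic behind that claim can be checked directly: at 2 bits per transaction status, a 262144-byte clog segment covers about one million transactions.

```python
# Capacity of one clog segment at 2 status bits per transaction
SEGMENT_BYTES = 262144
BITS_PER_STATUS = 2

total_bits = SEGMENT_BYTES * 8            # 2,097,152 bits (2 Mbits)
statuses = total_bits // BITS_PER_STATUS  # transactions per segment

print(statuses)  # 1048576, i.e. 1M transactions
```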
Jessica Richard wrote:
For a short test, I would like to see what queries are running
and how long each of them takes, by reconfiguring postgresql.conf at
the server level.
log_statement = 'all' is giving me the query statements, but I don't
know where I can turn timing on.
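Statement text and timing can be logged together with log_min_duration_statement; a sketch:

```
# postgresql.conf sketch: log every statement with its duration
log_min_duration_statement = 0   # 0 = log all statements with timing
# log_statement = 'all'          # then redundant: logs text but no duration
```

A positive value logs only statements running at least that many milliseconds, which is usually what you want outside a short test.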
Ivan Voras wrote:
Jesper Krogh wrote:
[...]
It worked when I tested it, but I may just have been darn lucky.
No, it should be ok - I just didn't catch up with the times :) At least
that's my interpretation of this paragraph in documentation:
Some backup tools that you might wish to use
Laszlo Nagy wrote:
We have several tables where the index size is much bigger than the
table size.
Example:
select count(*) from product_price -- 2234244
Table size: 400 MB
Index size: 600 MB
After executing REINDEX TABLE product_price, the index size was reduced to 269MB.
I believe
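The before/after figures can be reproduced with the built-in size functions; a sketch using the example's table name (pg_indexes_size() exists from 9.0 on; on older releases sum pg_relation_size() over the indexes):

```sql
-- Compare heap size with total index size for one table
SELECT pg_size_pretty(pg_relation_size('product_price')) AS table_size,
       pg_size_pretty(pg_indexes_size('product_price'))  AS index_size;

-- REINDEX rebuilds the indexes from scratch, dropping accumulated bloat
REINDEX TABLE product_price;
```

Note that REINDEX takes an exclusive lock on the table for the duration.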
On 02/04/2010 22:10, Campbell, Lance wrote:
Greg,
Thanks for your help.
1) How does the number of buffers provided by pg_buffercache compare to
memory (buffers * X = Y meg)?
1 buffer is 8 KB.
2) Is there a way to tell how many total buffers I have available/max?
With pg_buffercache,
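With the pg_buffercache module installed, the view exposes one row per shared buffer, so both totals fall out of a single count; a sketch:

```sql
-- One pg_buffercache row per buffer; 8192 bytes each
SELECT count(*)                        AS total_buffers,
       pg_size_pretty(count(*) * 8192) AS shared_buffers_size
FROM pg_buffercache;
```

The count matches shared_buffers, since the view always covers the whole cache whether or not a buffer is in use.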
On 18/08/2010 17:23, Thom Brown wrote:
On 18 August 2010 17:06, Justin Graf jus...@magwerks.com wrote:
On 8/18/2010 9:15 AM, Clemens Eisserer wrote:
Hi,
they are generated automatically.
Thanks depesz!
The reason why I asked was because pgAdmin doesn't display the
automatically
On 16/09/2010 20:39, Josh Berkus wrote:
It's been pure nonsense in this thread. Please show an example of
what's not working.
1) Init a PostgreSQL 8.3 cluster with autovacuum disabled.
2) Load a backup of a database into that PostgreSQL.
3) Check pg_stat_user_tables. n_live_tup for all
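Step 3 can be done with a query like the following; n_live_tup stays at zero until ANALYZE (or autovacuum, disabled here) has run:

```sql
SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;
```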
On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
Hi,
I have a delete query taking 7.2GB of RAM (and counting) but I do not
understand why so much memory is necessary. The server has 12GB, and
I'm afraid it'll go into swap. Using postgres 8.3.14.
I'm purging some old data from table
Hi,
On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:
[...]
We are running a PostgreSQL 8.4 database, with two tables containing a
lot (> 1 million) of moderately small rows. It contains some btree indexes,
and one of the two tables contains a GIN full-text index.
We noticed that the
On Sat, 2011-07-09 at 10:43 +0200, Gael Le Mignot wrote:
Hello Guillaume!
Sat, 09 Jul 2011 10:33:03 +0200, you wrote:
Hi,
On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:
[...]
We are running a PostgreSQL 8.4 database, with two tables containing a
lot (> 1 million)
On Sat, 2011-07-09 at 11:00 +0200, Gael Le Mignot wrote:
Hello Guillaume!
Sat, 09 Jul 2011 10:53:14 +0200, you wrote:
I don't quite understand how you can get up to 1GB used by your process.
According to your configuration, and unless I'm wrong, it shouldn't take
more than 40MB.
On Wed, 2011-08-24 at 13:05 +0530, Venkat Balaji wrote:
Hello Everyone,
I am working on an alert script to track the number of connections, with
their host IPs, to the Postgres cluster.
1. I need all the host IPs making a connection to the Postgres cluster (even
for a fraction of a second).
You
On Wed, 2011-08-24 at 16:51 +0530, Venkat Balaji wrote:
But, the information vanishes if the application logs off.
That's why you need a tool to track this.
I am looking for an alternative to track the total number of connections
with the host IPs through a cron job.
If you only want
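A cron-driven approach can simply sample pg_stat_activity, though it will miss any connection shorter than the sampling interval; a sketch:

```sql
-- Snapshot of client IPs currently connected, for periodic collection
SELECT client_addr, count(*) AS connections
FROM pg_stat_activity
WHERE client_addr IS NOT NULL   -- NULL means a local Unix-socket connection
GROUP BY client_addr
ORDER BY connections DESC;
```

Catching every connection, even sub-second ones, requires log_connections = on and parsing the server log instead.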
On Wed, 2011-09-21 at 13:01 +0600, AI Rumman wrote:
I am using Postgresql 9.0.1.
Using the query http://wiki.postgresql.org/wiki/Show_database_bloat, I got
the following result for a table:
-[ RECORD 1 ]+---
current_database | crm
On Tue, 2012-12-04 at 15:42 -0800, Jeff Janes wrote:
On Tue, Dec 4, 2012 at 10:03 AM, postgre...@foo.me.uk wrote:
[...]
Is there some nice bit of literature somewhere that explains what sort of
costs are associated with the different types of lookup?
I've heard good things about Greg
On Mon, 2013-12-16 at 11:42 +, Yuri Levinsky wrote:
Dear Depesz,
This is a very problematic solution: I would have to change my whole code to
put an appropriate comment with the query text before any query execution. In
addition I would like to know the current execution plan, which seems to be
Hi,
On 5 Nov 2014 22:34, Tory M Blue tmb...@gmail.com wrote:
log_temp_files (integer)
Controls logging of temporary file names and sizes. Temporary files can
be created for sorts, hashes, and temporary query results. A log entry is
made for each temporary file when it is deleted. A value
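The documentation quoted above maps to a one-line setting; 0 logs every temporary file, while a positive value logs only files at least that large:

```
# postgresql.conf sketch
log_temp_files = 0       # log name and size of every temporary file
#log_temp_files = 10MB   # or only files of at least 10MB
```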
on that table?
It usually is a good idea to parallelize index creation. That's one of the
good things that pg_restore does since the 8.4 release.
Nicolas PARIS
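The parallelism mentioned is driven by pg_restore's -j option, which runs several loading and index-creation jobs at once (custom or directory format dumps only); a sketch with a hypothetical dump file:

```
# Restore with 4 parallel jobs; mydb.dump is a hypothetical
# custom-format dump produced by pg_dump -Fc
pg_restore -j 4 -d mydb mydb.dump
```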
2015-03-07 12:56 GMT+01:00 Guillaume Lelarge guilla...@lelarge.info:
On 7 Mar 2015 11:32, Nicolas Paris nipari...@gmail.com wrote:
Hello,
I wonder if the process of index creation can benefit from other indexes.
It cannot.
E.g.: will creating a partial index with a predicate based on a boolean
column use a hypothetical index on that boolean column, or always
On 15 Jul 2015 11:16 PM, David G. Johnston david.g.johns...@gmail.com
wrote:
On Wed, Jul 15, 2015 at 4:53 PM, Michael Nolan htf...@gmail.com wrote:
On Wed, Jul 15, 2015 at 3:16 PM, Robert DiFalco robert.difa...@gmail.com
wrote:
Thanks David, my example was a big simplification, but I
2015-07-09 22:34 GMT+02:00 Nicolas Paris nipari...@gmail.com:
Hello,
My 9.4 database is used as a data warehouse. I can't change the queries
generated.
first index: INDEX COL (A,B,C,D,E)
In case of a query based on COL A, the query planner sometimes goes to a seq
scan instead of using the
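A query filtering only on A can use the five-column index because A is its leading column, but a dedicated single-column index is far smaller and often cheaper to scan; a sketch using the thread's placeholder names:

```sql
-- Smaller dedicated index for queries filtering on column A only
-- (table and column names are the thread's placeholders)
CREATE INDEX tbl_a_idx ON tbl (a);
```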
On 26 Sept 2015 6:26 PM, "Adam Scott" wrote:
>
> How do we measure queries per second (QPS), not transactions per second,
in PostgreSQL without turning on full logging which has a performance
penalty and can soak up lots of disk space?
>
The only way I can think of is
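One option (not spelled out in the truncated answer) is sampling the pg_stat_statements extension and differencing its calls counter between two samples; a sketch, assuming the extension is installed:

```sql
-- Total tracked query executions so far; sample twice and divide
-- the delta by the interval in seconds to get QPS
SELECT sum(calls) AS total_calls
FROM pg_stat_statements;
```

This avoids full statement logging, at the cost of only counting queries that pg_stat_statements tracks.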
On 27 Sept 2015 8:02 AM, "Guillaume Lelarge" <guilla...@lelarge.info>
wrote:
>
> On 26 Sept 2015 6:26 PM, "Adam Scott" <adam.c.sc...@gmail.com> wrote:
> >
> > How do we measure queries per second (QPS), not transactions per
second, in Po