Message written by Tom Lane on 3 Oct 2011, at 17:12:
I'm thinking it probably sees the pkey index as cheaper because that's
highly correlated with the physical order of the table. (It would be
useful to see pg_stats.correlation for these columns.) With a sufficiently
2011/10/4 Nowak Michał michal.no...@me.com:
a9-dev= select attname, null_frac, avg_width, n_distinct, correlation from
pg_stats where tablename = 'records';
attname | null_frac | avg_width | n_distinct | correlation
Lowering random_page_cost didn't help -- I've tried values 2.0 and 1.5.
Then I tried the order by id - 1 hack Marcin Mańk proposed...
a9-dev= create index foo on records(source_id, (id - 1));
CREATE INDEX
a9-dev= explain analyze select * from records where source_id
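The EXPLAIN ANALYZE output is cut off above. For context, a minimal sketch of how such an expression index gets picked up: the planner only considers it when the ORDER BY matches the indexed expression exactly. The literal value and limit below are hypothetical, not from the thread:
-- hypothetical query shape against the index foo(source_id, (id - 1))
explain analyze
select * from records
where source_id = 'src1'          -- 'src1' is made up
order by source_id, (id - 1)      -- must match the index expression
limit 100;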
I have the following setup:
create table test(id integer, seq integer);
insert into test select generate_series(0, 100), generate_series(0, 1000);
create unique index test_idx on test(id, seq);
analyze test;
Now I try to fetch the latest 5 values per id, ordered by seq from the
table:
select
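The query text is truncated here. As a hedged guess at the intended shape, one common way to write "latest 5 values per id" against this setup, not necessarily the poster's exact SQL:
-- top 5 seq values per id via a window function
select id, seq
from (
  select id, seq,
         row_number() over (partition by id order by seq desc) as rn
  from test
) t
where rn <= 5;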
Hello Everyone,
Generally, when it comes to query performance, I check how vacuuming and
statistics collection are performed on the tables and indexes hit by the query.
Apart from the above, I check the code logic (for any bad joins) and column
statistics as well.
I got hold of two catalog
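As a concrete illustration of that vacuum/statistics check, a minimal sketch against the standard pg_stat_user_tables view (nothing assumed beyond stock PostgreSQL):
-- when were tables last vacuumed/analyzed, and how many dead tuples remain?
select relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze, n_dead_tup
from pg_stat_user_tables
order by n_dead_tup desc
limit 20;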
Hello,
Sorry. I should have put some more details in the email.
I have got a situation wherein I see that the production system is loaded with
checkpoints and at least 1000 buffers are being written for every
checkpoint.
Checkpoints occur every 3 to 4 minutes and every checkpoint takes 150
On 04.10.2011 13:50, Venkat Balaji wrote:
I have got a situation wherein I see that the production system is loaded with
checkpoints and at least 1000 buffers are being written for every
checkpoint.
1000 buffers isn't very much (at 8 kB per buffer, that's only 8 MB), so that's
not alarming in itself.
I am
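As an aside for readers following this sub-thread, the settings that govern checkpoint frequency can be inspected directly; a minimal sketch using the standard pg_settings view:
-- the knobs that determine how often checkpoints fire and how they spread
select name, setting, unit
from pg_settings
where name in ('checkpoint_segments', 'checkpoint_timeout',
               'checkpoint_completion_target', 'shared_buffers');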
Thanks, Heikki!
Regards,
VB
On Tue, Oct 4, 2011 at 4:38 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 04.10.2011 13:50, Venkat Balaji wrote:
I have got a situation wherein I see that the production system is loaded with
checkpoints and at least 1000 buffers are
Hi,
I need help understanding an issue on a production database with a select
that takes more time than expected.
1- On a development database I ran the query (select) and I can see in
pgAdmin's Explain Analyze that it uses all the indexes and primary keys defined.
The dev db has almost 10% of
On 01/10/11 01:23, Vitalii Tymchyshyn wrote:
Since you are using except and not except all, you are not looking at
arrays with duplicates.
For this case, the following function was the fastest for me:
create or replace function array_except2(anyarray,anyarray) returns
anyarray as $$
select ARRAY(
(
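The function body is truncated in the archive. One plausible shape, assuming it computes array set-difference via unnest and EXCEPT; this is a guess at the idea, not Vitalii's verbatim code:
-- hypothetical reconstruction: elements of $1 not present in $2
create or replace function array_except2(anyarray, anyarray)
returns anyarray as $$
select ARRAY(
  select unnest($1)
  except
  select unnest($2)
)
$$ language sql immutable;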
On Mon, Oct 03, 2011 at 02:48:10PM -0300, Soporte @ TEKSOL S.A. wrote:
Hi,
I need help understanding an issue on a production database with a select
that takes more time than expected.
1- On a development database I ran the query (select) and I can see in
Explain Analyze
On Mon, Oct 3, 2011 at 11:48 AM, Soporte @ TEKSOL S.A.
sopo...@teksol.com.ar wrote:
Hi,
I need help understanding an issue on a production database with a select
that takes more time than expected.
1- On a development database I ran the query (select) and I can see in
Explain Analyze
On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen
anssi.kaariai...@thl.fi wrote:
I have the following setup:
create table test(id integer, seq integer);
insert into test select generate_series(0, 100), generate_series(0, 1000);
create unique index test_idx on test(id, seq);
analyze test;
On 10/04/2011 04:27 PM, Robert Klemme wrote:
On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen
anssi.kaariai...@thl.fi wrote:
I have the following setup:
create table test(id integer, seq integer);
insert into test select generate_series(0, 100), generate_series(0, 1000);
create unique index
8 checkpoint segments is very small, try 50
2011/10/4, Venkat Balaji venkat.bal...@verse.in:
Hello,
Sorry. I should have put some more details in the email.
I have got a situation wherein I see that the production system is loaded with
checkpoints and at least 1000 buffers are being
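A brief note on acting on that advice: checkpoint_segments cannot be changed per-session; a sketch of checking it, with the thread's suggested value shown as the postgresql.conf line it would become:
-- current value; to raise it, edit postgresql.conf and reload:
--   checkpoint_segments = 50
show checkpoint_segments;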
Anssi Kääriäinen anssi.kaariai...@thl.fi writes:
I have the following setup:
create table test(id integer, seq integer);
insert into test select generate_series(0, 100), generate_series(0, 1000);
create unique index test_idx on test(id, seq);
analyze test;
Now I try
On 10/04/2011 05:36 PM, Tom Lane wrote:
The cost estimates I get are 806 for bitmap scan and sort, 2097 for
seqscan and sort, 4890 for indexscan without sort. It *can* use the
index for that query ... it just doesn't think it's a good idea. It's
probably right, too. At least, the actual
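To reproduce the kind of comparison Tom describes, one can make the planner reveal its alternatives by disabling the cheaper plan types; a minimal sketch, with the query shape assumed from the earlier test setup:
-- compare the planner's costed alternatives for the same query
begin;
set local enable_bitmapscan = off;  -- forbid the 806-cost plan
set local enable_seqscan = off;     -- forbid the 2097-cost plan
explain select * from test where id = 1 order by seq desc limit 5;
rollback;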
On 10/04/2011 05:52 PM, Robert Klemme wrote:
But then why require using the second index column in the first
place? If the data set is small then the query is likely fast if the
selection via id can use any index.
I mean the fetched dataset is not large, I didn't mean the dataset in
total
On Tue, Oct 4, 2011 at 4:06 PM, Anssi Kääriäinen
anssi.kaariai...@thl.fi wrote:
On 10/04/2011 04:27 PM, Robert Klemme wrote:
On Tue, Oct 4, 2011 at 11:39 AM, Anssi Kääriäinen
anssi.kaariai...@thl.fi wrote:
I have the following setup:
create table test(id integer, seq integer);
insert
On Tue, Oct 4, 2011 at 2:16 AM, Gavin Flower
gavinflo...@archidevsys.co.nz wrote:
On 01/10/11 01:23, Vitalii Tymchyshyn wrote:
Since you are using except and not except all, you are not looking at
arrays with duplicates.
For this case, the following function was the fastest for me:
create or replace
Hi,
On 4 October 2011 21:25, Venkat Balaji venkat.bal...@verse.in wrote:
I got hold of two catalog tables, pg_stats and pg_class.
The columns avg_width and n_distinct in pg_stats give me a lot of useful
information about column values and the size of the column.
Can someone help me know when the
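For completeness, those pg_stats columns can be pulled together like this (a minimal sketch against the standard view; the schema filter is illustrative):
-- per-column statistics: width, distinct-value estimate, null fraction
select tablename, attname, avg_width, n_distinct, null_frac
from pg_stats
where schemaname = 'public'
order by avg_width desc;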
On 10/04/2011 03:50 AM, Venkat Balaji wrote:
I had a look at the pg_stat_bgwriter as well.
Try saving it like this instead:
select now(),* from pg_stat_bgwriter;
And collect two data points, spaced a day or more apart. That gives a
lot more information about the rate at which things are
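A minimal sketch of the snapshot approach Greg describes: keep timestamped copies and diff them later. The table name is made up:
-- first snapshot creates the table
create table bgwriter_snapshot as
  select now() as captured_at, * from pg_stat_bgwriter;

-- run periodically (e.g. daily from cron) to add more data points
insert into bgwriter_snapshot
  select now(), * from pg_stat_bgwriter;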
On Tue, Oct 4, 2011 at 4:32 PM, Greg Smith g...@2ndquadrant.com wrote:
On 10/04/2011 03:50 AM, Venkat Balaji wrote:
I had a look at the pg_stat_bgwriter as well.
Try saving it like this instead:
select now(),* from pg_stat_bgwriter;
And collect two data points, spaced a day or more apart.
Nowak Michał michal.no...@me.com wrote:
Lowering random_page_cost didn't help -- I've tried values 2.0 and 1.5.
First off, I don't remember you saying how much RAM is on the
system, but be sure to set effective_cache_size to the sum of your
shared_buffers and OS cache. I've often found that
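A quick sketch of applying that advice; the size below is purely illustrative, e.g. for a machine with roughly 16 GB of RAM:
-- session-level override for testing; set it permanently in postgresql.conf
set effective_cache_size = '12GB';
show effective_cache_size;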
I was thinking of increasing checkpoint_segments to around 16 or 20.
I think 50 is a bit high.
Greg,
Sure. I will collect the info from pg_stat_bgwriter at regular intervals.
As we have so many transactions going on, I am thinking of collecting the info
every 6 or 8 hrs.
Thanks
VB
On Wed, Oct
On 10/04/2011 07:50 PM, Venkat Balaji wrote:
I was thinking of increasing checkpoint_segments to around 16 or 20.
I think 50 is a bit high.
Don't be afraid to increase that a lot. You could set it to 1000 and
that would probably turn out fine; checkpoints will still happen
every 5