Thanks for your reply, but I still don't understand why the
statistics log:
! 0/0 [0/0] filesystem blocks in/out
It tells me there is no hard disk access. I'm sure there is: I
heard my HDD, and saw activity using gkrellm (even with my first
query, a big select *)?
2004-10-08 10:40:05
The really tricky part is that a DISTINCT ON needs to know about a first()
aggregate. And to make optimal use of indexes, a last() aggregate as well.
And ideally the planner/executor needs to know something is magic about
first()/last() (and potentially min()/max() at some point) and that
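For context, first() and last() are not built-in PostgreSQL aggregates; the thread assumes user-defined ones. A sketch of how they could be defined with CREATE AGGREGATE (the snippet and names below are illustrative, not from the original thread):

```sql
-- Hypothetical first()/last() aggregates. Because the transition
-- functions are STRICT, the first non-null input becomes the state for
-- first(), and each new input replaces the state for last().
CREATE FUNCTION first_agg(anyelement, anyelement) RETURNS anyelement
  AS 'SELECT $1' LANGUAGE sql IMMUTABLE STRICT;
CREATE AGGREGATE first (basetype = anyelement, sfunc = first_agg,
                        stype = anyelement);

CREATE FUNCTION last_agg(anyelement, anyelement) RETURNS anyelement
  AS 'SELECT $2' LANGUAGE sql IMMUTABLE STRICT;
CREATE AGGREGATE last (basetype = anyelement, sfunc = last_agg,
                       stype = anyelement);

-- With these, SELECT DISTINCT ON (k) k, v FROM t ORDER BY k, ts
-- could in principle be spelled as a GROUP BY:
--   SELECT k, first(v) FROM (SELECT * FROM t ORDER BY k, ts) s GROUP BY k;
```

Note that the GROUP BY form only works if the aggregate sees its input in the subquery's sort order, which is exactly the "magic" the planner/executor would need to know about.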
I have a large table with a column:
ids integer[] not null
Most of these entries (over 95%) contain only one array element; some
can contain up to 10 array elements. A seqscan is naturally slow. GIST
on int_array works nicely, but GIST isn't exactly a speed demon when it
comes to updating.
So I
disclaimer: brainless proposition

(SELECT * FROM table WHERE (icount(ids) = 1 AND ids[1] = 33))
UNION ALL
(SELECT * FROM table WHERE (icount(ids) > 1 AND ids && '{33}'));
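One way to let each UNION ALL branch use its own index is a partial expression index for the single-element case (a sketch under the same assumed schema; the index names are made up):

```sql
-- Serves the icount(ids) = 1 AND ids[1] = 33 branch: a btree over the
-- sole element, restricted to single-element rows. icount() comes from
-- contrib/intarray and is immutable, so it is legal in the predicate.
CREATE INDEX mytable_single_id ON mytable ((ids[1]))
  WHERE icount(ids) = 1;

-- The icount(ids) > 1 branch can keep using the GiST intarray index,
-- which could likewise be made partial to shrink it:
CREATE INDEX mytable_multi_ids ON mytable USING gist (ids gist__int_ops)
  WHERE icount(ids) > 1;
```

The planner can match each branch's WHERE clause against the corresponding index predicate, so neither branch needs a seqscan.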
Hashing is at least as fast, if not faster.
regards, tom lane
Probably quite a bit faster if the dataset is not huge...
UniqueSort would be useful for GROUP BY x ORDER BY x though
In article [EMAIL PROTECTED],
Pierre-Frédéric Caillaud [EMAIL PROTECTED] writes:

disclaimer: brainless proposition

(SELECT * FROM table WHERE (icount(ids) = 1 AND ids[1] = 33))
UNION ALL
(SELECT * FROM table WHERE (icount(ids) > 1 AND ids && '{33}'));
I guess my
"Alban Médici (NetCentrex)" [EMAIL PROTECTED] writes:

Thanks for your reply, but I still don't understand why the statistics
log:
! 0/0 [0/0] filesystem blocks in/out
It tells me there is no hard disk access; I'm sure there is,
Complain to your friendly
Dawid Kuroczko [EMAIL PROTECTED] writes:
But when I phrase the query:

SELECT * FROM table WHERE (icount(ids) = 1 AND ids[1] = 33) OR
(icount(ids) > 1 AND ids && '{33}');

the planner insists on using a seqscan, even with enable_seqscan = off.
The OR-index-scan mechanism isn't currently smart enough
[EMAIL PROTECTED] writes:
Unfortunately, yes, that's true - that is for correctness, not an
optimization decision. Outer joins constrain you on both join order AND
on join type. Nested loops and hash joins avoid touching all rows in
the right-hand table, which is exactly what you don't want
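As an illustration of that constraint (tables invented for the example): when the WHERE clause discards the NULL-extended rows anyway, the outer join can be rewritten as a plain join, which hands the planner back its freedom over join order and join type:

```sql
-- The filter on c.name rejects NULL-extended rows, so this LEFT JOIN
-- behaves exactly like an inner join...
SELECT o.id, c.name
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id
WHERE c.name = 'ACME';

-- ...and may as well be written as one, removing the join-order and
-- join-type constraints the outer join imposes:
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.name = 'ACME';
```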
Hi,
I have a problem with the query below. When I do EXPLAIN on it on my
live database, it doesn't use any of the indexes specified on the
tables and does a seq scan on the table, which has 400k records. But if
I copy the same table onto a different database on a different machine,
it uses
Tom Lane wrote:
Pallav Kalva [EMAIL PROTECTED] writes:
I have a problem with the query below. When I do EXPLAIN on it on my
live database, it doesn't use any of the indexes specified on the
tables and does a seq scan on the table, which has 400k records. But if
I copy the same table onto
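A common first check for "seqscan on the live table, index scan on the copy" is stale planner statistics on the live database. A generic sketch (the poster's actual table and query are not shown, so the names are stand-ins):

```sql
-- Refresh the statistics the planner uses for row estimates, then
-- compare the actual plan and timings against the estimates:
ANALYZE some_table;          -- stand-in for the 400k-row table
EXPLAIN ANALYZE
  SELECT * FROM some_table;  -- re-run the actual problem query here
```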
Oops, forgot to mention:
PostgreSQL 8.0 beta 2 on Windows.
Thanks,
Gary.
On 8 Oct 2004 at 20:32, Gary Doades wrote:
I'm looking at one of my standard queries and have encountered some
strange performance problems.
The query below is to search for vacant staff member date/time slots
On 8 Oct 2004 at 16:04, Tom Lane wrote:
Gary Doades [EMAIL PROTECTED] writes:
If I remove the redundant clauses, the planner now estimates 1000 rows
returned from the table, not unreasonable since it has no statistics.
But *why*, in that case, with *more* estimated rows does it
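The likely mechanism, sketched with an invented query: the planner treats each WHERE clause as independent and multiplies their selectivities, so a redundant duplicate of a clause gets counted twice and shrinks the row estimate:

```sql
-- With no statistics, an inequality clause is guessed to pass about 1/3
-- of the rows. One copy estimates rows/3; adding the redundant copy
-- estimates roughly rows/9, even though it filters nothing extra.
EXPLAIN SELECT * FROM staff_slots
WHERE slot_start >= '2004-10-08'
  AND slot_start >= '2004-10-08';  -- redundant clause, lower estimate
```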
Folks,
I'm hoping that some of you can shed some light on this.
I've been trying to peg the sweet spot for shared memory using OSDL's
equipment. With Jan's new ARC patch, I was expecting that the desired
amount of shared_buffers would be greatly increased. This has not turned out to
be the
I have an idea that makes some assumptions about internals that I think
are correct.
When you have a huge number of buffers in a list that has to be
traversed to look for things in cache, e.g. 100k, you will generate an
almost equivalent number of cache line misses on the processor to jump
Josh Berkus [EMAIL PROTECTED] writes:
Here's a top-level summary:
shared_buffers  % RAM  NOTPM20*
 1000            0.2%  1287
23000            5%    1507
46000           10%    1481
69000           15%    1382
92000
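Read against the table, the sweet spot sits near 5-10% of RAM. In postgresql.conf terms (a sketch: the ~4 GB machine size is inferred from the percentages, and 8 KB buffer pages are assumed):

```
# 23000 buffers x 8 KB = ~180 MB, i.e. about 5% of a 4 GB box
shared_buffers = 23000
```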
J. Andrew Rogers [EMAIL PROTECTED] writes:
As I understand it (and I haven't looked so I could be wrong), the
buffer cache is searched by traversing it sequentially.
You really should look first.
The main-line code paths use hashed lookups. There are some cases that
do linear searches through
On Fri, Oct 08, 2004 at 06:32:32PM -0400, Tom Lane wrote:
This does raise a question for Josh though, which is where's the
oprofile results? If we do have major problems at the level of cache
misses then oprofile would be able to prove it.
Or cachegrind. I've found it to be really effective
Tom,
BTW, what is the actual size of the test database (disk footprint wise)
and how much of that do you think is heavily accessed during the run?
It's possible that the test conditions are such that adjusting
shared_buffers isn't going to mean anything anyway.
The raw data is 32GB, but a
[EMAIL PROTECTED] (Josh Berkus) wrote:
I've been trying to peg the sweet spot for shared memory using
OSDL's equipment. With Jan's new ARC patch, I was expecting that
the desired amount of shared_buffers would be greatly increased. This
has not turned out to be the case.
That doesn't surprise