> What is it about the buffer cache that makes it so unhappy being able to
> hold everything? I don't want to be seen as a cache hit fascist, but isn't
> it just better if the data is just *there*, available in the postmaster's
> address space ready for each backend process to access it, rather than
Thanks for this tip. Turns out there is a difference.
I am using cursors (i.e. calling setFetchSize(5000) on
my Statement) in JDBC. So the SQL statement is
preceded by:
DECLARE JDBC_CURS_1 CURSOR FOR ...
which is then followed by the SQL statement.
This is followed by the separate statement:
>
> > However, this query performs a sequence scan on the table, ignoring the
> > call_idx13 index (the only difference is the addition of the aspid field
> > in the order by clause):
>
> You do not have an index which matches the ORDER BY, so PostgreSQL
> cannot simply scan the index for the data.
> OK, that makes sense; however, this doesn't:
>
> elon2=# explain analyse select * from call where aspid='123C' and
> OpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24
> 23:59:59.999' order by aspid asc, openeddatetime asc, callstatus asc,
> calltype asc, callkey asc;
> I've modified
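For what it's worth, a hedged sketch of an index that would match that ORDER BY (the index name is made up; what matters is that the column order matches the sort keys exactly):

```sql
-- Hypothetical index whose columns match the ORDER BY in order;
-- with it the planner can walk the index instead of sorting.
CREATE INDEX call_aspid_sort_idx
    ON call (aspid, openeddatetime, callstatus, calltype, callkey);
```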
Thanks to all who have responded. I now think my
problem is not related to deleting/recreating indexes.
Somehow it is related to JDBC cursors. It appears
that what is happening is that since I'm using
a fetch size of 5000, the command:
FETCH FORWARD 5000 FROM JDBC_CURS_1
is being repeatedly sent.
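For reference, a rough sketch of the statement sequence the driver issues when a fetch size is in effect (the SELECT is a placeholder; JDBC_CURS_1 is the driver-generated cursor name mentioned earlier in the thread):

```sql
BEGIN;
DECLARE JDBC_CURS_1 CURSOR FOR SELECT ...;  -- the application's query
FETCH FORWARD 5000 FROM JDBC_CURS_1;        -- repeated until fewer than 5000 rows return
CLOSE JDBC_CURS_1;
COMMIT;
```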
On Fri, 9 Jul 2004, Bill Chandler wrote:
> Thanks to all who have responded. I now think my
> problem is not related to deleting/recreating indexes.
> Somehow it is related to JDBC cursors. It appears
> that what is happening is that since I'm using
> a fetch size of 5000, the command:
>
> FETCH FORWARD 5000 FROM JDBC_CURS_1
Bill,
What happens if you do this in psql? Also, you can turn on duration
logging in the backend and log the queries.

Dave
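The duration logging mentioned here is set in postgresql.conf; a minimal fragment (parameter names as of the 7.4-era releases under discussion):

```
# postgresql.conf -- log each statement and its elapsed time
log_statement = true
log_duration = true
```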
On Fri, 2004-07-09 at 16:24, Bill Chandler wrote:
> Thanks to all who have responded. I now think my
> problem is not related to deleting/recreating indexes.
> Somehow it is related to JDBC cursors.
> The disk cache on most operating systems is optimized. Plus, keeping
> shared buffers low gives you more room to bump up the sort memory, which
> will make your big queries run faster.
Thanks, Merlin.

Whether the OS caches the data or PG does, you still want it cached. If your
sorting backends gobble
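Merlin's suggestion maps to something like the following postgresql.conf fragment (the numbers are purely illustrative; sort_mem is the pre-8.0 name for what later became work_mem):

```
# postgresql.conf -- keep shared_buffers modest, give sorts more memory
shared_buffers = 10000   # 8 kB pages, ~80 MB; the OS cache covers the rest
sort_mem = 16384         # kB available to each sort step
```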
Thanks, Chris.
> > What is it about the buffer cache that makes it so unhappy being able to
> > hold everything? I don't want to be seen as a cache hit fascist, but isn't
> > it just better if the data is just *there*, available in the postmaster's
> > address space ready for each backend process
Bill Chandler wrote:
Now the mystery is: why is this happening all of a
sudden? I have been running w/ fetch size set to 5000
for the last couple of weeks and it did not appear to
be doing this (i.e. re-running the entire select
statement again). Is this what I should expect when
using cursors?
"Missner, T. R." <[EMAIL PROTECTED]> writes:
> ... Each day a brand new set of tables is
> created and eventually the old tables are dropped.
You did not say which PG version you are using (tut tut) but my first
thought is that it's a pre-7.4 release and your problems trace to bloat
in the system catalogs.
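If upgrading isn't an option, a hedged workaround for that kind of catalog bloat is to vacuum the system catalogs regularly (on old releases a REINDEX, possibly from a standalone backend, may also be needed to shrink their indexes):

```sql
-- Reclaim space in the catalogs most affected by creating/dropping tables
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
VACUUM FULL pg_depend;
```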
Might be worth doing a little test:
i) modify your code to fetch 1 row at a time
ii) set log_duration=true in your postgresql.conf (as the other posters
have suggested)
Then compare with running the query in psql.
regards
Mark
Bill Chandler wrote:
Thanks to all who have responded. I now think my
problem is not related to deleting/recreating indexes.
Eugene <[EMAIL PROTECTED]> writes:
> Can I force HashAggregation before index scan?
No. But look into why the planner's rows estimate is so bad here:
> -> Index Scan using refseq_sequence_key2 on refseq_sequence s
>      (cost=0.00..1516.06 rows=389 width=24) (actual time=0.352..491.107 rows
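As a hedged first step for an estimate that far off: refresh the statistics, and if that doesn't help, raise the statistics target on the filtered column (the column name below is an assumption, not taken from the thread):

```sql
ANALYZE refseq_sequence;
-- If the rows estimate is still badly off, collect finer-grained stats
-- on the column being filtered (column name assumed for illustration):
ALTER TABLE refseq_sequence ALTER COLUMN seq_key SET STATISTICS 100;
ANALYZE refseq_sequence;
```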