Missner, T. R. wrote:
Hello,
I have been a happy PostgreSQL developer for a few years now. Recently
I discovered a very strange phenomenon with regard to inserting rows.
My app inserts millions of records a day, averaging about 30 rows a
second. I use autovac to make sure my stats and indexes stay up to date.
Bill Chandler wrote:
Hi,
Using PostgreSQL 7.4.2 on Solaris. I'm trying to
improve performance on some queries to my databases, so
I wanted to try out various index structures.
Since I'll be running my performance tests
repeatedly, I created some SQL scripts to delete and
recreate the indexes between runs.
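A minimal sketch of what such a script can look like; the table and column
names here are made up for illustration (note that 7.4 has no DROP INDEX
IF EXISTS, so the DROP step simply errors if the index is already gone):

    DROP INDEX test_idx;
    CREATE INDEX test_idx ON test_tbl (col1);
    ANALYZE test_tbl;   -- refresh planner statistics after the rebuild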
[Apologies if this reaches the list twice -- I sent a copy before
subscribing, but it seems to be stuck waiting for listmaster forever, so I
subscribed and sent it again.]
Hi,
I'm trying to find out why one of my queries is so slow -- I'm primarily
using PostgreSQL 7.2 (Debian stable), but I
Edoardo Ceccarelli [EMAIL PROTECTED] writes:
This is the query:
select max(KA) from annuncio
Field KA is indexed and is int4. Explaining gives:
explain select max(KA) from annuncio;
                        QUERY PLAN
------------------------------------------------------------
 Aggregate
The LIMIT is tricking you.
I guess a sequential scan is cheaper than an index scan with the LIMIT 26 found there.
Am I wrong?
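For what it's worth, on releases before 8.1 max() cannot use an index at
all; the usual rewrite, equivalent as long as the column is indexed as
described, is:

    -- walks the index from the high end instead of scanning the table
    SELECT KA FROM annuncio
    WHERE KA IS NOT NULL      -- max() ignores NULLs; the ORDER BY would not
    ORDER BY KA DESC
    LIMIT 1;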
Greets
--
Guido Barosio
Buenos Aires, Argentina
However, this query performs a sequential scan on the table, ignoring the
call_idx13 index (the only difference is the addition of the aspid field
in the order by clause):
You do not have an index which matches the ORDER BY, so PostgreSQL
cannot simply scan the index for the data you want. Thus it has to fall
back to a sequential scan followed by an explicit sort.
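A sketch of the kind of index that would match; only aspid is named in the
thread, so the table name and the remaining column (whatever call_idx13
actually covers) are stand-ins here:

    -- hypothetical names: call_tbl and call_id are placeholders
    CREATE INDEX call_idx_aspid ON call_tbl (aspid, call_id);

The column order has to mirror the ORDER BY clause for the index to satisfy
the ordering.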
Hi,
I tested the vacuum_mem setting on a
4-CPU, 4 GB RAM machine. I am the only person
on that machine.
The table:

 tablename | size_kb | reltuples
-----------+---------+--------------
 big_t     | 2048392 | 7.51515e+06

Case 1:
1. vacuum
It seems vacuum_mem has no performance
effect at all.
Wrong conclusion. It implies that your test case takes less than 64M of
memory to track your removed tuples. I think it takes 8 bytes to track a
tuple for vacuuming an index, which means it should be able to track
roughly 8 million deletions.
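64 MB at 8 bytes per tuple works out to 64 * 1024 * 1024 / 8, roughly 8.4
million dead tuples, so a 7.5-million-row table cannot overflow it even if
every row is dead. A sketch of a session that would actually stress the
setting (vacuum_mem is measured in kB on 7.x, and was renamed
maintenance_work_mem in 8.0):

    -- deliberately undersized, so VACUUM overflows its dead-tuple list
    -- and has to make multiple cleanup passes over the indexes
    SET vacuum_mem = 1024;     -- 1 MB: room for ~128k tuple pointers
    DELETE FROM big_t;         -- leave millions of dead tuples behind
    VACUUM ANALYZE big_t;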
Thanks for the advice.
On further review, it appears I am only getting this
performance degradation when I run the command via
a JDBC app. If I do the exact same query from
psql, the performance is fine. I've tried both the
JDBC2 and JDBC3 jars, with the same results.
It definitely seems to correspond
That is interesting - both psql and JDBC merely submit statements for
the backend to process, so generally you would expect no difference in
execution plan or performance.
It might be worth setting log_statement=true in postgresql.conf and
checking that you are executing *exactly* the same statements in both cases.
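On 7.x that setting is a plain boolean (8.0 changed it to none/ddl/mod/all);
something like this in postgresql.conf, followed by a reload:

    # postgresql.conf
    log_statement = true

If the logged statements do turn out to differ, one plausible culprit is the
JDBC driver preparing the statement server-side, in which case the planner
picks a plan without seeing the actual parameter values.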
Hi,
I'm really stuck, and I wonder if any of you could help.
I have an application which will be sitting on a quite large database
(roughly 8-16 GB). The nature of the application is such that, on a
second-by-second basis, the working set of the database is likely to be a
substantial portion (e.g.
What is it about the buffer cache that makes it so unhappy about holding
everything? I don't want to be seen as a cache-hit fascist, but isn't
it just better if the data is just *there*, available in the postmaster's
address space, ready for each backend process to access, rather than
being shuffled in from the kernel's cache as needed?
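For the 7.x releases in this thread, the usual advice is the opposite: keep
shared_buffers moderate, let the kernel cache hold the working set, and tell
the planner about that cache via effective_cache_size. A sketch with
illustrative numbers only, assuming a machine with a few GB of RAM (both
settings are counted in 8 kB pages on these versions):

    # postgresql.conf
    shared_buffers = 32768           # 256 MB of shared buffer cache
    effective_cache_size = 262144    # ~2 GB expected in the kernel cache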