If your nightly process is heavily read-only, then RAID 5 is probably
fine. If, however, there is a significant write component, then it would
perhaps be worth getting another disk and converting to RAID 10
(alternatively, see previous postings about RAID cards with on-board
cache).
Hi,
Make multi-column indexes using the columns from your most typical
queries, putting the most selective columns first (i.e., you don't need to
make indexes with columns in the same order as they are used in the
query). For instance, an index on (cp, effectif) could likely benefit both
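As a sketch of the advice above, assuming the base_aveugle table with the cp and effectif columns mentioned later in this thread (the literal values here are made up for illustration):

```sql
-- Multi-column index with the most selective column (cp) first.
CREATE INDEX base_aveugle_cp_effectif_idx
    ON base_aveugle (cp, effectif);

-- The same index can serve a query restricting only the leading column ...
SELECT count(*) FROM base_aveugle WHERE cp = '75001';

-- ... as well as one restricting both columns.
SELECT count(*) FROM base_aveugle WHERE cp = '75001' AND effectif > 50;
```

The point is that one index on (cp, effectif) covers both query shapes, so a separate single-column index on cp alone is usually unnecessary.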
Hi everybody!
I can't make use of indexes, even though I tried the same test while
changing different settings in postgresql.conf, like geqo on/off (and the
geqo-related parameters), enable_seqscan on/off, and so on. The result
is the same.
Here is the test itself:
I've created the simplest table, test, and executed the same
Artimenko Igor wrote:
id int8 NOT NULL DEFAULT nextval('next_id_seq'::text) UNIQUE,
The id column is bigint, but '5' is int, therefore the index does not
match. You need to cast your clause like this:
select id from test where id = 5::int8
Also, issue VACUUM ANALYZE, so Postgres knows
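To see the effect, compare the two plans (a sketch; on the old PostgreSQL versions discussed in this thread, a bare integer literal compared against an int8 column prevented index use, whereas releases from 8.0 onward handle the cross-type comparison automatically):

```sql
-- Without the cast: the int literal 5 does not match the int8 column,
-- so an old planner falls back to a sequential scan.
EXPLAIN SELECT id FROM test WHERE id = 5;

-- With an explicit cast, the types match and the unique index is usable.
EXPLAIN SELECT id FROM test WHERE id = 5::int8;

-- Refresh statistics so the planner has accurate row estimates.
VACUUM ANALYZE test;
```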
An index on (cp, effectif) would help your first query. An index on
(naf, cp, effectif) would help your second query. Something like this:
CREATE INDEX base_aveugle_cp_key2 ON base_aveugle USING btree (cp, effectif);
CREATE INDEX base_aveugle_naf_key2 ON base_aveugle USING btree (naf, cp, effectif);
test where id = 5; A few times I added 100,000 records, applied
Cast the 5 to int8 and it will use the index.
Hi,
I'm seeing the following behaviour with the table and functions given below:
db=# insert into f select * from full_sequence(1, 1000);
INSERT 0 1000
Time: 197,507 ms
db=# insert into f select * from full_sequence(1, 1000);
INSERT 0 1000
Time: 341,880 ms
db=# insert into f select * from
Obviously, this part of tr_f_def():

    -- delete the contents
    -- delete from f;
    IF EXISTS (SELECT 1 FROM f) THEN
        DELETE FROM F;
        VACUUM F;
    END IF;
Frank,
It seems in this case the time needed for a single deferred trigger somehow
depends on the number of dead tuples in the table, because a vacuum of the
table will 'reset' the query times. However, even if I wanted to, vacuum is
not allowed from within a function.
What is happening
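One possible way around the VACUUM restriction, sketched here under the assumption that f can simply be emptied wholesale each time, is to TRUNCATE instead of DELETE: unlike a DELETE, a TRUNCATE leaves no dead tuples behind, so no follow-up vacuum is needed, and unlike VACUUM it can be issued from within a function (in modern PostgreSQL, TRUNCATE is also transaction-safe and can be rolled back):

```sql
-- Hypothetical rewrite of the cleanup step inside tr_f_def():
-- replace DELETE + VACUUM with a single TRUNCATE.
TRUNCATE f;  -- reclaims space immediately, leaves no dead tuples
```

Whether this fits depends on the trigger's semantics; TRUNCATE removes all rows unconditionally, so it only applies if the DELETE in the original code was meant to empty the table completely.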
Hi,
I am working on a project which explores PostgreSQL to
store multimedia data.
In detail, I am trying to work with the buffer
management part of the Postgres source code, and to
improve its performance. I have searched the web but
could not find much useful information.
It would be great if