On 07/12/2012 08:48 PM, Yan Chunlu wrote:
> explain analyze INSERT INTO vote_content ( thing1_id, thing2_id, name,
> date) VALUES (1,1, E'1', '2012-07-12T12:34:29.926863+00:00'::timestamptz)
>
>                                        QUERY PLAN
> ------------------------------------------------------------------------------------------
>  Insert  (cost=0.00..0.01 rows=1 width=0) (actual time=79.610..79.610 rows=0 loops=1)
>    ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.058..0.060 rows=1 loops=1)
>  Total runtime: 79.656 ms
>
> it is a table with *50 million* rows, so not sure if it is too
> large... I have attached the schema below:
You have eight indexes on that table according to the schema you showed.
Three of them cover three columns each. Those indexes are going to be
expensive to update; frankly, I'm amazed it's that fast to update them
when they're that big.
Use pg_size_pretty(pg_relation_size('index_name')) to get the index
sizes and compare them to the pg_relation_size of the table. It might be
informative.
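Something along these lines would show all the indexes on the table at
once, biggest first (the table name is taken from your schema; adjust if
yours differs):

```sql
-- List every index on vote_content with its on-disk size,
-- largest first, alongside the size of the table itself.
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       pg_size_pretty(pg_relation_size(relid))      AS table_size
FROM pg_stat_user_indexes
WHERE relname = 'vote_content'
ORDER BY pg_relation_size(indexrelid) DESC;
```

If the combined index size rivals or exceeds the table size, that's a
strong hint the indexes are what's making your inserts expensive.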
You may see some insert performance benefit from a non-100% fillfactor
on the indexes, but with possible performance costs to index scans.
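For example, something like this, where 90 is just an illustrative
value and the index name is a guess at what your schema generates:

```sql
-- Leave ~10% free space on each index leaf page so inserts are less
-- likely to cause page splits. Takes effect for newly built pages,
-- so REINDEX to apply it to the existing index.
ALTER INDEX vote_content_thing1_id_idx SET (fillfactor = 90);
REINDEX INDEX vote_content_thing1_id_idx;
```

Note that REINDEX takes a lock that blocks writes to the table while it
runs, so on a 50-million-row table you'd want to do that during a quiet
period.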
--
Craig Ringer