Hi all,
I have noticed that PostgreSQL doesn't seem to handle memory well: insert
performance degrades steadily as more records are inserted. I created the
tables/procedure (in the attached file) and ran it as "select bench(10,
5000)". This produces 50000 record inserts in all (5 x 10000). (I ran it
on a P200 with 64MB of RAM, under Linux, with PostgreSQL 7.0.2; on a more
powerful machine you can try other values.)
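    In case you don't want to open the attachment, here is a simplified
sketch of the kind of procedure I mean (table and column names here are
illustrative; the real code in teste.zip may differ in details):

    create table bench_data (id int4, val text);
    create table bench_result (id int4, objname text, benchtime interval);

    create function bench(int4, int4) returns int4 as '
    declare
        ngroups  alias for $1;
        pergroup alias for $2;
        g int4;
        i int4;
        t0 timestamp;
    begin
        for g in 1 .. ngroups loop
            -- timeofday() gives wall-clock time; now() would stay frozen
            -- at transaction start for the whole function
            t0 := timeofday()::timestamp;
            for i in 1 .. pergroup loop
                insert into bench_data values (i, ''some text'');
            end loop;
            insert into bench_result
                values (g, ''group '' || g, timeofday()::timestamp - t0);
        end loop;
        return ngroups * pergroup;
    end;
    ' language 'plpgsql';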
I get the following times as a result:
id | objname | benchtime
----+---------+-----------
1 | group 1 | 00:00:32
2 | group 2 | 00:00:47
3 | group 3 | 00:01:13
4 | group 4 | 00:01:41
5 | group 5 | 00:02:08
(5 rows)
Note that as memory use grows, the system becomes slower, even though
there is still free memory to allocate (and yes, 64MB is enough for this
test). I haven't looked at the source code (yet), but I suspect that the data
structure used to keep track of the changed records is some kind of linked
list, and that inserting a new item means walking to the end of that list.
Can this be optimized?
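    The timings above fit that theory: each group of 10000 inserts takes
longer than the previous one by a roughly constant amount, which is what you
would expect if every insert pays a cost proportional to the number of
records already inserted (O(n^2) in total):

    in seconds: 32, 47, 73, 101, 128
    group 1 -> 2:  47 - 32  = 15 s slower
    group 2 -> 3:  73 - 47  = 26 s slower
    group 3 -> 4: 101 - 73  = 28 s slower
    group 4 -> 5: 128 - 101 = 27 s slower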
    In the system I'm developing I have about 25000 (persons) x 8 (exams)
x 15 (answers per exam) = 3000000 records to process, and it is VERY SLOW.
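    Would bulk-loading with COPY instead of individual INSERTs sidestep
this? Something like the following (the table and file here are made up
just for illustration):

    create table answers (person_id int4, exam_id int4, answer text);
    -- one COPY from a tab-delimited file instead of 3000000 INSERTs
    copy answers from '/tmp/answers.dat';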
thanks,
Edipo Elder
[[EMAIL PROTECTED]]
[attachment: teste.zip]