If I run a simple query like:

SELECT ids, keywords FROM dict WHERE keywords = 'blabla';

('blabla' is a single word.) The table has 200 million rows, and I have indexed the keywords field. The first time I run the query it is very slow, taking about 15-60 seconds to return the result. But if I repeat the query, the result comes back fast. My question is: why is the query so slow the first time?
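To see where the time goes on the first run, the query plan and actual timings can be inspected with EXPLAIN ANALYZE (available in 8.0). A minimal sketch, assuming the table and column names above:

```sql
-- Run this twice: once cold, once immediately after.
-- Compare the "actual time" figures to see whether the
-- difference is plan-related or just disk reads vs. cache hits.
EXPLAIN ANALYZE
SELECT ids, keywords
FROM dict
WHERE keywords = 'blabla';
```

If both runs show the same index scan plan but very different actual times, the first-run cost is I/O: the index and heap pages are being read from disk, and the repeat run finds them already in the OS page cache and shared buffers.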
Table structure is quite simple:
Ids bigint, keywords varchar(150), weight varchar(1), dpos int.
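Spelled out as DDL, the table and index might look like the following. This is a sketch; the index name is an assumption, since the original post does not give the exact CREATE statements:

```sql
CREATE TABLE dict (
    ids      bigint,
    keywords varchar(150),
    weight   varchar(1),
    dpos     int
);

-- B-tree index on the lookup column (name assumed for illustration)
CREATE INDEX idx_dict_keywords ON dict (keywords);
```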
I use the latest pgAdmin III to test all queries. My Linux box runs Red Hat 4 AS, kernel 2.6.9-11, with PostgreSQL 8.0.3. Storage is 2x200 GB SATA 7200 RPM drives configured as RAID 0 with an ext3 file system, used for PostgreSQL data only, plus an 80 GB EIDE 7200 RPM drive with ext3 for the OS only. The server has 2 GB RAM and a P4 3.2 GHz CPU.
If I run this query on MS SQL Server, with the same hardware specification and the same data, MS SQL Server beats PostgreSQL: the query takes about 0-4 seconds to return the result. What is wrong with my PostgreSQL?
- [PERFORM] Query seem to slow if table have more than 200 mil... Ahmad Fajar