Joost wrote:
> My system is PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc
> (GCC) 20020903 (Red Hat Linux 8.0 3.2-7). It has a Pentium III 733 MHz
> with 512 MB RAM. It is connected to my workstation (dual Xeon 1700 with
> 1 GB RAM) over a 100 Mb switched network.
>
> I have a table with 31 columns, all fixed-size datatypes. It contains
> 88393 rows. Doing a "select * from table" with pgAdmin III in its SQL
> window takes a total of 9206 ms query runtime and a 40638 ms data
> retrieval runtime.
>
> Is this a reasonable time to get 88393 rows from the database?
>
> If not, what can I do to find the bottleneck (and eventually make it
> faster)?
The 9206 ms is the time the database actually spent gathering the data and sending it to you. This is non-negotiable unless you upgrade the hardware or fetch less data, and it usually scales linearly (or close to it) with the size of the result set you fetch.

The 40638 ms is pgAdmin putting the data into the grid. The time spent here depends entirely on your client, and it starts to get really nasty with large tables. Future versions of pgAdmin may be able to deal better with large datasets (cursor-based fetching is one proposed solution). In the meantime, I would suggest using queries to refine your terms a little bit... do you really need to view all 88k records at once?

Merlin
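For what it's worth, the cursor-based fetching mentioned above can already be done by hand in 7.4: declare a server-side cursor and pull the rows in batches, so neither the network nor the client grid ever has to handle all 88k rows at once. A sketch, runnable from psql or pgAdmin's SQL window (the table name `mytable` is a placeholder for your own table):

```sql
BEGIN;

-- Server-side cursor over the full result: rows stay on the server
-- until fetched, so the client only buffers one batch at a time.
DECLARE rows_cur CURSOR FOR SELECT * FROM mytable;

-- Pull 1000 rows per round trip; repeat until FETCH returns no rows.
FETCH 1000 FROM rows_cur;
FETCH 1000 FROM rows_cur;

CLOSE rows_cur;
COMMIT;
```

Alternatively, if you only ever look at a slice of the table, a plain LIMIT/OFFSET query (e.g. `SELECT * FROM mytable ORDER BY some_key LIMIT 1000 OFFSET 0`, where `some_key` stands in for an indexed column of yours) keeps both the query runtime and the grid-population time small.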