Tom Lane wrote:
> Perhaps it's time to remind people that what we want to measure
> is the performance seen by a C program issuing write() and read()
> commands, transferring 8K at a time, on a regular Unix filesystem

Yes... and at the risk of being accused of marketing ;-), that is 
exactly what the 3 programs in my archive do (see previous post for the URL):

- one called 'write' creates a suitably sized data file (8K at a time - 
configurable), using the write() call
- another called 'read' does sequential reads (8K at a time - 
configurable), using the read() call
- finally one called 'seek' does random reads (8K chunks - 
configurable), using the lseek() and read() calls (a rough sketch of 
that loop follows below)
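
To give a flavour of it, here is a minimal sketch of what the 'seek' 
loop boils down to - this is not the actual archive code, and the file 
name, block size and read count below are made up:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define BLOCKSZ  8192       /* one Postgres-sized page */
#define NUMREADS 10000      /* hypothetical number of random reads to time */

int
main(void)
{
    char        buf[BLOCKSZ];
    struct stat st;
    int         fd = open("testfile.dat", O_RDONLY);   /* file built by 'write' */
    off_t       nblocks;
    long        i;

    if (fd < 0 || fstat(fd, &st) < 0)
    {
        perror("testfile.dat");
        return 1;
    }
    nblocks = st.st_size / BLOCKSZ;
    srandom((unsigned int) time(NULL));

    for (i = 0; i < NUMREADS; i++)
    {
        off_t blk = random() % nblocks;

        /* seek to a random 8K-aligned offset and read one block */
        if (lseek(fd, blk * BLOCKSZ, SEEK_SET) == (off_t) -1 ||
            read(fd, buf, BLOCKSZ) != BLOCKSZ)
        {
            perror("read");
            return 1;
        }
    }
    close(fd);
    return 0;
}

Time the loop (gettimeofday() around it, or just time the whole run) 
and divide by NUMREADS to get the per-page random read time.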

I tried to use code as similar as possible to how Postgres does its 
I/O... so the results *should* be meaningful!
Large file support is enabled too (as you need to use a file several 
times bigger than your RAM - and everyone seems to have >1G of it these 
days...)
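
In case anyone wants to roll their own: on Linux and Solaris the large 
file part is just the standard LFS define (or the equivalent 
-D_FILE_OFFSET_BITS=64 on the compile line); FreeBSD and Mac OS X have 
a 64-bit off_t out of the box, so nothing extra is needed there:

/* must appear before any system #include to get a 64-bit off_t */
#define _FILE_OFFSET_BITS 64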

I think the code is reasonably readable too...
It's been *tested* on Linux, FreeBSD, Solaris, and Mac OS X.


The only downer is that they don't automatically compute 
random_page_cost for you... (I was more interested in the raw sequential 
read, write and random read rates at the time). However, it would be a 
fairly simple modification to combine all 3 programs into one 
executable that outputs random_page_cost...
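
If someone wants to do that, the calculation itself is roughly the 
ratio of per-page read times (random vs. sequential) - the function and 
parameter names below are made up, not code from the archive:

/*
 * random_page_cost ~= (time per random 8K read) / (time per sequential
 * 8K read), i.e. the sequential rate divided by the random rate.
 */
double
estimate_random_page_cost(double seq_pages_per_sec, double rand_pages_per_sec)
{
    return seq_pages_per_sec / rand_pages_per_sec;
}

e.g. 10000 sequential pages/sec against 2500 random pages/sec would 
suggest a random_page_cost of about 4.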

regards

Mark