[GENERAL] Profiling custom datatypes

2009-03-25 Thread William Harrower

Hi,

I'm attempting to profile the memory usage and CPU time of some code
I've written as part of a custom datatype. I've tried valgrind's
cachegrind tool, but it doesn't seem to work as expected. This is the
command I used:


valgrind --tool=cachegrind --trace-children=yes ./postgres -D ../data

Running this and then invoking a SQL query that causes my code to 
execute doesn't seem to result in any output relating to my datatype, 
even though its code is taking the majority of the CPU time.
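
As far as I understand it, cachegrind writes a separate
cachegrind.out.<pid> file for every traced process, and each Postgres
backend is forked from the postmaster, so I'd expect the profile for my
query to land in the file matching that backend's PID. This is roughly
how I've been trying to read the results (the PID here is just an
example):

# one cachegrind.out.<pid> per process when --trace-children=yes is set
ls cachegrind.out.*
# annotate the output for the backend that actually ran the query
cg_annotate cachegrind.out.12345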


Does anyone know what I'm doing wrong -- do I have to do something 
special for valgrind to inspect shared libraries? I have debug symbols 
compiled in everywhere.
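
In case it matters, this is the sort of thing I mean by having debug
symbols compiled in -- a PGXS-style build, where PROFILE gets appended
to CFLAGS/LDFLAGS by PostgreSQL's Makefile.global ('mytype.so' is a
placeholder for my library's name):

make clean
make PROFILE="-g -fno-omit-frame-pointer"
make install
# .debug_* sections in the output mean -g actually took effect
objdump -h mytype.so | grep debug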


Ignoring valgrind specifically, does anyone know of any other tools that
can be used to profile the memory usage and CPU time/load of a custom
datatype library? A recent change I made to client-side code increased
the size of each instance of the type it uploads to the database, which,
for reasons unknown, has sent the search time (using a custom 'match'
operator) through the roof. My suspicion is that the cache is no longer
large enough to hold the entire table (though perhaps it was before the
change), so far more disk reads are needed. Hopefully a decent profiler
will make this clear.
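
In case it helps, this is the kind of check I had in mind for the cache
theory, using the statistics views ('mytable' stands in for the real
table name):

# compare buffer-cache hits against blocks read from disk for the table
psql -c "SELECT heap_blks_read, heap_blks_hit,
                round(heap_blks_hit::numeric /
                      nullif(heap_blks_hit + heap_blks_read, 0), 2)
                  AS hit_ratio
         FROM pg_statio_user_tables
         WHERE relname = 'mytable';"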


Many thanks for any help,
Will.


Re: [GENERAL] Profiling custom datatypes

2009-03-25 Thread Tom Lane
William Harrower <wjh...@doc.ic.ac.uk> writes:
> Ignoring valgrind specifically, does anyone know of any other tools that
> can be used to profile the memory usage and CPU time/load of a custom
> datatype library?

oprofile on recent Fedora (and probably other Linux distros) pretty much
just works for shared libraries, though it only gives you a CPU profile,
not memory usage.  I've never been able to get gprof to do anything
useful with shlibs, on any platform :-(
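
The usual opcontrol dance goes roughly like this (as root; the .so path
below is made up, substitute wherever your library actually lives):

opcontrol --init          # load the oprofile kernel module
opcontrol --no-vmlinux    # don't bother profiling the kernel
opcontrol --start
# ... run the slow query in another session ...
opcontrol --stop
opcontrol --dump
opreport -l /usr/local/pgsql/lib/mytype.so   # per-symbol CPU report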

> A recent change I made to client-side code increased the size of each
> instance of the type it uploads to the database, which, for reasons
> unknown, has sent the search time (using a custom 'match' operator)
> through the roof. My suspicion is that the cache is no longer large
> enough to hold the entire table (though perhaps it was before the
> change), so far more disk reads are needed. Hopefully a decent profiler
> will make this clear.

Surely just watching iostat or vmstat would prove or disprove that
theory.  Keep in mind also that CPU profilers aren't going to tell
you much about I/O costs anyway.
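
For instance, something like this while the query runs (on Linux; the
interesting bits are vmstat's "bi" column and iostat's per-device
utilisation):

vmstat 1        # blocks read in from disk per second show up under "bi"
iostat -x 1     # extended per-device stats, including utilisation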

regards, tom lane
