On Friday 24 September 2004 7:32 am, Tom Lane wrote:
> David Helgason <[EMAIL PROTECTED]> writes:
> > I'm calling one stored procedure with a prepared statement on the
> > server with 6 arrays of around 1200 elements each as parameters.
> > The parameters are around 220K in total.
>
> Exactly how are you fetching, building, or otherwise providing the
> arrays? Which PG version is this exactly?
>
> > Any suggestions how I go about finding the bottleneck here? What
> > tools do other people use for profiling on Linux.
>
> Rebuild with profiling enabled (make clean; make PROFILE="-pg
> -DLINUX_PROFILE") and then use gprof to produce a report from the
> trace file that the backend drops when it exits.
>
> If that sounds out of your league, send along a self-contained test
> case and I'll be glad to take a look.
>
> > This might sound like a "then don't do that" situation.
>
> I'm betting on some O(N^2) behavior in the array code, but it'll be
> difficult to pinpoint without profile results.
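
For anyone wanting to try the profiling route, the steps boil down to
roughly the following. This is only a sketch: the postgres binary path,
data directory, database name, and test script are placeholders, and
the exact directory the backend drops gmon.out into varies by version.

    # Rebuild the server with profiling enabled, from the top of the source tree
    make clean
    make PROFILE="-pg -DLINUX_PROFILE"
    make install

    # Restart the server so the profiled binary is the one running
    pg_ctl -D "$PGDATA" restart

    # Run the slow prepared-statement call once, then disconnect; the backend
    # writes its gmon.out trace file when it exits
    psql -f slow_call.sql mydb        # slow_call.sql and mydb are placeholders

    # gmon.out ends up in the backend's working directory, i.e. somewhere
    # under $PGDATA (on some versions in a per-database subdirectory)
    gprof /usr/local/pgsql/bin/postgres "$PGDATA"/gmon.out > profile.txt
    head -40 profile.txt              # hot functions appear at the top of the flat profile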

Possibly the bug we discussed early last year (IIRC added to todo but
not fixed):

http://archives.postgresql.org/pgsql-performance/2003-01/msg00235.php

Cheers,

Steve