Tobias Brox wrote:
> [Madison Kelly - Mon at 08:48:19AM -0500]
> > Ah, sorry, long single queries is what you meant.
>
> No - long running single transactions :-)  If it's only read-only
> queries, one will probably benefit by having one transaction for every
> query.


In this case, what happens is: one kinda ugly big query is read into a hash and then looped through (usually ~10,000 rows). On each pass, another slightly less ugly query is performed based on the first query's values now in the hash (these per-row queries are where throttling might help). After that second query is parsed, a PDF file is created (also a big source of slowness). It isn't entirely read-only, though, because as each PDF is created a flag is updated in the given record's row. So yeah, I need to experiment some. :)
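The loop structure above can be sketched roughly as follows. This is a minimal illustration only, using sqlite3 as a stand-in for Postgres, and the table and column names (records, details, pdf_done) are hypothetical, not from the original workload:

```python
import sqlite3

# Hypothetical schema standing in for the real tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT,
                          pdf_done INTEGER DEFAULT 0);
    CREATE TABLE details (record_id INTEGER, info TEXT);
    INSERT INTO records (id, name) VALUES (1, 'a'), (2, 'b');
    INSERT INTO details VALUES (1, 'x'), (2, 'y');
""")

# Step 1: the big ugly query, read into a hash (dict) up front.
rows = {rid: name for rid, name in conn.execute("SELECT id, name FROM records")}

# Step 2: loop over the hash; one smaller query per row
# (this per-row query is where throttling could be applied).
for rid in rows:
    detail = conn.execute(
        "SELECT info FROM details WHERE record_id = ?", (rid,)
    ).fetchone()
    # ... PDF generation for this record would happen here ...
    # Step 3: flag the row as processed -- the write that makes the
    # workload not strictly read-only.
    conn.execute("UPDATE records SET pdf_done = 1 WHERE id = ?", (rid,))
conn.commit()
```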

Madi
