I know that it's much faster (for us) to run many smaller queries than
one large query, and I think that it's primarily because of your reason
a. Most of our problems come from Pg misunderstanding the results of a
join and making a bad plan decision. Batching dramatically reduces the
likelihood of that.
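For what it's worth, a minimal sketch of the kind of batching I mean, using
the edge table from the original post (the ids, the node2 value, and the
batch size are just illustrative):

    -- one large statement: the planner has to estimate the whole list at once
    UPDATE edge SET keep = true
      WHERE node1 IN (101, 102, 103 /* ... hundreds more ids ... */)
        AND node2 = 42;

    -- the same work split into batches of, say, 100 ids each
    UPDATE edge SET keep = true
      WHERE node1 IN (101, 102, 103 /* ... first 100 ids ... */)
        AND node2 = 42;
    UPDATE edge SET keep = true
      WHERE node1 IN (201, 202, 203 /* ... next 100 ids ... */)
        AND node2 = 42;

At least in our experience, a bad plan on one small batch costs far less than
a bad plan on the whole list.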
On Tue, 2008-11-18 at 17:38 +0100, [EMAIL PROTECTED] wrote:
> I bet there is no 'critical' length - this is just another case of index
> scan vs. seqscan. The efficiency depends on the size of the table / row,
> amount of data in the table, variability of the column used in the IN
> clause, etc.
I bet there is no 'critical' length - this is just another case of index
scan vs. seqscan. The efficiency depends on the size of the table / row,
amount of data in the table, variability of the column used in the IN
clause, etc.
Splitting the query with 1000 items into 10 separate queries, the sma
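The quickest way to see where that switch happens on your own data is to run
EXPLAIN ANALYZE on the statement at a few representative list lengths. A
sketch, using the edge table from the original post (the ids and the node2
value are made up):

    BEGIN;
    EXPLAIN ANALYZE
    UPDATE edge SET keep = true
      WHERE node1 IN (1, 2, 3 /* ... try 10, 100, 1000 ids ... */)
        AND node2 = 42;
    ROLLBACK;  -- EXPLAIN ANALYZE really executes the UPDATE, so roll it back

The list length at which the plan flips from an index scan to a seqscan is
effectively the "critical length" for that particular table and data
distribution.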
On Tue, 18 Nov 2008, Kynn Jones wrote:
Also, assuming that the optimal way to write the query depends on the length
of $node_list, how can I estimate the "critical length" at which I should
switch from one form of the query to the other?
In the past, I have found the fastest way to do this wa
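The two forms aren't spelled out above, but one common pair to compare is a
literal IN list versus a join against a VALUES list (or a temp table holding
the ids), which PostgreSQL can plan quite differently once the list gets
long. A sketch, again with made-up ids against the edge table from the
original post:

    -- form 1: literal IN list
    UPDATE edge SET keep = true
      WHERE node1 IN (1, 2, 3 /* ... */)
        AND node2 = 42;

    -- form 2: the same ids supplied as a VALUES list in the UPDATE's FROM clause
    UPDATE edge SET keep = true
      FROM (VALUES (1), (2), (3)) AS v(id)
      WHERE edge.node1 = v.id
        AND edge.node2 = 42;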
Hi. I have a Perl script whose main loop generates thousands of SQL updates
of the form
UPDATE edge SET keep = true WHERE node1 IN ( $node_list ) AND node2 =
$node_id;
...where here $node_list stands for a comma-separated list of integers, and
$node_id stands for some integer.
The list represent