On Thu, Aug 21, 2014 at 4:29 PM, Eli Naeher enae...@gmail.com wrote:
Clearly the bulk of the time is spent sorting the rows in the original table, and then again sorting the results of the subselect. But I'm afraid I don't really know what to do with this information. Is there any way I can
On 08/21/2014 08:29 AM, Eli Naeher wrote:
With around 1.2 million rows, this takes 20 seconds to run. 1.2 million rows is only about a week's worth of data, so I'd like to figure out a way to make this faster.
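(The query itself isn't quoted in this excerpt, but given the description of a per-row subselect that finds each event's predecessor, a window-function rewrite is the usual shape of a fix: it sorts the table once instead of once per row. Here stop_event and stop_time come from later messages in the thread, while stop_id and the PARTITION BY clause are assumptions about how "previous" is defined.)

-- Sketch only: names other than stop_event and stop_time are assumptions.
SELECT id,
       stop_time,
       lag(stop_time) OVER (PARTITION BY stop_id ORDER BY stop_time) AS prev_stop_time
FROM stop_event;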
Well, you'll probably be able to reduce the run time a bit, but even with really
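(To make that concrete: whether the sorts are spilling to disk is visible in EXPLAIN ANALYZE output, and work_mem can be raised per session to test. The 256MB figure below is invented for illustration, not taken from the thread.)

SET work_mem = '256MB';  -- session-local; illustrative value only
-- Re-run the query under EXPLAIN ANALYZE: if the Sort nodes now report
-- "Sort Method: quicksort" rather than "Sort Method: external merge Disk: ...",
-- the sorts fit in memory.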
Upping work_mem did roughly halve the time, but after thinking about Shaun's suggestion, I figured it's better to calculate this stuff once and then store it. So here is how the table looks now:
Table "public.stop_event"
 Column |
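(The column listing is truncated above. A minimal reconstruction of the calculate-once design might look like the following; only id, stop_time, and previous_stop_event are attested in the thread, so stop_id and the column types are assumptions.)

CREATE TABLE stop_event (
    id                  bigserial PRIMARY KEY,
    stop_id             integer NOT NULL,            -- assumption
    stop_time           timestamptz NOT NULL,
    previous_stop_event bigint REFERENCES stop_event (id)
);

-- One-time backfill: link each event to its predecessor at the same stop.
UPDATE stop_event se
SET previous_stop_event = prev.prev_id
FROM (SELECT id,
             lag(id) OVER (PARTITION BY stop_id ORDER BY stop_time) AS prev_id
      FROM stop_event) prev
WHERE prev.id = se.id;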
Oops, I forgot to include the test self-join query I'm using. It is simply:
SELECT se1.stop_time AS curr, se2.stop_time AS prev
FROM stop_event se1
JOIN stop_event se2 ON se1.previous_stop_event = se2.id;
On Thu, Aug 21, 2014 at 11:19 AM, Eli Naeher enae...@gmail.com wrote:
Upping work_mem
On Thu, Aug 21, 2014 at 7:19 PM, Eli Naeher enae...@gmail.com wrote:
However, when I try to do a test self-join using it, Postgres does two seq scans across the whole table, even though I have indexes on both id and previous_stop_event: http://explain.depesz.com/s/ctck. Any idea why those
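(Not an answer from the thread, but the usual explanation: a join that returns every row of both tables is normally cheapest as two seq scans feeding a hash join, and index nested loops only win when a small fraction of the rows is needed. One way to confirm the indexes are usable at all, as a diagnostic only:)

SET enable_seqscan = off;  -- diagnostic; do not leave this set
EXPLAIN ANALYZE
SELECT se1.stop_time AS curr, se2.stop_time AS prev
FROM stop_event se1
JOIN stop_event se2 ON se1.previous_stop_event = se2.id;
RESET enable_seqscan;
-- If the plan switches to index scans but runs slower, the planner's
-- original choice of seq scans was the right one.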