On Fri, Apr 5, 2013 at 8:51 AM, Mark Davidson <m...@4each.co.uk> wrote:

> Hi All,
>
> Hoping someone can help me out with some performance issues I'm having
> with the INDEX on my database. I've got a database with a data table
> containing ~55,000,000 rows of point data and an area table containing
> ~3,500 rows of polygon data. A user queries the data by selecting the
> areas they want to view, along with some other filters such as datetime
> and which datasets to query. This all works fine, and previously the
> intersection of the data rows with the areas was computed on the fly
> with PostGIS ST_Intersects. However, as the data table grew, we decided
> it would make sense to offload the data processing: rather than
> calculating the intersection for a row on the fly each time, we
> pre-calculate it and store the result in a join table. As a result,
> this produced a data_area table containing ~250,000,000 rows.
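
For context, the two approaches described above might look roughly like
the following. This is only a sketch: the table and column names (geom,
captured_at, data_id, area_id) are assumptions based on the description,
not taken from the actual schema.

```sql
-- On-the-fly approach: intersect points with the selected polygons at
-- query time. Assumes GiST indexes on data.geom and area.geom.
SELECT d.*
FROM data d
JOIN area a ON ST_Intersects(d.geom, a.geom)
WHERE a.id = ANY (:selected_area_ids)
  AND d.captured_at BETWEEN :start_ts AND :end_ts;

-- Pre-computed approach: join through the materialized data_area table
-- instead of calling ST_Intersects per row.
SELECT d.*
FROM data d
JOIN data_area da ON da.data_id = d.id
WHERE da.area_id = ANY (:selected_area_ids)
  AND d.captured_at BETWEEN :start_ts AND :end_ts;
```

The trade-off is that the second form avoids the geometric test but has
to scan and join a ~250M-row table, so its win depends heavily on the
indexes on data_area.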
>


I think your old method is likely the better option, especially if the
intersection can be offloaded to the client or app server (I don't know
enough about ST_Intersects to say how feasible that is).

What is the difference in performance between the old method and the new
method?
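
One way to answer that concretely is to run the same representative
query both ways under EXPLAIN (ANALYZE, BUFFERS) and compare where the
time and I/O go. The query below is purely illustrative; the names are
assumed, not from the real schema.

```sql
-- Compare the actual plan, timing, and buffer usage for each approach.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM data d
JOIN data_area da ON da.data_id = d.id
WHERE da.area_id = ANY ('{1,2,3}'::int[]);
```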

Cheers,

Jeff
