Greetings.

I have a question regarding storing computed values. Essentially, it's a question about caching. I'm willing to implement a cache on the client side or via pgmemcache, but I wonder whether I can do this in a PostgreSQL table instead?

The background is that I'm using PostGIS to store thousands of points across the globe. Once in a while, I want to connect a subset of these points using an algorithm I have in a PL/Perl procedure. Depending on where the endpoints are located, this algorithm takes between 10 ms and 40 ms to run for each pair of points. Connecting hundreds of these points and then displaying them over the web therefore takes a really long time. Some connections are calculated very frequently, and some will probably never be calculated, though it's difficult to determine in advance which connections fall into which category.

The locations of these points sometimes change, and, relatively infrequently, new points are added. Even more infrequently, some points are deleted.

I would like to have a table that contains a connection for each distinct pair of points (point1 to point2 is the same as point2 to point1). This table would then be automatically updated every time a modification is made to the reference table. If my calculation is correct, the new table would contain 3,654,456 rows using the current data set.
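As a sanity check, 3,654,456 is exactly 2704 × 2703 / 2, i.e. the number of unordered pairs among 2,704 points. A sketch of such a pair table might look like the following (all names here — `points`, `id`, `connections`, `path` — are hypothetical; the `CHECK` constraint is one common way to store each unordered pair exactly once):

```sql
-- Hypothetical schema; "points" stands in for the PostGIS reference table.
CREATE TABLE connections (
    point_a integer NOT NULL REFERENCES points (id),
    point_b integer NOT NULL REFERENCES points (id),
    path    geometry,  -- cached result of the PL/Perl calculation
    PRIMARY KEY (point_a, point_b),
    -- Store each unordered pair once: (a, b) and (b, a) map to the same row.
    CHECK (point_a < point_b)
);
```

Lookups would then normalize the pair with `LEAST()`/`GREATEST()` before querying, e.g. `WHERE point_a = LEAST($1, $2) AND point_b = GREATEST($1, $2)`.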

I realize I could use a TRIGGER to keep the connections table fresh, and perhaps that's also a solution.
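A minimal sketch of that trigger approach, assuming the hypothetical `points` and `connections` tables above: rather than recomputing connections eagerly (expensive at 10–40 ms per pair), the trigger simply invalidates the cached rows touching a modified or deleted point, so they can be recomputed lazily on next access. Newly inserted points have no cached rows, so the trigger only needs to fire on UPDATE and DELETE.

```sql
-- Hypothetical sketch: invalidate cached connections when a point changes.
CREATE OR REPLACE FUNCTION invalidate_connections() RETURNS trigger AS $$
BEGIN
    -- Drop every cached connection involving the modified/deleted point;
    -- the PL/Perl procedure repopulates rows on demand.
    DELETE FROM connections
     WHERE point_a = OLD.id
        OR point_b = OLD.id;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER points_cache_invalidate
    AFTER UPDATE OR DELETE ON points
    FOR EACH ROW EXECUTE PROCEDURE invalidate_connections();
```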

But, really, I'm just wondering whether PostgreSQL has some automated, built-in facility for situations like this?

Thanks.

Colin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general