Greetings -
I have noticed for a long time that, despite checking the “use estimated
metadata” box when adding PostGIS layers, QGIS still issues two full-table-scan
queries. It does run a number of queries against pg_stats and other catalogs
looking for various things, but the net result is still two full table scans,
looking like:
SELECT count(distinct ("block_id"))=count(("block_id")) FROM
"christest"."block_th_view"
SELECT st_extent("the_geom") FROM "christest"."block_th_view"
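
For comparison, both pieces of information can often be answered from the
catalogs without touching the data, at least where planner statistics and a
unique index exist (so for analyzed base tables rather than views). A rough
sketch of what I mean, reusing the names from the queries above and assuming
PostGIS 2.1+ for ST_EstimatedExtent:

-- Extent from the planner statistics, no table scan (needs PostGIS >= 2.1 and
-- an ANALYZEd base table; it will not work on a plain view):
SELECT ST_EstimatedExtent('christest', 'block_th_view', 'the_geom');

-- Uniqueness from the catalog: is "block_id" covered by a single-column
-- unique index?
SELECT EXISTS (
  SELECT 1
  FROM pg_index i
  JOIN pg_class t ON t.oid = i.indrelid
  JOIN pg_namespace n ON n.oid = t.relnamespace
  JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = i.indkey[0]
  WHERE n.nspname = 'christest'
    AND t.relname = 'block_th_view'
    AND i.indisunique
    AND i.indnatts = 1
    AND a.attname = 'block_id'
) AS block_id_is_unique;
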
Consider a well-indexed table with billions of GPS points, with a free-running
counter as its unique key, that only allows display up to a zoom scale of
1:1000. Good GiST indexing and fast RAID disks make such a huge table very
usable ... once given a chance. The user KNOWS its extent is the whole world
and that the primary key is indeed unique, but there seems to be no way to
“just add it”.
For tables with many hundreds of millions of objects, this is an unreasonable
operation to perform every time the layer is added.
I am a complete newbie to the QGIS source, but I am wondering whether
QgsPostgresProvider::extent() might be enhanced, or perhaps more generally
QgsVectorDataProvider::extent(), to store some metadata in the local SQLite
database, to recognize a new “extent” tag on a maplayer within a project, and
to offer a “skip metadata checks” preference that just trusts these values.
Similarly for QgsPostgresProvider::uniqueData().
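
As a very rough illustration of the kind of cache I have in mind (the table
and column names here are purely made up, not anything that exists in the
local SQLite database today):

CREATE TABLE IF NOT EXISTS provider_metadata_cache (
  datasource    TEXT PRIMARY KEY,  -- connection + "schema"."table" + geometry column
  xmin REAL, ymin REAL, xmax REAL, ymax REAL,  -- extent the user asserts is correct
  key_is_unique INTEGER,           -- 1 = trust the chosen key column to be unique
  updated_at    TEXT               -- when the user last confirmed these values
);

-- e.g. the layer above, asserted to cover the whole world with a unique key:
INSERT OR REPLACE INTO provider_metadata_cache
VALUES ('mydb "christest"."block_th_view" (the_geom)',
        -180, -90, 180, 90, 1, date('now'));

With the “skip metadata checks” preference set, extent() and uniqueData()
would return these stored values (or the “extent” tag from the project file)
instead of running the two scans above.
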
Helping users avoid unneeded full-table scans would make QGIS *far* more
usable.

Thank you,
Chris Nicholas
