As Rémi notes, going with a point-cloud approach might be wiser, particularly if
you aren't storing much more about the points than coordinates and other lidar
return information. Since you're only working with points, depending on your
spatial distribution (over the poles? across the dateline?) you might just
geohash them and index them with a btree instead. For points, a btree will be
more efficient than an rtree; however, you'll still have a multi-billion-record
table, which could cause other slowdowns, depending on how you plan to access
this data once you've indexed it.
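
An untested sketch of the geohash/btree idea, using the table and column names
from your CREATE INDEX; the precision of 10, the text_pattern_ops opclass (which
lets btree serve LIKE-prefix scans), and the 'u09t' prefix are all guesses you'd
tune to your data:

  CREATE INDEX nodes_geohash_idx
    ON nodes (ST_GeoHash(geom::geometry, 10) text_pattern_ops);

  -- prefix scans on the hash then stand in for bounding-box queries
  SELECT count(*) FROM nodes
  WHERE ST_GeoHash(geom::geometry, 10) LIKE 'u09t%';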

P.

-- 
Paul Ramsey
http://cleverelephant.ca
http://postgis.net

On January 15, 2015 at 8:44:03 AM, Rémi Cura (remi.c...@gmail.com) wrote:

Hey,
You may want to post this on the PostGIS list.

I take it that so many rows means either raster or point-cloud data.
If it is a point cloud, simply consider using pg_pointcloud.
A 6-billion-point cloud comes to about 600 k rows for one of my data sets.
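
A rough sketch of loading patches (the pcid 1 entry in pointcloud_formats and
the raw_points(x, y, z) staging table are assumptions for illustration):

  CREATE EXTENSION pointcloud;
  CREATE EXTENSION pointcloud_postgis;

  -- pack raw points into compressed patches, one per ~100-unit grid cell
  CREATE TABLE patches AS
  SELECT PC_Patch(PC_MakePoint(1, ARRAY[x, y, z])) AS pa
  FROM raw_points
  GROUP BY floor(x / 100), floor(y / 100);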

If it is raster, you may consider using the PostGIS raster type.
If you really want to keep that much geometry,
you may want to partition your data on a regular grid.
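For example, with inheritance partitioning (no declarative partitioning yet in
this release; the integer cell column, derived from the coordinates, is my own
illustration):

  -- one child table per grid cell; the parent stays empty
  CREATE TABLE nodes_cell_42 (CHECK (cell = 42)) INHERITS (nodes);
  CREATE INDEX nodes_cell_42_geom_idx ON nodes_cell_42 USING gist (geom);
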
Cheers,
Rémi-C

2015-01-15 15:45 GMT+01:00 Andy Colson <a...@squeakycode.net>:
On 1/15/2015 6:44 AM, Daniel Begin wrote:
Hi, I'm trying to create an index on coordinates (geography type) over a
large table (4.5 billion records) using GiST...

CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);

The command ran for 5 days until my computer stopped because of a power outage!
Before restarting the index creation, I am asking the community if there are
ways to shorten the time it took the first time :-)

Any idea?

Daniel

Set maintenance_work_mem as large as you can.
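
Something like this in the session that builds the index (the 2GB value is only
an example; size it to your available RAM):

  SET maintenance_work_mem = '2GB';
  CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);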

-Andy



