On 18/09/13 10:51, Luca Delucchi wrote:
On 17 September 2013 22:10, Markus Neteler<[email protected]> wrote:
Hi,
I came across this question:
http://gis.stackexchange.com/questions/71734/how-to-calculate-mean-coordinates-from-big-point-datasets
and wondered if this approach would be the fastest:
# http://grass.osgeo.org/sampledata/north_carolina/points.las
v.in.lidar input=points.las output=lidarpoints -o
...
Number of points: 1287775
...
Now I would use
v.univar -d lidarpoints type=point
(still calculating here...)
Is it the best way?
maybe v.median [1] could help?
[1] http://trac.osgeo.org/grass/browser/grass-addons/grass7/vector/v.median
Right.
Here's a little test:
$ time v.median in=elev_lid792_randpts
638648.500000|220378.500000
real 0m0.249s
user 0m0.180s
sys 0m0.044s
$ time v.to.db elev_lid792_randpts op=coor -p | awk -F'|' \
  'BEGIN{SUMX=0; SUMY=0; N=0} {N+=1; SUMX+=$2; SUMY+=$3} END{print SUMX/N, SUMY/N}'
Reading features...
100%
638544 220339
real 0m0.106s
user 0m0.100s
sys 0m0.020s
It would be interesting to see results for big data. And AFAIK the median is a
bit more difficult to compute in awk. I imagine that replacing the median with
the mean in numpy is no problem (might be a flag to add to v.median).
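The mean-vs-median switch could look roughly like this in numpy. This is only
a sketch of the idea, not actual v.median code; the function name and the
use_mean flag are hypothetical:

```python
import numpy as np

def center_coords(x, y, use_mean=False):
    """Return a (x, y) center point of a point cloud.

    By default the per-axis median is computed (as v.median does);
    with use_mean=True the per-axis mean is returned instead.
    The coordinate arrays could come e.g. from parsing the output
    of `v.to.db ... op=coor -p`.
    """
    f = np.mean if use_mean else np.median
    return float(f(x)), float(f(y))

# small made-up sample, just to show both variants
x = np.array([638640.0, 638650.0, 638655.0])
y = np.array([220370.0, 220380.0, 220385.0])
print(center_coords(x, y))                 # median center
print(center_coords(x, y, use_mean=True))  # mean center
```

Both np.median and np.mean work on the whole array in C, so even for a few
million points this should stay well below a second once the coordinates are
in memory.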
I didn't try v.points.cog as that actually creates a new vector map.
Moritz
_______________________________________________
grass-dev mailing list
[email protected]
http://lists.osgeo.org/mailman/listinfo/grass-dev