After watching this group for a while, I thought it would be interesting to 
bring up a topic I have been working on for several years and see if I can 
get any help from the geowanking crowd.

Goal:  Create highly accurate and complete digital maps of the 
transportation network, suitable for safety-of-life applications, with 
accuracy commensurate with future GNSS systems (decimeters).  It seems to 
me that this can only be done through a statistical, probe-based approach, 
since imagery and 'mobile mapping' approaches are error-prone and have low 
revisit rates.

Problem:  Given a very large set of vehicle PVT (position, velocity, time) 
information, 
1) derive the location of the centerline of every lane, along with lane 
attributes such as direction and ability to cross to the adjacent lane, 
2) derive the location of all turn restrictions and traffic controls, and 
3) relate the PVT accuracy of the data to the accuracy of the resulting 
'map' for different quantities of data. 
For extra credit, identify movements within lanes that indicate a vehicle 
intends to turn, stop, or execute some other maneuver.  Of course, all of 
these answers must come with a statistical accuracy metric.
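
To make the inputs and outputs concrete, here is a minimal sketch in 
Python; the field names are purely illustrative, not a proposed standard:

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PVTRecord:
    """One probe observation from one (anonymous) vehicle trip."""
    trip_id: int        # per-trip identifier only, never a persistent vehicle id
    t: float            # GPS time, seconds
    lat: float          # degrees
    lon: float          # degrees
    speed: float        # m/s
    heading: float      # degrees from true north
    sigma_pos: float    # reported 1-sigma position accuracy, meters

@dataclass
class LaneEstimate:
    """Desired output: one lane centerline, attributes, and an accuracy metric."""
    centerline: List[Tuple[float, float]]    # (lat, lon) vertices
    direction: float                         # nominal travel heading, degrees
    can_cross_left: Optional[bool]           # lane-change legality, if inferable
    can_cross_right: Optional[bool]
    sigma_centerline: float                  # estimated 1-sigma accuracy, meters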

Background: There are a lot of GPS units in a lot of cars collecting a lot 
of data on where the cars (roads) are and how they move (controls such as 
yields and stops).  This data is then thrown away.  If this data can be 
captured (and there are efforts underway to do this), how does one build a 
map of the roads and all of the signs and signals that control the motion 
of vehicles?  I believe that the entire infrastructure that influences the 
behavior of vehicles is captured in this data, and that, by the central 
limit theorem, the data yields ever-increasing (and quantifiable!) accuracy 
as more of it accumulates.  This is exactly what is needed for map-based 
transportation safety systems currently under development, and it is one 
very promising way to address the 40,000+ fatalities and roughly $200B a 
year caused by accidents on US roads.
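
As a back-of-the-envelope illustration of the central-limit-theorem 
argument (assuming independent, zero-mean position errors across passes, 
which real GPS errors only approximate because of correlated atmospheric 
and multipath effects), the centerline uncertainty from N independent 
passes over the same lane falls as 1/sqrt(N):

import math

def centerline_sigma(sigma_pvt_m: float, n_passes: int) -> float:
    """1-sigma accuracy of a mean-of-passes centerline estimate, assuming
    independent, unbiased per-pass errors (optimistic for real GPS, where
    errors are correlated in time and between nearby vehicles)."""
    return sigma_pvt_m / math.sqrt(n_passes)

# Example: 3 m consumer-GPS error needs on the order of (3 / 0.3)**2 = 100
# independent passes to reach the ~30 cm lane-level target.
print(centerline_sigma(3.0, 100))   # ~0.3 m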

We spent a couple of years looking at this and devised a k-means approach, 
bundling the data in the cross-track direction (perpendicular to the 
direction of travel) to pull out the lanes.  The data could then be grouped 
by lane to derive centerlines.  Stop signs and traffic lights were easy; we 
never got to yields or speed limits.  Our approach was successful, but 
computationally intensive, and it required working with the entire data set 
rather than using a Kalman-filter-style approach in which data can be added 
incrementally to improve the solution's validity (or indicate that the 
world has changed).  We also did not get far on the accuracy metrics.  The 
key to this problem seems to be grouping vehicles into like groups going 
from 'A' to 'B', where 'A' and 'B' are any two arbitrary points on the road 
network, with an accuracy of around 30 cm.  We can 'generally' assume that 
a vehicle stays within 30 cm of the lane center.  One problem, of course, 
is that the position error of any individual vehicle is generally somewhat 
larger than the lane width.
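
For anyone who wants to play with the idea, here is a minimal sketch of 
that cross-track bundling step (not the code we actually used; it assumes 
the probe points have already been grouped by road segment and projected 
onto a local reference line, so each point is a signed offset in meters, 
and it leans on scikit-learn's KMeans):

import numpy as np
from sklearn.cluster import KMeans

def lane_center_offsets(offsets_m, max_lanes=6, lane_spread_m=0.5):
    """Estimate lane-center offsets from signed cross-track offsets
    (meters from a per-segment reference line).

    Tries k = 1..max_lanes and keeps the smallest k whose clusters are
    each tighter than lane_spread_m (1-sigma), a crude stand-in for the
    'vehicles stay within ~30 cm of lane center' assumption.
    """
    x = np.asarray(offsets_m, dtype=float).reshape(-1, 1)
    km = None
    for k in range(1, max_lanes + 1):
        km = KMeans(n_clusters=k, n_init=10).fit(x)
        spreads = [x[km.labels_ == i].std() for i in range(k)]
        if max(spreads) <= lane_spread_m:
            break  # smallest k that explains the data as tight lanes
    return sorted(float(c[0]) for c in km.cluster_centers_)

# e.g. lane_center_offsets([-3.6, -3.5, -3.4, 0.1, -0.1, 3.5, 3.6])
# returns roughly [-3.5, 0.0, 3.55] (three lanes).

A real implementation would want a proper model-selection criterion (BIC 
or similar) for the number of lanes instead of the fixed spread threshold, 
and ideally a recursive update so that new probes refine the centers 
without re-clustering the entire data set.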

Does anyone know of anybody working on this (or a similar) problem?

Any ideas on how to approach this from the geo-statistical crowd out 
there?  We came at this from an AI perspective, and I think a 
geo-statistical approach might have gone in a different direction.

Other thoughts?

-=Chris

PS-  This approach is really promising for getting public, low-cost, 
accurate maps of transportation networks, and yes, there are some serious 
privacy issues to work through.  There will never be unique identifiers in 
the data, and we can cut out the first and last mile of each trip.

[EMAIL PROTECTED]
650/845-2579
