Hi folks,
I have to come back to this issue.
After doing a little research on it, I came to the following conclusion:
some sort of W3DS[1] implementation would be sufficient for delivering
the augmentations for the geographic features to be augmented.
The application chain would look something like this:
1. Geometry and semantics of the real-world objects to be augmented are
modeled, for example in GML (or one of its derivatives such as CityGML),
with a representation in an absolute (geographic or geocentric)
coordinate system, including the model of the augmentation itself.
2. The mobile client connects to an implementation of a W3DS, which
transforms the modeled real-world objects from step 1 into display
elements represented in a format like VRML/X3D or KML.
3. The client receives, according to its position (no orientation
(pitch, yaw, roll) is needed for this step), a scene of the surrounding
area from the W3DS.
4. By obtaining orientation (pitch, yaw and roll) through sensors,
geometry matching, etc., the mobile client is able to navigate through
the scene and augment/overlay its objects onto the real-world objects
captured by its camera.
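To make steps 2–4 a bit more concrete, here is a rough Python sketch.
The W3DS request is built as an OGC-style key/value-pair URL; the exact
parameter names (POI, DISTANCE, FORMAT, the version number, and the
endpoint URL) are assumptions for illustration, not taken from the spec.
The second part shows how the client could use yaw/pitch/roll to rotate
a scene point into its own camera frame:

```python
import math
from urllib.parse import urlencode

def getscene_url(base_url, lon, lat, radius_m=200):
    """Build a hypothetical W3DS GetScene KVP request around a point of
    interest. Parameter names are assumed, not normative."""
    params = {
        "SERVICE": "W3DS",
        "REQUEST": "GetScene",
        "VERSION": "0.3.0",          # assumed version
        "CRS": "EPSG:4326",          # absolute geographic CRS, as in step 1
        "POI": f"{lon},{lat}",       # client position from step 3
        "DISTANCE": radius_m,        # radius of the surrounding area
        "FORMAT": "model/x3d+xml",   # display format from step 2
    }
    return base_url + "?" + urlencode(params)

def rotation_matrix(yaw, angle_pitch, roll):
    """3x3 rotation matrix R = Rz(yaw) * Ry(pitch) * Rx(roll),
    built from the orientation sensors in step 4 (angles in radians)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(angle_pitch), math.sin(angle_pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def rotate(matrix, vec):
    """Apply the rotation to a scene point (relative to the client),
    yielding its position in the camera frame for overlay."""
    return [sum(matrix[i][j] * vec[j] for j in range(3)) for i in range(3)]

# Hypothetical endpoint, just to show the shape of the request:
url = getscene_url("http://example.org/w3ds", 7.10, 50.73)

# A 90-degree yaw turns a point ahead of the client (x axis)
# to its left (y axis):
p = rotate(rotation_matrix(math.pi / 2, 0.0, 0.0), [1.0, 0.0, 0.0])
```

This is only meant to show the division of labour: the server resolves
position into a scene, and the client resolves orientation into an
overlay.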
Does that make sense?
Am I missing something, or are there better approaches?
Best regards,
Christian
[1] http://portal.opengeospatial.org/files/?artifact_id=8869
_______________________________________________
Geowanking mailing list
[email protected]
http://geowanking.org/mailman/listinfo/geowanking_geowanking.org