On Thu, Aug 27, 2009 at 7:20 PM, David Colleen<[email protected]> wrote:
> Today, we have a lot of great, super smart neo-geographers that may have
> first experienced 3D via a game platform, SL or Google Earth. Their first 2D
> mash ups initially used quasi-open platforms, like Google Maps and then
> segued into truly open platforms such as Open Street Map. Some are beginning
> to explore 3D for Earth viewing and augmented reality uses. Their first
> experience with 3D may have come via SketchUp and GE. This was a great
> starting point but they learned the limits of this approach and they are now
> scanning the horizons for truly open, industrial strength 3D. I would
> encourage such people to check out X3D (www.web3d.org) and give the topic a
> fair hearing (please put your ear buds in when you hear old negative knee
> jerk rants). X3D/VRML is also part of other standards efforts such as OGC's
> CityGML and MPEG4. If you like Earth globes, check out the X3D Earth effort
> to make an open standards / open source analog to Google Earth. Most X3D
> viewers are free or open source... so indulge!


I agree with almost everything stated here, and I will confess that I
am not all that familiar with X3D, though at first glance it looks
well-engineered as such things go.  However, when I look at the docs
-- and standards are on my mind at the moment -- one large question
looms for which I cannot easily find an answer: how well does the
protocol design scale for real workloads? What kinds of scales were
considered in its design? Many elements of it seem biased toward
mostly static models.

Consider, for example, workloads that sustain an update rate of tens
of millions of geospatial polygons per second against a single,
contiguous earth model containing billions to trillions of polygon
records. That is not an unrealistic application by any stretch of the
imagination, but many aspects of the protocol design, while negligible
at small scale, seem likely to become expensive when scaled up.  By
analogy, consider the evolution of binary-encoded XML protocols: plain
text XML lacked the properties that would let it scale for that
purpose, so they effectively reinvented ASN.1 wire encoding. Fine when
apps are small-ish, but that order-of-magnitude performance penalty
adds up when apps get big. I am having a hard time thinking of *any*
protocol that was properly engineered for scalability in its early
releases.
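To make the encoding-overhead concern concrete, here is a rough
back-of-envelope sketch. All sizes and the sample markup are my own
illustrative assumptions, not measurements of X3D or any particular
implementation; the point is only how quickly verbose text encoding
turns into wire bandwidth at the update rates discussed above.

```python
# Back-of-envelope: wire cost of polygon updates at scale.
# All sizes below are illustrative assumptions, not X3D measurements.

# One triangle update: 3 vertices x (lat, lon, alt) as 8-byte doubles,
# plus an 8-byte record ID, in a hypothetical packed binary encoding.
BINARY_BYTES = 3 * 3 * 8 + 8  # 80 bytes

# The same update as verbose text XML (made-up element/attribute names).
xml_sample = (
    '<polygonUpdate id="1234567890">'
    '<v lat="37.7749295" lon="-122.4194155" alt="12.5"/>'
    '<v lat="37.7750000" lon="-122.4195000" alt="12.5"/>'
    '<v lat="37.7751000" lon="-122.4196000" alt="12.5"/>'
    '</polygonUpdate>'
)
XML_BYTES = len(xml_sample)

RATE = 10_000_000  # updates per second (the "tens of millions" scenario)

def gbits_per_sec(bytes_per_update: int, rate: int) -> float:
    """Sustained wire bandwidth in gigabits per second."""
    return bytes_per_update * rate * 8 / 1e9

print(f"binary:   {BINARY_BYTES} B/update -> "
      f"{gbits_per_sec(BINARY_BYTES, RATE):.1f} Gbit/s")
print(f"text XML: {XML_BYTES} B/update -> "
      f"{gbits_per_sec(XML_BYTES, RATE):.1f} Gbit/s")
print(f"overhead factor: {XML_BYTES / BINARY_BYTES:.1f}x")
```

Even this toy example, which ignores parsing cost entirely, shows a
multiple-x difference in raw bytes on the wire; negligible for a
handful of models, but very real once the pipe has to carry tens of
millions of updates per second.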

I'm not saying it is not designed to scale well; I am asking because I
am too lazy and short on time to do serious research. ;-)  Just how
well suited is the standard to non-trivial real-time 3-D geospatial
models?  Google Earth obviously comes up very short in this domain,
but does X3D buy only a modest extension of capability, or is it a
genuinely robust model that can handle anything thrown at it?

-- 
J. Andrew Rogers
realityminer.blogspot.com

_______________________________________________
Geowanking mailing list
[email protected]
http://geowanking.org/mailman/listinfo/geowanking_geowanking.org
