On Fri, Aug 28, 2009 at 8:05 AM, David Colleen <[email protected]> wrote:
> Clearly, GE has developed a robust, scalable system and much
> could be learned from their experience if they would share.
> Michael has participated in X3D Earth sessions. One interesting
> comment that he made was the OGC's WMS/WFS were designed
> in a way that could not support intense user loads.
Yes, the OGC's standards have a number of issues that degrade their utility; they are not entirely suitable for many use cases. Scalability is one of those problems. Hence my hope that there are better alternatives.

> We built P9's streaming MU server along similar lines. Over the
> past year, we have been ramping up to support millions of 3D
> social networking users. We recently decided to switch from
> PostgreSQL to MySQL for scaling reasons.

A familiar path. In practice, MySQL is only an incremental improvement in scalability -- if you hit the wall with PostgreSQL, MySQL's wall is coming up soon. And MySQL's additional headroom comes largely from letting you disable expensive safety features that PostgreSQL does not.

The scalability of the standard geospatial databases is now orders of magnitude too poor for many applications. More and more apps are landing on the other side of that hockey stick due to their implicit scaling requirements.

> Now, coming back to one of your original questions Andrew....
> almost all 3D scene graphs are syntactically similar and the
> resulting file sizes are also very similar. VRML, encoded as
> X3D (xml) is about 5-10% larger in file size. Encoded as
> Collada, the file grow by 40% in my tests. ... Collada was
> designed as a data storage and exchange format... not for
> real time use.

Real-time is the use case here; data size does not matter much beyond the implied bandwidth constraints to the end user. The use case, to tie it to something familiar, is a huge city model being updated millions of times per second with complex geospatial data. The end user will see almost none of those updates due to LOD, viewport, and other constraints, but they will need to see a modest number of them. Obviously it is implausible to constantly regenerate Collada-type files, but it should be plausible to have an efficient protocol that captures the continuous diffs relevant to a specific end user.
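To make that concrete, here is a minimal sketch of what per-user diff filtering could look like. This is purely illustrative -- the `Update`, `Viewer`, and `relevant` names are hypothetical, the viewport is a 2D bounding box, and the LOD test is a crude feature-size cutoff; a real protocol would use spatial indexing and a far richer relevance model:

```python
# Hypothetical sketch: the back end absorbs a firehose of model updates,
# but each viewer is streamed only the diffs that intersect its viewport
# and pass its level-of-detail threshold. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Update:
    x: float       # position of the updated feature
    y: float
    size: float    # feature extent, used for a crude LOD test

@dataclass
class Viewer:
    min_x: float
    min_y: float
    max_x: float
    max_y: float
    lod_cutoff: float  # smallest feature extent this viewer cares about

def relevant(update: Update, viewer: Viewer) -> bool:
    """True if this diff should be streamed to this viewer."""
    in_viewport = (viewer.min_x <= update.x <= viewer.max_x and
                   viewer.min_y <= update.y <= viewer.max_y)
    passes_lod = update.size >= viewer.lod_cutoff
    return in_viewport and passes_lod

def diff_stream(updates, viewer):
    """Filter the global update firehose down to one viewer's diff stream."""
    return [u for u in updates if relevant(u, viewer)]

# Three updates hit the model; this viewer should receive only the first:
# the second is outside the viewport, the third is below the LOD cutoff.
updates = [Update(1, 1, 5.0), Update(50, 50, 5.0), Update(2, 2, 0.1)]
viewer = Viewer(0, 0, 10, 10, lod_cutoff=1.0)
print(len(diff_stream(updates, viewer)))  # prints 1
```

The point is that relevance is evaluated server-side per viewer, so the wire carries a tiny, continuously updated subset of the back-end churn rather than periodically re-shipped whole-scene files.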
In short, it is not so much that an individual user sees large amounts of dynamic data, but that there is a vast back end where little can be assumed to be static. The fine point, I think, is the difference between static data and dynamic data that almost never changes, and how that distinction affects protocol optimization.

There are two questions here. First, is there an existing geospatial visualization environment designed to work well under a non-static model assumption at scale? Second, even if no one has implemented such an application environment, does an existing protocol exist that could support such an application being built?

I can easily see that something like an OTOY-style model would probably work for this, but that almost entirely black-boxes the environment inside an opaque stream -- not so good for system interoperability.

--
J. Andrew Rogers
realityminer.blogspot.com

_______________________________________________
Geowanking mailing list
[email protected]
http://geowanking.org/mailman/listinfo/geowanking_geowanking.org
