On 8/9/2011 5:37 PM, David Barbour wrote:
On Tue, Aug 9, 2011 at 3:40 PM, BGB <[email protected]> wrote:

    ideally, we should probably be working with higher-level
    "entities" instead of lower-level geometry.


I agree with rendering high-level concepts rather than low-level geometries.

But I favor a more logical model - i.e. rendering a set of logical "predicates".

Either way, we have a set of records to render. But predicates can be computed dynamically, a result of composing queries and computing views. Predicates lack identity or state. This greatly affects how we manage the opposite direction: modeling user input.
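To make the distinction concrete, here is a minimal sketch (names like `world_facts` and `view_open_doors` are invented for illustration, not from any real system): the renderer consumes plain records produced by composing queries over base facts, and those records carry no identity or mutable state of their own.

```python
# Base facts: plain records with no identity and no mutable state.
world_facts = {
    ("door", "room1", "room2", "closed"),
    ("door", "room2", "room3", "open"),
    ("avatar", "alice", "room1"),
}

def view_open_doors(facts):
    """A computed view: predicates derived dynamically by querying facts."""
    return {f for f in facts if f[0] == "door" and f[3] == "open"}

def render(records):
    """The renderer consumes whatever record set the views produce."""
    return sorted(" ".join(r) for r in records)

print(render(view_open_doors(world_facts)))
```

The point of the sketch: since views are recomputed from facts, there is no per-predicate object to attach input handlers to, which is why user input needs separate treatment.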


note that at a conceptual level (in the map format), entities are still declarative. whether or not they have "identity" is also uncertain.

at runtime, entities have state and identity, but need not necessarily map 1:1 with those present in the map definition. in my engine, both types of entity actually have different in-memory types and representations.
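Something like the split described above could be sketched as two distinct in-memory types (all field names here are hypothetical, not taken from BGB's engine): a frozen, declarative map-entity record, and a mutable runtime entity with identity, where spawning need not be 1:1.

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass(frozen=True)
class MapEntity:
    """Declarative entity as it appears in the map definition."""
    classname: str
    origin: tuple
    properties: dict = field(default_factory=dict)

_ids = count(1)  # runtime identity comes from the engine, not the map

@dataclass
class RuntimeEntity:
    """Live entity: has identity (ent_id) and mutable state."""
    ent_id: int
    classname: str
    position: list
    health: int = 100

def spawn(map_ent: MapEntity) -> list:
    """One map declaration may expand into several live entities,
    so the two sets need not correspond 1:1."""
    n = map_ent.properties.get("count", 1)
    return [RuntimeEntity(next(_ids), map_ent.classname, list(map_ent.origin))
            for _ in range(n)]

squad = spawn(MapEntity("monster", (0, 0, 0), {"count": 3}))
```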



    possibly, ultimately all levels should be expressed, but what
    should be fundamental, what should be expressed in each map, ...
    is potentially a subject of debate.


I wouldn't want to build in any 'fundamental' features, except maybe strings and numbers. But we should expect a lot of de-facto standards - including forms, rooms, avatars, clothing, doors, buildings, landscapes, materials, some SVG equivalent, common image formats, video, et cetera - as a natural consequence of the development model. It would pay to make sure we have a lot of /good/ standards from the very start, along with a flexible model (e.g. supporting declarative mixins might be nice).

fair enough, though how I imagined it was potentially a little lower-level.

possibly much of the "baseline" would be defined in terms of various core entity types, matters of basic scene rendering and representation, ...



    I am not familiar with the Teatime protocol. apparently Wikipedia
    doesn't really know about it either...


Teatime was developed for Croquet. You can look it up on the VPRI site. But the short summary is:
* Each computer has a redundant copy of the world.
* New (or recovering) participant gets snapshot + set of recent messages.
* User input is sent to every computer by distributed transaction.
* Messages generated within the world run normally.
* Logical discrete clock with millisecond precision; you can schedule incremental events for the future.
* Smooth interpolation of more cyclic animations without discrete events is achieved indirectly: the renderer provides render-time.
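The logical-clock point in the summary above can be sketched roughly as follows (this is not the actual Teatime implementation, just an illustration under my own naming): each replica runs an identical priority queue of scheduled messages, with a sequence number as tie-breaker so every copy processes them in the same deterministic order.

```python
import heapq

class LogicalWorld:
    """One replica of the world, driven by a logical discrete clock."""
    def __init__(self):
        self.now_ms = 0   # logical clock, millisecond precision
        self.queue = []   # heap of (due_time_ms, seq, message)
        self.seq = 0      # tie-breaker for deterministic ordering
        self.log = []     # record of executed messages

    def schedule(self, delay_ms, message):
        """Schedule an incremental event for the (logical) future."""
        heapq.heappush(self.queue, (self.now_ms + delay_ms, self.seq, message))
        self.seq += 1

    def advance_to(self, t_ms):
        """Execute every message due up to t_ms, in deterministic order."""
        while self.queue and self.queue[0][0] <= t_ms:
            due, _, msg = heapq.heappop(self.queue)
            self.now_ms = due
            self.log.append((due, msg))
        self.now_ms = t_ms

w = LogicalWorld()
w.schedule(10, "open door")
w.schedule(5, "play sound")
w.advance_to(20)
```

Because execution order depends only on the queue contents, any replica fed the same input messages computes the same log, which is what makes the snapshot-plus-recent-messages recovery scheme workable.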


sounds vaguely similar to something I had done long ago.


This works well for medium-sized worlds and medium numbers of participants. It scales further by connecting a lot of smaller worlds together (via 'portals'), which will have separate transaction queues.

It is feasible to make it scale further yet using specialized protocols for handling 'crowds', e.g. if we were to model 10k participants viewing a stage, we could model most of the crowd as relatively static NPCs, and use some content-distribution techniques. But at this point we're already fighting the technology, and there are still security concerns, disruption tolerance concerns, and so on.


fair enough.

I would likely assume using a client/server model and file-based worlds.
granted, the "level of abstraction" could itself become an issue.


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
