Hi folks, I'm experimenting with Broomstick here at Adobe, and wanted feedback on an enhancement I'm thinking of making to the core framework. I'm very new to Away3D and relatively new to 3D stuff in general, so please be gentle if I say something dumb :)
The enhancement I'm working on consists of two interrelated concepts: object tagging and multiple active cameras. The idea is to support use cases like picture-in-picture, x-ray vision, "always-on-top" guns, design-time gizmos that need to draw on top of geometry, and so on. The model I have in mind is this:

* You can assign one or more arbitrary tags (strings) to any renderable object.
* A light can specify a set of "target tags". Only objects carrying at least one of those tags are affected by that light. (By default, lights affect all objects.)
* A camera can likewise specify a set of target tags. Only objects carrying at least one of those tags are rendered by that camera. (By default, cameras render all objects.)
* Instead of a single active camera, a view can have multiple active cameras. Cameras draw in an order specified by a "draw order" property added to Camera3D (higher numbers draw on top). Each camera can also specify a set of "clear flags", so instead of clearing both the color and depth buffers, it can clear just the depth buffer, for example.
* Hit testing also respects camera order: items drawn by cameras higher in the draw order get hit before items drawn by cameras lower in the draw order.
* I haven't thought about implementing this part yet, but eventually each camera should get its own viewport (for the picture-in-picture case).

So, for example, to implement an "always-on-top" gun that doesn't get clipped by world geometry, you tag the gun with a "weapon" tag, set up a gun camera that renders only the "weapon" tag, and make your main camera exclude that tag. Then you make both cameras active, with the gun camera drawing after the main camera and clearing only the depth buffer.

First off, does this model make sense?
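To make the model concrete, here's a minimal sketch of the per-frame flow. It's TypeScript rather than ActionScript (just for illustration), and every name in it (Camera, targetTags, excludedTags, clearFlags, renderFrame) is hypothetical, not actual Broomstick API. I've also assumed an explicit excludedTags set as one way to express "make your main camera not render that tag":

```typescript
// Hypothetical sketch of tags + multi-camera draw order + clear flags.
// None of these names are real Broomstick classes.
type ClearFlags = { color: boolean; depth: boolean };

interface Renderable { tags: Set<string>; }

class Camera {
  targetTags: Set<string> | null = null;   // null = render everything
  excludedTags: Set<string> = new Set();   // tags this camera skips (assumption)
  drawOrder = 0;                           // higher numbers draw on top
  clearFlags: ClearFlags = { color: true, depth: true };

  renders(obj: Renderable): boolean {
    for (const t of obj.tags) if (this.excludedTags.has(t)) return false;
    if (this.targetTags === null) return true;
    for (const t of obj.tags) if (this.targetTags.has(t)) return true;
    return false;
  }
}

// Per frame: sort active cameras by drawOrder, clear per each camera's
// flags, then draw that camera's subset of the scene. Returns a log of
// operations so the ordering is easy to inspect.
function renderFrame(cameras: Camera[], scene: Renderable[]): string[] {
  const log: string[] = [];
  for (const cam of [...cameras].sort((a, b) => a.drawOrder - b.drawOrder)) {
    log.push(`clear(color=${cam.clearFlags.color}, depth=${cam.clearFlags.depth})`);
    for (const obj of scene) if (cam.renders(obj)) log.push("draw");
  }
  return log;
}
```

In the gun example, the gun camera would get targetTags = {"weapon"}, drawOrder = 1, and clearFlags = { color: false, depth: true }, while the main camera excludes "weapon" and clears everything.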
Second, I've actually started implementing this on top of Broomstick (currently by subclassing rather than modifying the core code, although that approach is getting unwieldy as I add more features).

One major problem I'm running into is the way lights are managed. I had assumed that on each frame I could turn on the lights for the tags targeted by the first camera, render that camera, turn on the lights for the second camera, render that one, and so on. But because materials refer to lights directly, I don't have a good way to do this. I'd essentially have to walk all the materials and rewrite their light references on each frame (and there isn't actually a good way to do even that right now, since the only way to get a material to reconsider its lights is to invalidate its shader program).

Any thoughts on this?

Thanks,
nj
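One direction I've been considering is an indirection layer: materials hold a reference to a shared "light group" object instead of to concrete lights, so the renderer re-points one group per camera pass rather than walking every material. The sketch below shows only that indirection idea, again in TypeScript with hypothetical names (LightGroup, lightsForCamera, etc. are not Broomstick API), and it deliberately ignores the hard part, which is that changing the light set may still force a shader rebuild:

```typescript
// Hypothetical indirection: materials -> LightGroup -> lights.
// Swapping the group's contents once per camera pass replaces the
// per-material walk described above.
interface Light { name: string; targetTags: Set<string> | null; } // null = affects everything

class LightGroup {
  private lights: Light[] = [];
  setLights(lights: Light[]): void { this.lights = lights; }
  getLights(): Light[] { return this.lights; }
}

class Material {
  constructor(public lightGroup: LightGroup) {}
  // A real material would feed these lights to its shader; here we
  // just expose the names so the indirection is observable.
  activeLightNames(): string[] {
    return this.lightGroup.getLights().map(l => l.name);
  }
}

// Before rendering a camera pass, keep only the lights whose target
// tags overlap the camera's target tags (null on either side means
// "no restriction").
function lightsForCamera(all: Light[], cameraTags: Set<string> | null): Light[] {
  return all.filter(l => {
    if (l.targetTags === null || cameraTags === null) return true;
    for (const t of cameraTags) if (l.targetTags.has(t)) return true;
    return false;
  });
}
```

The render loop would then call lightsForCamera followed by setLights once per camera, and every material sharing that group picks up the change automatically; whether that can be done without invalidating shader programs is exactly the question.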
