Hi,

Apologies for my endless curiosity about motion blur and depth of field in
Blender.

Blender has Adaptive QMC for things like ambient occlusion, glossy
reflections and area lights.
What's the showstopper preventing motion blur and depth of field from being
implemented with this technique? Are there any specific limitations here? I
know it's probably time-consuming, but I'm referring to actual limitations
in the render architecture.
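For context, here is a minimal sketch of what I mean by Adaptive QMC: draw
low-discrepancy (e.g. Halton) samples and stop early once the running
estimate stabilizes. This is just an illustration of the general idea, not
Blender's actual implementation; the function names and the simple
mean-convergence stopping rule are my own assumptions.

```python
def halton(i, base):
    """i-th term of the Halton radical-inverse sequence -- the QMC part."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def adaptive_qmc(estimate, min_samples=8, max_samples=256, threshold=0.01):
    """Keep taking QMC samples until the running mean stabilizes.

    `estimate(u)` maps a [0,1) sample to a value (e.g. the result of one
    AO or glossy-reflection ray). Smooth regions converge quickly and get
    few samples; noisy regions run up to `max_samples`.
    """
    total = 0.0
    prev_mean = None
    for i in range(1, max_samples + 1):
        total += estimate(halton(i, 2))
        mean = total / i
        if i >= min_samples and prev_mean is not None:
            if abs(mean - prev_mean) < threshold:
                return mean, i  # converged early
        prev_mean = mean
    return total / max_samples, max_samples
```

The appeal is that the same machinery could in principle distribute samples
over lens position (depth of field) or shutter time (motion blur) instead
of hemisphere directions.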

Would rendering transform motion blur using instances interpolated with
Adaptive QMC be impossible with the current implementation? This wouldn't
account for deformation blur of course, but it would be a small step at
least. I've heard some mention of shading and sampling having to be
decoupled first; is this the issue?
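To make the transform-blur idea concrete, here is a rough sketch: per
sample, pick a shutter time, interpolate the instance's transform between
shutter open and close, and average the results. It is only an
illustration under my own assumptions (`shade` stands in for tracing a ray
against the repositioned instance, and the naive per-element matrix lerp
is a placeholder for proper rotation/scale/translation interpolation).

```python
import random

def lerp_matrix(m0, m1, t):
    """Naive per-element blend of two 4x4 transforms (a real renderer
    would decompose and interpolate rotation/scale/translation)."""
    return [[(1 - t) * a + t * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(m0, m1)]

def sample_transform_blur(shade, m_open, m_close, n=16):
    """Average shading over random shutter times in [0, 1).

    `shade(matrix)` is a hypothetical stand-in for tracing a ray against
    the instance placed at the interpolated transform. Deformation blur
    would additionally require interpolating vertex positions, which this
    sketch deliberately ignores.
    """
    total = 0.0
    for _ in range(n):
        t = random.random()  # shutter time for this sample
        total += shade(lerp_matrix(m_open, m_close, t))
    return total / n
```

The reason shading/sampling decoupling matters here, as I understand it,
is that each sample sees the scene at a different time, so shading can no
longer be computed once per surface point and reused across samples.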

Thanks
_______________________________________________
Bf-committers mailing list
[email protected]
http://lists.blender.org/mailman/listinfo/bf-committers
