There are a couple of things I'd like to note, especially ones not directly related to the OpenCL vs. CPU code question (most arguments there have been voiced already):
* On the question of whether a horizontal layout (Blender nodes, Softimage), a vertical layout (Houdini, Aviary) or a completely customized layout (Nuke) is preferable: I'd like to point out that it would probably be difficult to use socket names and default input values for sockets with anything other than horizontal nodes. Most software that uses a different layout approach seems to have just a single type of socket data, depending on the type of tree. For compositing systems this is simply the image buffer you want to manipulate; in more complex systems (such as Houdini) a socket connection can mean a parent-child object relation, vertex data, particle data, etc., depending on the type of tree.

* While the restriction to a single data type per tree allows a very clean layout and easily understandable data flow, it also means there needs to be a different way of controlling node parameters, which usually means scripted expressions. Currently many nodes in Blender have sockets that simply let you use variable parameters, calculated from input data with math nodes or from other nodes' results. AFAIK the closest equivalent to expressions in Blender would be the driver system, but making that into a feature generic enough to replace node-based inputs is probably a lot more work than "only" a compositor recode (correct me if I'm wrong).

* Having a general system for referencing scene data could be extremely useful, especially for the types of trees in the domain I am working in: particle simulations (and mesh modifiers lately). In compositor nodes the only real data that occasionally must be referenced is the camera (maybe later on curves could be useful for masking? just a rough idea). For simulation nodes, access to objects, textures, lamps, etc. is even more crucial. We already discussed that such references/pointers would have to be constants, meaning their concrete value is defined during tree construction and not only at execution time.
This makes it possible to read the data at the beginning of execution and convert it to an OpenCL-readable format. It also allows keeping track of data dependencies (not much of an issue in the compositor, but again very important for simulations). Note that there are already some places where data is linked into a tree (e.g. material and texture nodes), but these are not implemented as sockets and so don't allow efficient reuse of their input values by linking.

* I would love to see the memory manager you are planning for tiled compositing abstracted just a little further, so that it can be used for data other than image buffers too. In simulations of millions of particles the buffers can easily reach sizes comparable to those in compositing, so it would be a good idea to split them into parts and process these individually where possible. In images the pixels all have fixed locations, so you can easily define neighboring tiles for convolutions. That kind of calculation is usually not possible for "arbitrary" or unconnected data such as particles or mesh vertices, so an element/tile/part will depend either on just one of the input parts or on all of them. Still, a generic manager for loading parts into memory could avoid some duplicated work.

Cheers,
Lukas

_______________________________________________
Bf-committers mailing list
[email protected]
http://lists.blender.org/mailman/listinfo/bf-committers
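
P.S. To make the last point a bit more concrete, here is a rough sketch (in Python, purely for illustration; the names `Part` and `PartManager` are invented and not any existing Blender API) of what such a generic part manager could look like. The only difference between the image and the particle case would be the neighbor relation: tiles have spatial neighbors, unordered data does not, so a part there depends either on a single input part or on all of them.

```python
from dataclasses import dataclass

@dataclass
class Part:
    """One chunk of a large buffer: an image tile or a slice of particles."""
    index: int
    data: list  # placeholder for pixel or particle data

class PartManager:
    """Hypothetical generic manager: splits a buffer into fixed-size parts."""

    def __init__(self, buffer, part_size):
        self.parts = [
            Part(idx, buffer[off:off + part_size])
            for idx, off in enumerate(range(0, len(buffer), part_size))
        ]

    def neighbors(self, part, spatial=True):
        # Image tiles have fixed positions, so neighboring tiles are well
        # defined (needed for convolutions). For unordered data such as
        # particles there is no such relation: an element depends either
        # on one input part or, in the worst case, on all of them.
        if not spatial:
            return list(self.parts)
        return [p for p in self.parts if abs(p.index - part.index) == 1]

# Usage: the same manager handles a pixel row and a particle list alike.
mgr = PartManager(list(range(16)), part_size=4)
tile = mgr.parts[1]
print([p.index for p in mgr.neighbors(tile)])       # spatial neighbors
print(len(mgr.neighbors(tile, spatial=False)))      # unordered: all parts
```

The point of the abstraction is that the splitting/loading logic is shared, and only the dependency relation between parts differs per data type.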
