XSI's ICE is a customizable node-based system that can be compared to Houdini's VOP nodes, although ICE is easier to use than VOPs (which can at times be more complicated).
Houdini uses a more advanced network of nodes that is integrated throughout the application, making it more difficult to learn and use but ultimately more powerful. Look, for example, at what this person has done (it shows the power of Houdini quite well): http://www.youtube.com/watch?v=jOLhnwllpgs . Houdini is extremely powerful for things like visual effects and certain types of modelling (and animation). Its power, which comes from its advanced node-based procedural architecture, is still largely unexplored. Its simulation abilities are unmatched (only XSI's ICE comes close), and the real-time feedback the user gets while making changes is only possible through an advanced node-based procedural method; it cannot be achieved through traditional non-procedural methods, nor easily through unintuitive Python scripts.

Softimage's objective was to create a node-based environment that would be easier to use than Houdini and, for some tasks, almost as powerful. That is why ICE and Houdini are not the same thing; they should be seen as having different strengths and thus as complementing each other.

Textual programming arose from the need to optimize code as tightly as possible per CPU clock, without regard for readability, because CPUs at the time were severely constrained in performance - Python and C look the same and are mostly the same in terms of intuitiveness. Visual programming is inherently more intuitive: charts, diagrams, and graphs combined with some readable text are preferable to raw text. This is the basis of visual programming, the next step that presents a visual and interactive environment for the user - something that textual programming could never achieve, and why the common user sees it as less intuitive.
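The real-time feedback of a node-based procedural system comes down to dependency tracking: edit one parameter and only the affected downstream nodes recompute. Here is a minimal sketch of that idea in Python (all class and function names are hypothetical illustrations, not Blender, Houdini, or ICE code):

```python
# Minimal dependency-graph sketch: each node caches its result, and editing
# an upstream parameter marks downstream nodes dirty so only the affected
# part of the graph is re-evaluated on the next cook.

class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn              # the operation this node performs
        self.inputs = inputs      # upstream nodes feeding this one
        self.outputs = []         # downstream nodes, for dirty propagation
        self.cache = None
        self.dirty = True
        for inp in inputs:
            inp.outputs.append(self)

    def mark_dirty(self):
        # Invalidate this node and everything downstream of it.
        if not self.dirty:
            self.dirty = True
            for out in self.outputs:
                out.mark_dirty()

    def evaluate(self):
        # Recompute only when dirty; otherwise reuse the cached value.
        if self.dirty:
            self.cache = self.fn(*[inp.evaluate() for inp in self.inputs])
            self.dirty = False
        return self.cache


class Value(Node):
    """A leaf node holding an editable parameter."""
    def __init__(self, value):
        super().__init__(lambda: value)

    def set(self, value):
        self.fn = lambda: value
        self.dirty = True         # re-dirty self...
        for out in self.outputs:  # ...and propagate downstream
            out.mark_dirty()


# Tiny graph: result = (a + b) * scale
a, b, scale = Value(2), Value(3), Value(10)
added = Node(lambda x, y: x + y, a, b)
result = Node(lambda s, m: s * m, added, scale)

print(result.evaluate())  # 50
a.set(7)                  # edit one parameter; the graph updates incrementally
print(result.evaluate())  # 100
```

A non-procedural workflow has no such graph to re-cook: once the result is baked, changing an earlier decision means redoing the later steps by hand.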
Houdini has been said many times to save countless hours of programming through its well thought-out nodes instead of textual programming (taking into account the difference in time needed to learn visual versus textual programming, and the slower-than-ideal use of textual procedures given the lesser intuitiveness and more complicated nature, for the common user, of the textual approach). Studios use Houdini because it is more intuitive, and because in being more intuitive it is faster to set up and use.

However, Houdini is not exempt from criticism regarding intuitiveness. These are two good examples that I have found:

"POPS, this for me needs a re-write, I'd actually like to see more of a VOPS style system with multiple inputs/outputs on nodes, I guess after using VOPS and softimage ICE, this seems like a friendlier way of constructing a complex particle system."

"Some attention to VOPs perhaps? I really believe that VOPs are very powerful, but could we get some more functionalities as nodes? I really liked XSI ICE's user friendliness.. Also I kind of liked the fact that we could create and delete points from within the ICE network.. Something like this in Houdini could be very helpful (AFAIK using VEX/VOPs we can't create or delete data inside VOP Networks.. Please correct me if I'm wrong)"

ICE is the most intuitive and, for some tasks, the most useful; Houdini is used when the user needs more control and power. Blender needs to excel at creating 3D animation as easily, as quickly, and as well as possible by combining the non-procedural and procedural workflows as well as possible. This is therefore the next step that Blender needs to take after 2.6. 2.6 will be released two and a half years after it was announced; 2.8 could take a similar amount of time. 2.6 was focused on restructuring and redesigning Blender and on making it competitive against all other packages except Houdini.
For 2.8, Blender needs to become competitive against Houdini by integrating a Houdini-like, all-permeating, advanced node-based procedural system (which allows for, but is not limited to, modelling, animation, rigging, and more), and by also integrating a system like ICE. When this happens, Blender will have finally reached completion from a theoretical and fundamental point of view. What it will then be refining is the integration of the non-procedural and procedural workflows, so that 3D animation may be created as easily, quickly, and well as possible.

Blender also needs to be multi-threaded and fully supportive of OpenCL programming - this will speed up its simulation abilities immensely. But the most important thing, as always, is to first implement the main functionality, and only then think about how to optimize and speed things up. This was posted in one thread that I was reading:

"The next release is supposed to be a major overhaul of Houdini's underpinnings, making everything more threadsafe etc. In a fantasy world, Houdini would have a core scheduler that can schedule across CPU and GPU cores, and have OPs tell the scheduler what sort of core they need. A lot of OPs execute a small piece of code a lot of times - perfect candidates for GPU execution (unified shader model GPUs have hundreds of "cores"). Also, VEX should be extended to make it possible to implement most SOPs in it. And VEX should also be comparatively trivial to port to GPUs using OpenCL and similar APIs. All of this would not only speed up Houdini, but make it possibly the fastest 3D environment available, essentially jumping ahead of the rest of the pack, which sidefx is known for traditionally doing. Also, there is no reason why POPs aren't multithreaded. They should be "embarrassingly parallel" in most cases. Mantra shaders are a different beast altogether, as production-level raytracing isn't feasible on GPUs yet - main memory bandwidth being a major issue among others.
In terms of features, Mantra is missing irradiance caching, raytraced SSS (for PBR), a mechanism for defining custom BSDFs, and tone-mapping. IPR should be able to cache rays in a fine-grained manner for much quicker relighting, and also support multi-host, essentially bringing it closer to ILM's Lightspeed or Pixar's Lpics. Support for spherical HDRs is one little thing that would go a long way for many artists. Shader-wise, every exposed input parameter on a shader should be able to have an input on the shader node that allows you to connect an exported parameter from another shader right into it, without having to wade through code or VOP orgies just to add noise as a texture. Basically a mechanism comparable to co-shaders. Also, finer-grained control over PBR bounces per object/material. Incorporate Mario's IES light into the default ASAD light. And one ubershader for everyone who just wants to "get there" 90% of the time without reinventing the wheel.

DOPs... It's possible to build simpler fluid solvers if you delve into microsolvers and construct your own, and they will be faster than the default ones, which tend to support EVERYTHING, which also slows them down. How about a few "simpler" fluid solvers out of the box? Also, one of Houdini's big selling points as a simulation environment is how integrated everything is, but now we have a lot of special cases where certain collisions aren't propagated from one type of object to a certain other - the feeling of confidence that you just set something up, press play, and it will somehow react as expected is gone to a certain extent. And how about having HQueue run in "realtime"? You press play and your sim is calculated across the network. Somewhat like multihost Mantra."

-nautilus

_______________________________________________
Bf-committers mailing list
[email protected]
http://lists.blender.org/mailman/listinfo/bf-committers
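As an aside, the quoted point that POPs "should be embarrassingly parallel" is easy to illustrate: when each particle's update depends only on its own state, the same small kernel can be mapped over all particles across CPU cores (or, by the same argument, GPU threads via OpenCL) with no synchronization. Below is a minimal standard-library Python sketch; the forward-Euler gravity step is a simplified stand-in, not Houdini's POP code, and all names are hypothetical:

```python
# Data-parallel particle update: one small kernel applied independently
# to every particle, split across worker processes.

from concurrent.futures import ProcessPoolExecutor

DT = 0.04          # timestep in seconds
GRAVITY = -9.81    # m/s^2 along y

def step_particle(p):
    # p = (y_position, y_velocity); forward Euler, no inter-particle forces,
    # so each particle can be stepped with no knowledge of the others.
    y, vy = p
    vy += GRAVITY * DT
    y += vy * DT
    return (y, vy)

def simulate(particles, steps, workers=4):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(steps):
            # map() applies the same kernel to every particle independently --
            # the classic embarrassingly-parallel pattern.
            particles = list(pool.map(step_particle, particles, chunksize=256))
    return particles

if __name__ == "__main__":
    start = [(10.0, 0.0)] * 1000   # 1000 particles dropped from y = 10
    end = simulate(start, steps=25)
    print(end[0])
```

The moment particles interact (collisions, flocking), the independence assumption breaks and the update needs neighbour queries or synchronization, which is exactly why those cases are harder to multithread.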
