On Thu, Nov 8, 2012 at 3:43 AM, Christian Freisleder <m...@buntepixel.eu> wrote:
> What happened to Zap Anderson (aka. Master Zap).
> didn't he leave Mental Images and went to Autodesk to help with the
> integration of MR?
> http://mentalraytips.blogspot.de/2011/09/this-is-100th-post.html
> maybe 1 year isn't enough time to do it in all applications and thats
> maybe why 3ds catches up with features, but if he is still there there
> might be hope.

While Master Zap has unquestionably done a huge amount of good in the MR community, and absolutely knows more about MR than I ever will, I think we ought to temper our enthusiasm for what he might be able to achieve. He is one person, and if the implementation of his work in AD products is any example, even having the benefit of his efforts can lead to mediocre results.

The history of unresolved bugs and poorly conceived workflow in the arch materials' implementation is more than annoying. The amount of person-hours and CPU-hours wasted by people who simply don't know what all the settings do, nor which ones should be used in what situation, must represent a substantial fraction of the CG budget of any company that has had to rely on them. The mere existence of Felix Geremus's much-improved shader -- and the fact that it had to be built, after years of complaints unaddressed by Autodesk, by a generous individual and distributed for free -- is pretty clear evidence of at least one missed opportunity.

Technology evolves. Software-based technology is supposed to be improvable, not static. The whole point of the current architecture of computers is to allow for changes to be made. For all of the base shaders in AD's products to remain unchanged after 3, 5 or 20 (!) years of steady and proven improvements in shader design is shameful. (And yes, I realize that you can't "improve," say, Lambert or Phong shading, as they are specific algorithms -- but you could, for example, replace the glossiness code with the better one that came along years later but is only available in the mia and mib shaders.)
How would you feel if, say, you had to use Office '95 to this day? What is even more shameful is the fact that Mental Images *has* been improving their code, but the improvements are poorly implemented -- or not implemented at all -- in AD's products.

Some might protest that AD (and Avid and Microsoft) have no obligation to provide continuous improvement, or to add more modern tools as time passes; that providing a platform for others to build on is enough. If that were the case, then these products should have been sold that way: as dev platforms and frameworks, not as cutting-edge applications. These packages have always been represented as cutting-edge *solutions*, and we pay dearly for support.

Look, I know it's easier to market a completely new tool than an improvement to an old one. But AD has an obligation to maintain the viability of the toolset they provide. What if your car had all modern amenities and safety equipment -- power locks, air bags, air conditioning, anti-lock brakes, traction control, satellite nav, a fancy audio system -- but *ONLY* the 1.0 version of each of those things, and *ONLY* the 1.0 version of the throttle (a knob or lever, not a gas pedal), the steering (a tiller, not a wheel), the tires (unvulcanized rubber with inner tubes), and an engine that required a mechanic to ride aboard? Would you even buy it? Would anyone even be able to drive it safely?

As Andy pointed out earlier, rendering is in a way the whole point of the exercise. Yet of all the tools in the toolset, it seems to be the one without any incremental improvements or bugfixes. We get whole new tools like FG, or IP, but any improvement to those things comes years late, if ever.

I'm not asking for new features. I want the features we've had for years to work properly. I want simple, clear workflows and clean UIs. I want default materials that use modern algorithms. I want UI defaults that are approximately "correct."
I want controls that have actual units (like, say, lux or candelas) when appropriate. I want sliders that don't have their meaningful range compressed into 1/50th of the width of the slider, or pushed totally off the scale. I want sliders that *HAVE* a scale, for crying out loud (look at Nuke -- some sliders are linear, some are log, some exponential -- and they all have tickmarks and numbers).

Yes, it's great to have all the controls available in one place, like the arch mat. But that doesn't change the fact that for 99.9% of real-world materials (which is what we spend most of our time trying to simulate), you only need *one* color to describe the material color. They don't have separate reflection, refraction, translucency, irradiance, and incandescence colors. If it's a dielectric, the reflections are *white*, period -- only their intensity varies. If it's transparent or translucent (I'm ignoring scattering here, because so do most of our shaders), the transmission has *one* color, not one for refraction, one for absorption, and one for "falloff." And of course, as the VRay guys remind us, EVERYTHING HAS FRESNEL. And energy is conserved, always.

Sorry for the rant, but my point is this. The mia_arch_mat PPG has 69 parameters (if you count colors as either 3 or 4 params each, you have a lot more), plus several more that have ports but no controls (like texture coords), and *MANY* of them need to be set to produce even a minimally useful render. It's enormously useful to have those params when you need them (if they actually work). But that is rare. Most materials that we make with the arch mat could be very well described by:

1. color (RGB)
2. luminosity, if it glows (scalar)
3. absorption distance (if transparent or translucent) (scalar)
4. IOR (complex, please) (1 or 2 scalars)
5. refractive diffusion (scalar)
6. reflective diffusion (glossiness) (scalar)
7. reflectivity (scalar)
8. bump/normals (3-vector)
9. UVW values (3-vector)
10. opacity/output alpha (scalar)

That's IT.
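To make the ten-parameter idea concrete, here's a rough sketch of that minimal material as a data structure. All the names here are hypothetical illustration, not any actual mia_material or Softimage API; the one derived quantity shown is the standard normal-incidence Fresnel reflectance, F0 = ((n - 1) / (n + 1))^2, which is how reflectivity should follow from IOR rather than being a disconnected knob:

```python
from dataclasses import dataclass

@dataclass
class MinimalMaterial:
    """Hypothetical ten-parameter physically-plausible material."""
    color: tuple = (0.5, 0.5, 0.5)      # 1. the *one* material color (RGB)
    luminosity: float = 0.0             # 2. emission, if it glows
    absorption_distance: float = 0.0    # 3. for transparent/translucent media
    ior: complex = complex(1.5, 0.0)    # 4. complex IOR (imaginary part for conductors)
    refractive_diffusion: float = 0.0   # 5. transmission glossiness
    reflective_diffusion: float = 0.0   # 6. reflection glossiness
    reflectivity: float = 1.0           # 7. overall reflection scale
    bump: tuple = (0.0, 0.0, 1.0)       # 8. bump/normal perturbation (3-vector)
    uvw: tuple = (0.0, 0.0, 0.0)        # 9. texture coordinates (3-vector)
    opacity: float = 1.0                # 10. output alpha

    def f0(self) -> float:
        """Facing-angle Fresnel reflectance derived from the real IOR:
        F0 = ((n - 1) / (n + 1)) ** 2.  Dielectric reflections are white;
        only this intensity varies."""
        n = self.ior.real
        return ((n - 1.0) / (n + 1.0)) ** 2
```

For glass (n ≈ 1.5) that F0 comes out to about 0.04, i.e. roughly 4% reflectance facing-on -- a number the shader can compute itself instead of asking the user to guess it.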
Most of the time you don't need sampling or optimization controls -- you want the samples set to "enough to not buzz," so obviously the more diffuse something is, the more samples need to be taken. And when *wouldn't* you want your shader to be "optimized," as long as it doesn't add nasty artifacts?

This looks like the basic Lambert or Phong controls, doesn't it? But those legacy shaders don't actually interconnect most params internally, so changing IOR doesn't affect reflectivity, for example, and energy isn't automatically conserved. So they're pretty much useless for modern rendering of physically plausible materials.

Now, I can build (and have built) presets in the render tree that take care of much of this. But that's messy, frightens the kids, and is ridiculously slow to load if I make it a compound and give it a clean PPG. Not to mention, not very optimized. We need compiled code, or at least a UI that doesn't bog down traversing a graph with lots of interdependencies. Mental Mill may have been a step in the right direction, but it was a small step, and it's deprecated now anyway. So is MetaSL.

So. Please, Autodesk & nVidia: fix this. I don't care how we got here or who was responsible. I don't care if people messed up or not. I just want to be able to use the tools in which I've invested many tens of thousands of dollars and virtually all of my waking hours for 15 years. I made those investments, in Softimage and Mental Ray, deliberately, in every sense of the word, because I believed they were the best tools available. They may or may not be the best tools available now, but they are my tools, and I need you guys to step up and make them work properly.

That probably means you need to work together rather than independently. I'm glad to hear vague talk that that is happening. Here's hoping you can all follow through on this, in time for all of us out here.

etm
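P.S. To make the "energy isn't automatically conserved" point concrete, here's a toy sketch of how a modern shader couples IOR, Fresnel, and diffuse weight -- the coupling the legacy Lambert/Phong shaders lack. The function names are mine, and Schlick's approximation stands in for the full Fresnel equations:

```python
def schlick_fresnel(cos_theta: float, ior: float = 1.5) -> float:
    """Schlick's approximation to dielectric Fresnel reflectance.
    cos_theta is the cosine of the angle between view and normal."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2   # reflectivity follows from IOR
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def plausible_weights(cos_theta: float, ior: float = 1.5):
    """Energy-conserving split: whatever the surface reflects is removed
    from the diffuse lobe, so reflection + diffuse can never exceed 1."""
    kr = schlick_fresnel(cos_theta, ior)
    kd = 1.0 - kr   # diffuse only gets the leftover energy
    return kr, kd

# A legacy Phong-style shader instead exposes independent diffuse and
# specular weights, so nothing stops kr + kd from exceeding 1 -- a
# material that emits more light than it receives.
```

Facing-on (cos_theta = 1), glass reflects about 4% and the rest goes diffuse; at grazing angles (cos_theta near 0) reflection climbs toward 100% and diffuse correspondingly vanishes -- all driven by a single IOR value, no extra sliders.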