Hey Christopher,

I think I can give my 2 cents on Maxwell, as I was on its beta a few
years back. This is from what was going on then; I cannot say anything
about the current state of the engine, as I have not touched it since.
Purely from a rendering standpoint, Maxwell felt slow, first and
foremost because it is an unbiased engine and does not cheat its
solution. That means that in order to get rid of the sampling noise,
it needs to do a ton of passes to reach an accurate convergence. What
that meant for me, as an individual, was that animation was out of the
question unless I was willing to live with a grainy image or wait a
long time for the frames to render.
Most people these days rely on farms to render with Maxwell in an
animation environment (rendernet.se comes to mind).
That was the downside, and I hear Arnold is quite similar in this
respect (good quality takes more samples, which in turn take longer to
compute). This is because neither engine precomputes or caches
anything; brute force is the word here. V-Ray, even though it does
brute force well, has a ton of other options to "cheat" its way
through, resulting in faster render times, which in turn,
unfortunately, require greater knowledge from the user.
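
To put a number on why "more samples" hurts so much: Monte Carlo noise
falls off roughly with the square root of the sample count, so halving
the grain costs four times the render time. A tiny Python sketch of
that scaling (a toy integral standing in for one pixel's light
estimate, nothing Maxwell-specific):

    import math
    import random

    def estimate(n_samples):
        # Monte Carlo estimate of a simple integral (the mean of
        # sin(pi * x) over [0, 1]) as a stand-in for one pixel's
        # light estimate in a brute-force renderer.
        total = sum(math.sin(random.random() * math.pi)
                    for _ in range(n_samples))
        return total / n_samples

    # The true value is 2/pi. The average error (the "grain") drops
    # roughly as 1/sqrt(N): 4x the samples buys only half the noise,
    # which is why unbiased engines feel slow on clean frames.
    for n in (100, 400, 1600, 6400):
        errs = [abs(estimate(n) - 2 / math.pi) for _ in range(200)]
        print(n, sum(errs) / len(errs))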

On the upside, the shading system was nice: it had the usual
ubershader approach, and there were tons of shaders available in the
community. It did not use light sources; instead, a special shader
turned objects into emitters, which made the shadows and everything
else look very realistic. Its preview system was way ahead of anything
else at the time in terms of showing the final look of the image in
the first pass, so you could get a very good idea of whether you
needed to adjust things before waiting for two hours. This has since
evolved into the Maxwell "Fire" engine, but most renderers today give
you something similar (Modo's preview or V-Ray's light cache come to
mind).

By far the most useful feature of the engine for me was its MXI image
format (similar to a raw file), which stored the lighting information
from all the light sources. That meant that if you had screwed up your
exposure, lights, etc., you could fix everything afterwards, and I
don't mean a brightness/contrast fix: you could dial lights in and
out, change their intensity, and so on, and everything would update in
real time in its "image editor". I hear they now have a Nuke plugin
for this.
It worked for sequences of frames as well, and was a lifesaver.
I remember one time I had an interior to render for a client, and it
had around 50 lights in total. The guy wanted a dozen variations,
changing colors and turning lights on and off. Had it not been for
this feature, I would have spent a week rendering the project. With
it, I just waited a couple of hours for the one render, then did the
dozen variations from it in half an hour.
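
For anyone wondering why that kind of after-the-fact relighting even
works: light transport is linear in emitter intensity, so the final
frame is just the sum of per-light contributions, and re-weighting
those stored buffers reproduces any new light setup without
re-rendering. A rough numpy sketch of the principle (my own
illustration, not Maxwell's actual format or API):

    import numpy as np

    # Hypothetical per-light render buffers (H x W x RGB), one per
    # light source, the kind of data a relightable format like MXI
    # conceptually stores.
    h, w = 4, 4
    light_passes = {
        "ceiling": np.random.rand(h, w, 3),
        "lamp":    np.random.rand(h, w, 3),
        "window":  np.random.rand(h, w, 3),
    }

    def relight(passes, weights):
        # Light transport is linear in emitter intensity, so a new
        # lighting setup is a weighted sum of the per-light buffers:
        # weight 0.0 turns a light off, 2.0 doubles it, and an RGB
        # triple would retint it.
        out = np.zeros((h, w, 3))
        for name, img in passes.items():
            out += weights.get(name, 1.0) * img
        return out

    # One render, many variations: kill the lamp, boost the window.
    variation = relight(light_passes, {"lamp": 0.0, "window": 1.5})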

The final thing I'd like to point out is that its XSI integration was
neither that good nor that stable back then. Maybe things have changed
now, but last I looked, it was pretty much the same workflow.

Cheers,
O
