I really think the fact that Sony built Katana to manage their large
scene complexity, rather than somehow pipelining Maya, speaks to a
fundamental lack of scalability inherent in the organization of
Maya's scene graph.  Look at a render layer in the Hypergraph: there
are nodes, but the nodes that exist have little regard for
streamlining execution.

I think for scalability beyond what ICE and lots of RAM offer, you
have to be thinking in terms of cloud computing, cluster computing,
and similar distributed models.  GPU computing imposes essentially
the same need for a mapReduce scheme, since you have to manage the
large data sets in a massively parallel way for them to be useful at
all.  Honestly, though, I think building scalability of this
magnitude is far more feasible for specialized tools designed to
solve a particular problem (like a massive fluid sim) than as a
modified framework for a general application.  The reason is that
unless you're talking about a specific tool (city generator, fluid
sim, etc.), actual scene content is human-bound, and therefore
already mapReduced by the production workflow.  Beyond that, the
scene graph's purpose is to be a top-down interface to the entire
scene.  By its nature it either fits everything in memory at once, or
has to load pieces on demand and/or use a proxy representation system
in order to be useful as a complete view of your scene.
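
To make the mapReduce point concrete, here's a toy sketch in plain
Python (the chunking, the per-chunk "work" and all the names are made
up for the example, nothing Maya- or Soft-specific): the map step
chews through particle chunks independently, the reduce step folds
the partial results back together.  Swap local processes for farm
machines and it's the same shape of problem.

# Toy map/reduce over a particle set too big to treat as one blob.
# Everything here is illustrative -- the per-chunk work is a stand-in.
from multiprocessing import Pool

def make_chunks(num_particles, chunk_size):
    # Split the particle index range into independent chunks (the map inputs).
    for start in range(0, num_particles, chunk_size):
        yield (start, min(start + chunk_size, num_particles))

def process_chunk(chunk):
    # Map step: handle one chunk in isolation (runs in its own process).
    start, end = chunk
    count = end - start            # stand-in for real work (advect, bound, etc.)
    return {"count": count, "max_id": end - 1}

def combine(a, b):
    # Reduce step: fold two partial results into one.
    return {"count": a["count"] + b["count"],
            "max_id": max(a["max_id"], b["max_id"])}

if __name__ == "__main__":
    chunks = list(make_chunks(10000000, 500000))
    with Pool() as pool:
        partials = pool.map(process_chunk, chunks)   # the massively parallel part
    total = partials[0]
    for p in partials[1:]:                           # the cheap serial part
        total = combine(total, p)
    print(total)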

Personally, I think for any of the big 3 apps, the scalability answer
actually has more to do with offloading than with changing the
architecture.  And the irony there is that offloading is a relatively
simple problem to solve compared to reworking node-based workflows.
Exocortex will probably streamline it across the three apps long
before Autodesk does.  I mean, they're already well on their way with
their Alembic plugin products.
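
By "offloading" I mean roughly this (a made-up per-frame cache format
just to show the shape of it; in practice you'd let Alembic or
similar do this for you rather than rolling your own): the DCC dumps
the heavy data to disk and keeps only a lightweight proxy in the
scene.

# Hypothetical point-cache writer -- the "offload" side of the pipe.
# The file format, paths and names are invented for illustration.
import struct

def write_point_cache(path, frame, positions):
    # Dump one frame of positions as flat binary (count header, then x y z floats).
    with open("%s.%04d.pc" % (path, frame), "wb") as f:
        f.write(struct.pack("<I", len(positions)))
        for x, y, z in positions:
            f.write(struct.pack("<3f", x, y, z))

if __name__ == "__main__":
    fake_points = [(i * 0.001, 0.0, 0.0) for i in range(1000)]
    write_point_cache("fluid_points", frame=101, positions=fake_points)
    # The scene itself keeps only a proxy: a path and a bounding box, not the points.
    proxy = {"cache": "fluid_points", "bbox": ((-1, -1, -1), (1, 1, 1))}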

Once scenes offload efficiently, you simply design your specialty
software to talk to the same offloading protocol, or to act as a
plug-in to software that already speaks that protocol.
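
Something like this, sticking with the invented format from above:
the specialty tool or render-time procedural just streams that same
cache back in, frame by frame, without ever touching the DCC's scene
graph.

# Hypothetical reader for the same cache -- the renderer/specialty-tool side.
import struct

def read_point_cache(path, frame):
    # Yield (x, y, z) tuples from one frame of the invented cache format.
    with open("%s.%04d.pc" % (path, frame), "rb") as f:
        (count,) = struct.unpack("<I", f.read(4))
        for _ in range(count):
            yield struct.unpack("<3f", f.read(12))

if __name__ == "__main__":
    # e.g. a render procedural pulling in the offloaded sim
    for point in read_point_cache("fluid_points", frame=101):
        pass  # hand the point to the renderer / specialty tool here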

The bleeding edge for scene scalability is found in render software,
not DCC apps.  And it's pretty much always been that way.

- Andy

On Thu, Sep 6, 2012 at 9:30 AM, Williams, Wayne
<wayne.willi...@xaviant.com> wrote:
> What I'd like to know is how the devs feel about the core of Maya in
> comparison to Soft now that they have access to the code (I'm guessing this
> is the case, please let me know if I'm wrong).  Are there any things you devs
> see that were done extremely well in Maya that Soft could have taken a cue
> from, or vice versa? I realize that you can't go into specifics, but I
> figured I'd put the general question out there.
> -wayne
>
>
> -----Original Message-----
> From: softimage-boun...@listproc.autodesk.com 
> [mailto:softimage-boun...@listproc.autodesk.com] On Behalf Of Stefan Kubicek
> Sent: Thursday, September 06, 2012 12:05 PM
> To: softimage@listproc.autodesk.com
> Subject: Re: ICE in Maya is an engineer's worst nightmare
>
> Fair enough, and agreed, but why would Maya be a better candidate to be
> developed in that direction than any other app?
>
>
>> On Thu, Sep 6, 2012 at 11:22 AM, Stefan Kubicek <s...@tidbit-images.com> 
>> wrote:
>>> Scalability is a good buzzword, but what does it actually mean?
>>
>> In the specific context of FX, scalability means very large numbers of
>> objects, billions of particles, huge fluid grids, etc. Stuff that may
>> not even fit in RAM at once.  Juhani's mention of Katana is a good
>> one; it doesn't just load everything in RAM at once and process it the
>> way traditional apps do, it creates a recipe that will run in the
>> renderer as needed.  For very large data sets, different tools and
>> approaches are required, not just adding more RAM to a single PC and
>> doing things the old way.  It's also difficult to reference, track,
>> and change all those assets if the system isn't designed for that.
>> Again, truck vs family car.
>>
>>> Does it mean you can "process" more "data" in the same amount of time
>>> compared to another app? And what kind of data? Procedural geometry?
>>> Rendered Images? Does it mean you can load more assets into the same
>>> amount of available RAM on a machine compared to another app?
>>>
>>> What would the automation of such processes need to look like to scale well?
>>> Scripted? C++? Node-based like ICE?
>>> Multithreading across the board? Or is it a question of architecture
>>> rather than which programming language was used to implement it (scripted 
>>> vs C++)?
>>> What does Maya offer in this regard, or where does it differ, to
>>> scale well/better than Soft or app X in your opinion?
>>>
>>> In my experience Softimage offers pretty much the same mechanisms to
>>> automate processes and handle scene complexity as Maya does, plus ICE
>>> on top, and I found it can fit a good chunk more data into the same
>>> amount of memory than Maya can, especially when it comes to working
>>> with textures and realtime shaders. That was up until two versions
>>> ago, maybe that has changed?
>>>
>>> If all that doesn't mean it scales well, what exactly does it mean then?
>>>
>>> Note: I'm not a Softimage fanboy or Maya hater (ok, just a little,
>>> but not enough to not use it if it offers something that helps me do
>>> my work), I just try to understand what scalability means by your
>>> (or anyone's) standards compared to how I understand it.
>>
>
>
> --
> -------------------------------------------
> Stefan Kubicek                   Co-founder
> -------------------------------------------
>            keyvis digital imagery
>           Wehrgasse 9 - Grüner Hof
>            1050 Vienna  Austria
>          Phone:    +43/699/12614231
> --- www.keyvis.at  ste...@keyvis.at ---
> --  This email and its attachments are
> --confidential and for the recipient only--
>
>
