Hi Phil - the problem is that the GPU is not as broad in capabilities as
the CPU. What this means is that many kinds of work have to be moved back
and forth from the CPU memory to the GPU memory in order to complete them -
they can't just stay on the GPU. The cost of this movement usually kills
any advantage you may have gained from using all the GPU cores, as the bus
speed is extremely slow by comparison. If you can do the calculations and
stay on the GPU then it can be very powerful, which is why GPU renderers
are so fast. The other issue is that writing code for GPUs is quite
time-consuming, and still hardware-specific (OpenCL for AMD/Intel, CUDA
for NVidia) - so generally you have to pick problems that merit the
investment of resources - e.g. hair sims.
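To put the bus-speed point in rough numbers, here's a back-of-envelope
sketch in Python. The 12 GB/s bus bandwidth and 20x kernel speedup are
illustrative assumptions of mine, not figures from any specific hardware:

```python
# Back-of-envelope check: is offloading a calculation to the GPU worth it,
# once you pay for moving the data over the bus both ways?
# Assumed numbers (illustrative only): ~12 GB/s effective bus bandwidth,
# and a kernel that runs 20x faster on the GPU than on the CPU.

def offload_worth_it(data_bytes, cpu_seconds,
                     gpu_speedup=20.0, bus_bandwidth=12e9):
    """True if GPU compute plus upload+download beats staying on the CPU."""
    transfer = 2 * data_bytes / bus_bandwidth   # copy to GPU, copy back
    gpu_total = cpu_seconds / gpu_speedup + transfer
    return gpu_total < cpu_seconds

# 1 GB of data, a 1-second CPU job: the 20x speedup wins despite transfers.
print(offload_worth_it(1e9, 1.0))
# Same 1 GB, but only a 10 ms CPU job: the transfer cost swamps the gain.
print(offload_worth_it(1e9, 0.01))
```

The second case is exactly the "movement kills the advantage" situation
above - the work is cheap relative to the cost of shipping its data.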

Now, it's not all bad news - the future is looking pretty awesome. There
are various types of shared memory architectures coming through (HSA from
AMD, CUDA6 from NVidia, Intel and AMD with SPIR). What this means is that
the hardware vendor starts taking care of the memory management - by
having the CPU and GPU share physical memory, and through smart management
of that memory. This is awesome.

*warning, imminent Fabric plug* - at GTC (NVidia's conference in a few
weeks) we will be showing our KL language executing on CUDA6 without making
any changes to the KL code. So we'll be showing a KL deformer running in
Maya via Fabric Splice, running at some crazy speed.

This means that some KL code is going to be running on the GPU without
requiring any special investment of effort - what's nice here is that if
you have a CUDA6 capable card, you're going to have this capability as soon
as those drivers become available (it's initially a software solution). We
should also be shipping the same support for AMD's HSA architecture around
the same time.

We're able to do this because of the design of Fabric, but I expect other
companies will have their own ways of taking advantage of this stuff. I
think it's going to be an awesome time - enabling a TD to author tools that
are GPU accelerated will be amazing. (I can see all the R&D people holding
their heads already).

I hope that made some kind of sense, it really is an amazing time for
hardware.

Paul


On 16 March 2014 18:51, phil harbath <[email protected]> wrote:

>   I'm going to ask something, and in my defense I am completely ignorant
> of most things technical. Given how fast Redshift is at GPU rendering (and
> to me it is just magic how much faster it is than plain old 6-core CPU
> rendering), why can't more parts of these programs which require a lot of
> calculations be moved to the GPU (I guess lack of RAM would be one thing)?
> I guess for me that would be modernization.
>   *From:* Andy Goehler <[email protected]>
> *Sent:* Sunday, March 16, 2014 6:14 PM
> *To:* [email protected]
> *Subject:* Re: ICE in Maya is it really possible?
>
> Amen. Finally some sensible words -- thank you Raffaele.
>
> Andy
>
>
>  On Mar 16, 2014, at 12:11, Raffaele Fragapane <
> [email protected]> wrote:
>
>  Honestly, I don't know why people keep mentioning things such as
> rewrites and the like. XSI wasn't rewritten to get ICE with its current
> considerable set of limitations (some of which aren't present in Splice
> on Maya), but everybody wants so badly for Maya to be called older, when
> in actuality it's clunkier, horribly fragmented, but arguably a lot
> younger and more modern than Soft at this point.
>
> People have a perception of modernity based on their interaction with a
> mix of look, slickness and user experience, and then assume the mythical
> "core" of the app to be equally modern or ancient based on how it feels,
> but the two things are seldom related.
>
>
>
