This is interesting.

One of the poster children for Google's map-reduce is rendering for Google
maps.  Each object in the world is keyed in the map phase according to the
tile it affects, and the reduce phase renders each tile given all of the
objects that affect it.  Very slick.  Very fast.
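To make the pattern concrete, here is a toy sketch (plain Python, not real
Hadoop) of that key-by-tile idea: the map step emits one (tile, object) pair
for every tile an object's bounding box touches, and the reduce step "renders"
each tile from just the objects that hit it.  The tile size, object format,
and the stand-in renderer are all invented for illustration.

```python
from collections import defaultdict

TILE = 256  # tile edge length in pixels (arbitrary for this sketch)

def map_object(obj):
    """Yield (tile_key, obj) for every tile the object's bbox overlaps."""
    x0, y0, x1, y1 = obj["bbox"]
    for ty in range(y0 // TILE, y1 // TILE + 1):
        for tx in range(x0 // TILE, x1 // TILE + 1):
            yield (tx, ty), obj

def reduce_tile(tile_key, objs):
    """Stand-in renderer: just report how many objects affect the tile."""
    return tile_key, len(list(objs))

def run(objects):
    """Simulate map, shuffle, and reduce for a list of scene objects."""
    buckets = defaultdict(list)  # the shuffle phase, simulated in memory
    for obj in objects:
        for key, o in map_object(obj):
            buckets[key].append(o)
    return {k: reduce_tile(k, v)[1] for k, v in sorted(buckets.items())}
```

In a real Hadoop job, map_object and reduce_tile would be the Mapper and
Reducer, and the framework's shuffle would replace the in-memory dict.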

The question with 3d rendering is whether you can limit effects in this way,
especially if you are using techniques like radiosity, where objects far
across the scene can affect the lighting on other objects.

It may be that multiple map-reduce passes could be used to do this, but I
don't know.

If you are only passing the entire scene to independent tile renderers, then
you really don't have much to do.  Just put your scene description into the
cluster with a high replication factor and run.


On 10/16/07 8:37 AM, "Robert Jessop" <[EMAIL PROTECTED]> wrote:

> Hi there. I did a search of the mailing list archives looking for
> something similar to this, but I didn't find anything so apologies if
> this has been discussed before.
> 
> I'm investigating using Hadoop for distributed rendering. The Mapper
> would define the tiles to be rendered and the nodes would render them
> using the scene data (which is, for the sake of argument, all wrapped up
> in one big binary file on the HDFS). The reducer would take the output
> tiles and stitch them together to form the final image. I have a system
> that does this already, but it doesn't have any of the advantages of a
> distributed file system; there are lots of I/O bottlenecks and
> communication overheads. All the code is currently in C++.
> 
> Does this sound like a good use for Hadoop?
> -Rob
> 