On Fri, Feb 15, 2002 at 01:41:58AM +0800, Benj wrote (wyy sez):
> On Fri, Feb 15, 2002 at 12:22:06AM +0800, Horatio B. Bogbindero wrote:
> > > 
> > > If 1 scene is passed to the cluster of 10 machines, won't the whole
> > > cluster work on it with 10 machines simultaneously rendering the
> > > single scene?
> > >
> > that 1 scene is further divided into smaller scenes that are then farmed
> > out to the cluster. cluster computing for rendering typically
> > uses a divide and conquer strategy for parallel computing.
> >  
> > > I thought a cluster can do this, acting as a supercomputer from the
> > > combined cpu power of the separate machines. Hmmm, maybe I hit on a
> > > common misconception.
> > > 
> > it may just be a semantic problem. 
> 
> You're right about the semantic problem. It's naive of me to
> think that a cluster is equivalent to a supercomputer since
> a cluster is limited only to processes that can be sub-divided
> and can be distributed.
>
even supercomputers come in different classes. as i mentioned before,
the parallel computing world is a big mess. most supercomputers still
use message passing, so they still divide the scene into little
itty bitty pieces. therefore, most supercomputers are also like clusters.
maybe you are referring to big iron supercomputers, which we call
shared memory supercomputers.
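the divide-and-conquer farming described above can be sketched in a few
lines of python using its multiprocessing module (the piece count, worker
count, and the render_piece stand-in are all invented for illustration):

```python
# toy scene farm in the message-passing spirit described above:
# chop the job into pieces, farm them to workers, gather the results

from multiprocessing import Pool

def render_piece(piece):
    # stand-in for real rendering work on one itty bitty piece
    return sum(range(piece * 1000, (piece + 1) * 1000))

if __name__ == "__main__":
    pieces = list(range(8))          # the scene, chopped into 8 pieces
    with Pool(processes=4) as pool:  # 4 "nodes" working in parallel
        results = pool.map(render_piece, pieces)
    total = sum(results)             # gather step: combine the pieces
    print(total == sum(range(8000))) # True
```

the same scatter/compute/gather shape shows up whether the transport is
a beowulf cluster's network or a shared memory machine's bus.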

however, they differ in a thing called granularity. granularity is a
term that refers to how well the problem can be chopped up. coarse
grain programs are good for beowulf clusters and distributed computing
because each subdivision of the main job is large. fine grain programs
are good for shared memory supercomputers because these machines can
handle lots of smaller subdivisions of the main job well. either way,
the common thing is that the problem is still subdivided into little
pieces. 
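granularity is just the chunk size you chop the job into. here is a toy
sketch (the 1920-unit job and the two chunk sizes are invented numbers):

```python
# same job, two grain sizes: the only difference is the chunk size

def chop(total_work, chunk_size):
    """split a job of total_work units into (start, end) chunks."""
    return [(start, min(start + chunk_size, total_work))
            for start in range(0, total_work, chunk_size)]

coarse = chop(1920, 480)  # 4 big chunks -> good for a beowulf cluster
fine = chop(1920, 16)     # 120 small chunks -> good for shared memory

print(len(coarse))  # 4
print(len(fine))    # 120
```

the coarse version pays the communication overhead only 4 times; the
fine version pays it 120 times, which only a tightly coupled machine
can afford.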

an example of really fine grain computing is symmetric multiprocessing. 
here the job is divided at the level of hardware (with hints from the 
software, of course). an example of really coarse grain computing is
distributed computing like seti@home and distributed.net.

however, all of these types are still considered parallel computing.
hence, these are all supercomputers. as you said, just a play on words.
 
> But how is it that a cluster works with less overhead than
> a commercial render manager, as you mentioned earlier?
> 

this has to do with granularity again. cluster rendering software (POVRAY)
divides each frame into smaller subframes, while render managers
divide a movie into smaller frames. this is an example of granularity:
in this case, cluster rendering software is more fine grain than
render managers.

the main difference is that the render manager distributes larger job
sizes because it has to deal with more communications overhead. however,
clusters are better coupled (they are specially constructed to limit
communications and setup overhead) and thus can deal with lots of smaller
sized jobs.

movie -> frame -> subframe
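the movie -> frame -> subframe hierarchy can be sketched as two toy job
lists (the 100 frames and 16 tiles per frame are invented numbers):

```python
# two grain sizes over the same movie, as described above

def render_manager_jobs(n_frames):
    # render manager: one job per frame (coarser grain, less chatter)
    return [("frame", f) for f in range(n_frames)]

def cluster_jobs(n_frames, tiles_per_frame):
    # cluster renderer: one job per subframe tile (finer grain)
    return [("frame", f, "tile", t)
            for f in range(n_frames)
            for t in range(tiles_per_frame)]

print(len(render_manager_jobs(100)))  # 100 coarse jobs
print(len(cluster_jobs(100, 16)))     # 1600 finer jobs
```

with 16x as many jobs, the cluster renderer only wins if per-job setup
and communication cost stays tiny, which is exactly what a well coupled
cluster buys you.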
 
however, render managers tend to perform better for larger projects.
as i said before, it really depends on the application.



--------------------------------------
William Emmanuel S. Yu
Ateneo Cervini-Eliazo Networks (ACENT)
email  :  wyy at admu dot edu dot ph
web    :  http://CNG.ateneo.net/wyu/
phone  :  63(2)4266001-4186
GPG    :  http://CNG.ateneo.net/wyu/wyy.pgp
 
War spares not the brave, but the cowardly.
                -- Anacreon
 
