On Wed, Feb 13, 2002 at 09:59:00AM +0800, Benj wrote (wyy sez):
> Hello there Horatio,
>
hello there benj,
 
> On Wed, Feb 13, 2002 at 08:21:49AM +0800, Horatio B. Bogbindero wrote:
> > On Tue, Feb 12, 2002 at 11:50:03PM +0800, Benj wrote (wyy sez):
> > > Hi Xander
> > >
> > hello, like any cluster person will say. it depends on the application.
> > there is still currently no solve-all-cluster-problems solution to
> > PC clustering. 
> 
> But is it possible to converge the "3 types of clustering" for a
> certain application? Maybe this is what Xander is asking about.
> 
well, currently there is no clustering cure-all. or... i could be
wrong. hehehe.

> However that may be, I'm not a clustering person yet. I haven't
> set up a cluster. I'm still in the armchair stage, so to speak.
> But I'm interested in applying clustering to rendering images in
> particular. So I have my own questions about clustering.
> 
> We would be building a render farm where I'm working, and I'd like
> to find out if clustering can boost rendering performance. So my
> interest is in high performance clustering really. Beowulf might
> be enough for me.
> 
yes, rendering can be boosted by a cluster. but it also depends on
whether your software is supported. for rendering i still use the
POVRAY ray tracer. it is open source and has MPI and PVM patches
that allow it to run on beowulf-type clusters.

> However, there are commercial rendering software packages available
> that distributes the rendering tasks to multiple machines. My
> own question is would a cluster of, say, 10 machines render 10
> scenes faster than would 10 machines managed by a commercial
> render manager rendering the same number of scenes? The difference
> between the two, as you know, is that the cluster combines the cpu
> power of the 10 machines while the render manager distributes the
> 10 scenes evenly to the 10 render machines which renders separately.
> Both groups of machines have the same specs, btw.
> 
the cluster does what the render manager is doing, but with less
overhead. meaning... when a cluster renders an image it chops the
image up into itty-bitty pieces and farms them out. note that a
beowulf cluster does not combine CPUs or computing resources into
one big machine; it still farms out processes.

commercial rendering software also works in the same manner.  
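to make the chopping-up idea concrete, here is a toy python sketch of what the master side of a tile-based render farm does. this is not any real cluster API; `render_tile` and the tile size are made up for illustration, with local worker processes standing in for cluster nodes.

```python
# toy sketch of tile-based render farming: split the image into
# rectangular pieces and hand each piece to a worker.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 640, 480, 160   # illustrative numbers

def make_tiles():
    # top-left corners of every TILE x TILE piece of the image
    return [(x, y) for y in range(0, HEIGHT, TILE)
                   for x in range(0, WIDTH, TILE)]

def render_tile(tile):
    # stand-in for ray-tracing one piece of the image
    x, y = tile
    return (x, y, f"pixels for tile at ({x},{y})")

if __name__ == "__main__":
    tiles = make_tiles()
    with Pool(4) as pool:             # 4 workers standing in for 4 nodes
        pieces = pool.map(render_tile, tiles)
    print(f"rendered {len(pieces)} tiles")
```

the master only stitches the returned pieces back together, which is why the per-job overhead stays small.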

> The rendering speed of the two setups might come out to be the
> same for the 10 scenes taken together. But would it? This is what
> I'd like to test.
> 
the behavior of a beowulf cluster running some rendering software
and of a commercial render manager should scale similarly. ideally,
the scaling of both would be linear, but in reality it is sub-linear
due to other limitations like the interconnection network.
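the sub-linear scaling can be put in rough numbers with amdahl's law. the 5% serial fraction below (scene setup, network overhead) is just an illustrative guess, not a measured figure:

```python
# amdahl's law: speedup on n machines when a fraction s of the work
# (communication, scene setup) cannot be parallelized.
def speedup(n, s=0.05):           # s = 5% serial overhead, a guess
    return 1.0 / (s + (1.0 - s) / n)

for n in (1, 2, 10, 100):
    print(n, round(speedup(n), 2))
```

with that guess, 10 machines give roughly a 7x speedup rather than 10x, and adding more machines flattens out the curve further.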

> > MOSIX migrates system processes to other machines. however, if
> > the processes is I/O bound or uses shared memory (most of the
> > useful ones are. DB, httpd and others) they cannot be migrated.
> 
> Can MOSIX migrate rendering processes? I'd like to test this too.
> 
if the program is written to fork and forget, then it will migrate.
if the program uses message passing or shared memory, then it won't.
as i said, it depends on the problem.
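the fork-and-forget case can be sketched like this (plain python processes, nothing MOSIX-specific; `render_scene` is a made-up stand-in). each child works on its own scene and never talks to the others through shared memory or pipes, so a process migrator is free to move it to another node:

```python
# fork-and-forget: each worker gets its own independent scene and
# shares nothing with the others -- the kind of process a migrator
# like MOSIX can move to another machine freely.
from multiprocessing import Process

def render_scene(scene_id):
    # stand-in for rendering one independent scene
    return f"scene {scene_id} done"

if __name__ == "__main__":
    workers = [Process(target=render_scene, args=(i,)) for i in range(4)]
    for w in workers:
        w.start()          # fork and forget: no pipes, no shared state
    for w in workers:
        w.join()
    print("all scenes finished")
```

the moment the workers start exchanging messages or sharing a memory segment, they become pinned to their home node and migration no longer helps.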

> > Beowulf clustering is just a general purpose cluster that aims
> > to provide developers with a media to run standard clustering
> > libraries on PC clusters. such libraries are PVM, MPI, BSP,
> > ScalaPACK, PETSc and friends.
> 
> I wonder if these libraries are applicable to rendering
> applications without extensive programming. Probably not, no?
> 
the solution would be to find ready-made software that uses these
libraries. POVRAY does this.
 
--------------------------------------
William Emmanuel S. Yu
Ateneo Cervini-Eliazo Networks (ACENT)
email  :  wyy at admu dot edu dot ph
web    :  http://CNG.ateneo.net/wyu/
phone  :  63(2)4266001-4186
GPG    :  http://CNG.ateneo.net/wyu/wyy.pgp
 
War spares not the brave, but the cowardly.
                -- Anacreon
 
