Yeah, there was a POVRay with parallelism that I've used, and a variety of "video wall" kinds of things.
What I was looking for was something that you could give as an assignment to a student: "go code this in parallel, using this MPI-lite style library, and measure the performance." Rendering is almost EP (embarrassingly parallel), and is more about shuffling files (or pieces thereof). That's why the sorting/searching kinds of problems are interesting.

I was also thinking of some sort of simple finite element code, where you have to tile it. Say you're doing a 1000x1000 grid and spreading it across 8 processors. The computation could be simply solving Laplace's equation for heat in an iterative fashion. It doesn't have to be multidimensional Navier-Stokes with compressibility and inhomogeneous media.

Jim Lux

-----Original Message-----
From: Douglas Eadline [mailto:[email protected]]
Sent: Wednesday, August 21, 2013 9:45 AM
To: Lux, Jim (337C)
Cc: Max R. Dechantsreiter; [email protected]
Subject: Re: [Beowulf] Good demo applications for small, slow cluster

> Sorts in general.. Good idea.
>
> Yes, we'll do a distributed computing bubble sort.
>
> Interesting, though.. There are probably simple algorithms which are
> efficient in a single-processor environment, but become egregiously
> inefficient when distributed.

e.g. the NAS parallel suite has an integer sort (IS) that is very latency sensitive.

For demo purposes, nothing beats parallel rendering. There used to be PVM and MPI POVRay packages that demonstrated faster completion times as more nodes were added.

--
Doug

> Jim
>
> On 8/20/13 12:11 PM, "Max R. Dechantsreiter"
> <[email protected]> wrote:
>
>> Hi Jim,
>>
>> How about bucket sort?
>>
>> Make N as small as need be for cluster capability.
>> Regards,
>>
>> Max
>> ---
>>
>> On Tue, 20 Aug 2013 [email protected] wrote:
>>
>>> Date: Tue, 20 Aug 2013 00:23:53 +0000
>>> From: "Lux, Jim (337C)" <[email protected]>
>>> Subject: [Beowulf] Good demo applications for small, slow cluster
>>> To: "[email protected]" <[email protected]>
>>> Message-ID: <[email protected]>
>>> Content-Type: text/plain; charset="us-ascii"
>>>
>>> I'm looking for some simple demo applications for a small, very slow
>>> cluster that would provide a good introduction to using message
>>> passing to implement parallelism.
>>>
>>> The processors are quite limited in performance (maybe a few MFLOPS),
>>> and they can be arranged in a variety of topologies (shared bus,
>>> rings, hypercube) with 3 network interfaces on each node. The
>>> processor-to-processor link probably runs at about 1 Mbit/second,
>>> so sending 1 kByte takes 8 milliseconds.
>>>
>>> So I'd like some computational problems that can be given as
>>> assignments on this toy cluster, so that someone can thrash through
>>> getting them to work and, in the course of things, understand
>>> things like bus contention, multihop vs. single-hop paths,
>>> distributing data and collecting results, etc.
>>>
>>> There are things like N-body gravity simulations, parallelized FFTs,
>>> and so forth. All of these would run faster in parallel than
>>> serially on one node, and the performance should be strongly affected
>>> by the interconnect topology. They also have real-world uses (so,
>>> while toys, they are representative of what people really do with
>>> clusters).
>>>
>>> Since sending data takes milliseconds, it seems that computational
>>> chunks which also take milliseconds are of the right scale. And, of
>>> course, we could always slow down the communication to look at the
>>> effect.
>>>
>>> There's no I/O on the nodes other than some LEDs, which could blink
>>> in different colors to indicate what's going on in that node (e.g.
>>> communicating, computing, waiting).
>>>
>>> Yes, this could all be done in simulation with virtual machines (and
>>> probably cheaper), but it's more visceral and tactile if you're
>>> physically connecting and disconnecting cables between nodes, and
>>> it's the learning about error behaviors and such that I'm getting at.
>>>
>>> Kind of like doing biology dissection, physics lab, or chem lab for
>>> real, as opposed to simulation. You want the experience of "oops, I
>>> connected the cables in the wrong order."
>>>
>>> Jim Lux

_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
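Jim's tiled-Laplace idea from the top of the thread can be sketched serially before it is parallelized. This is a minimal sketch only, assuming a toy 16x16 grid with the top edge held hot (the function names and sizes are illustrative, not from any real assignment); the comment in the iteration loop marks where, in the 8-processor version with the grid split into row strips, each node would exchange its boundary ("halo") rows with its neighbors over the MPI-lite library before every sweep.

```python
# Serial sketch of the Laplace/heat demo: Jacobi iteration on a small
# grid. In the tiled version, each node owns a strip of rows and must
# trade halo rows with its neighbors before every sweep; that exchange
# is what makes the demo sensitive to link speed and topology.

def jacobi_step(grid):
    """One Jacobi sweep: each interior cell becomes the mean of its
    four neighbors; edge cells are fixed (Dirichlet boundary)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j] +
                                grid[i][j-1] + grid[i][j+1])
    return new

def solve(n=16, hot=100.0, iters=500):
    # Top edge held at `hot`, the other edges at 0.
    grid = [[hot if i == 0 else 0.0 for _ in range(n)] for i in range(n)]
    for _ in range(iters):
        # Tiled version: exchange halo rows with neighbor nodes here.
        grid = jacobi_step(grid)
    return grid

if __name__ == "__main__":
    g = solve()
    print(g[1][8], g[8][8], g[14][8])  # hot side, middle, cold side
```

The work per sweep (a few multiply-adds per cell) and the halo traffic (one grid row per neighbor per sweep) are both in the millisecond range on the hardware described, which is exactly the granularity Jim asks for.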
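Max's bucket-sort suggestion can be sketched the same way, with the buckets standing in for nodes. This is a serial sketch under illustrative assumptions (known value range, integer keys, made-up names, no real MPI-lite calls); the scatter and gather loops are where the distribute/collect messages would go, and the per-bucket sorts are the independent parallel work.

```python
# Serial sketch of a distributed bucket sort: each "node" owns one
# slice of the value range. In the MPI-lite version the scatter loop
# becomes sends to the owning nodes and the gather becomes receives
# concatenated in node order.

def bucket_sort(values, num_nodes, max_value):
    # Scatter: route each value to the node owning its range slice.
    buckets = [[] for _ in range(num_nodes)]
    for v in values:
        node = min(v * num_nodes // max_value, num_nodes - 1)
        buckets[node].append(v)
    # Each node sorts its bucket locally (independent, parallel work).
    for b in buckets:
        b.sort()
    # Gather: buckets concatenated in node order are globally sorted.
    return [v for b in buckets for v in b]

if __name__ == "__main__":
    data = [829, 17, 400, 3, 999, 512, 268, 731]
    print(bucket_sort(data, num_nodes=4, max_value=1000))
```

Because the result is sorted by simple concatenation, the communication cost is one scatter plus one gather, so shrinking N (as Max suggests) keeps the message sizes matched to the slow links.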
