Hi Douglas,

Yes, "IS" - also "GUPS" is closely related (and easier
to code, aside from its formal "lookahead" constraints).
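
For reference, the heart of a GUPS-style kernel is just
random read-modify-write updates into a large table.  Here
is a minimal single-node sketch of the update loop - the
table size, update count, and the simple LCG are my own
stand-ins for the benchmark's official parameters and
stream generator, so treat it as illustrative only:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TABLE_BITS 20                  /* 2^20 entries; real runs use far more */
    #define TABLE_SIZE (1UL << TABLE_BITS)

    int main(void)
    {
        uint64_t *table = malloc(TABLE_SIZE * sizeof *table);
        uint64_t n_updates = 4 * TABLE_SIZE;

        for (uint64_t i = 0; i < TABLE_SIZE; i++)
            table[i] = i;

        /* Random read-modify-write updates: this access pattern
           is what makes GUPS latency-bound, not bandwidth-bound. */
        uint64_t x = 1;
        for (uint64_t i = 0; i < n_updates; i++) {
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            table[x & (TABLE_SIZE - 1)] ^= x;
        }

        printf("done: %llu updates\n", (unsigned long long)n_updates);
        free(table);
        return 0;
    }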

But I recommend crafting one's own, in order to have
control over the key distribution: teach a lesson in
load-balancing!
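
Something along these lines, say (MPI; the names and sizes
are my own illustrative choices, not from any benchmark):
squaring a uniform variate piles the keys up near zero, so
the low-numbered buckets - and the ranks that own them -
get most of the work:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define KEYS_PER_RANK 100000
    #define KEY_MAX       1000000L

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        srand(rank + 1);
        int *sendcounts = calloc(size, sizeof *sendcounts);
        int *recvcounts = calloc(size, sizeof *recvcounts);

        /* Skewed keys: u*u concentrates values near zero, so
           bucket 0 receives far more keys than bucket size-1. */
        for (int i = 0; i < KEYS_PER_RANK; i++) {
            double u = (double)rand() / RAND_MAX;
            long key = (long)(u * u * KEY_MAX);
            int bucket = (int)(key * size / KEY_MAX);
            if (bucket >= size) bucket = size - 1;
            sendcounts[bucket]++;
        }

        /* Exchange per-bucket counts; the imbalance in what
           each rank would receive is the lesson. */
        MPI_Alltoall(sendcounts, 1, MPI_INT,
                     recvcounts, 1, MPI_INT, MPI_COMM_WORLD);

        long local = 0;
        for (int i = 0; i < size; i++) local += recvcounts[i];
        printf("rank %d would receive %ld keys\n", rank, local);

        free(sendcounts);
        free(recvcounts);
        MPI_Finalize();
        return 0;
    }

With a uniform distribution every rank receives roughly
KEYS_PER_RANK keys; with the squared one, rank 0 ends up
with several times its share - exactly the imbalance the
assignment should expose.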

Regards,

Max
---



On Wed, 21 Aug 2013, Douglas Eadline wrote:

>
>> Sorts in general... Good idea.
>>
>> Yes, we'll do a distributed computing bubble sort.
>>
>> Interesting, though... There are probably simple algorithms that are
>> efficient in a single-processor environment, but become egregiously
>> inefficient when distributed.
>
> e.g., the NAS Parallel Benchmarks suite has an integer sort (IS) that
> is very latency-sensitive.
>
> For demo purposes, nothing beats parallel rendering.
> There used to be PVM and MPI POVRay packages
> that demonstrated faster completion times as more nodes were
> added.
>
> --
> Doug
>
>
>>
>> Jim
>>
>>
>>
>> On 8/20/13 12:11 PM, "Max R. Dechantsreiter" <[email protected]>
>> wrote:
>>
>>> Hi Jim,
>>>
>>> How about bucket sort?
>>>
>>> Make N as small as need be for the cluster's capability.
>>>
>>> Regards,
>>>
>>> Max
>>> ---
>>>
>>>
>>>
>>> On Tue, 20 Aug 2013 [email protected] wrote:
>>>
>>>> From: "Lux, Jim (337C)" <[email protected]>
>>>> Subject: [Beowulf] Good demo applications for small, slow cluster
>>>>
>>>> I'm looking for some simple demo applications for a small, very slow
>>>> cluster that would provide a good introduction to using message passing
>>>> to implement parallelism.
>>>>
>>>> The processors are quite limited in performance (maybe a few MFLOPS),
>>>> and they can be arranged in a variety of topologies (shared bus, rings,
>>>> hypercube) with 3 network interfaces on each node.  The
>>>> processor-to-processor link probably runs at about 1 Mbit/second, so
>>>> sending 1 kByte takes about 8 milliseconds.
>>>>
>>>>
>>>> So I'd like some computational problems that can be given as
>>>> assignments on this toy cluster, so that someone can thrash through
>>>> getting them to work and, in the course of things, come to understand
>>>> things like bus contention, multihop vs. single-hop paths, distributing
>>>> data and collecting results, etc.
>>>>
>>>> There are things like N-body gravity simulations, parallelized FFTs,
>>>> and so forth.  All of these would run faster in parallel than serially
>>>> on one node, and the performance should be strongly affected by the
>>>> interconnect topology.  They also have real-world uses (so, while toys,
>>>> they are representative of what people really do with clusters).
>>>>
>>>> Since sending data takes milliseconds, it seems that computational
>>>> chunks which also take milliseconds are of the right scale.  And, of
>>>> course, we could always slow down the communication to look at the
>>>> effect.
>>>>
>>>> There's no I/O on the nodes other than some LEDs, which could blink in
>>>> different colors to indicate what's going on in each node (e.g.
>>>> communicating, computing, waiting).
>>>>
>>>> Yes, this could all be done in simulation with virtual machines (and
>>>> probably more cheaply), but it's more visceral and tactile if you're
>>>> physically connecting and disconnecting cables between nodes, and
>>>> learning about error behaviors and such is what I'm getting at.
>>>>
>>>> Kind of like doing a biology dissection, physics lab, or chem lab for
>>>> real, as opposed to in simulation.  You want the experience of "oops, I
>>>> connected the cables in the wrong order."
>>>>
>>>> Jim Lux
>>>>
_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
