Howdy folks, 

Here's a tentative idea I'd like to get some feedback on. 

I like the idea of crowdsourcing big computations (e.g. Folding@Home 
<http://folding.stanford.edu/>). If you have a large problem to solve, you 
can distribute bits of work to a network of people. Your program can then 
run cheaply on the excess capacity of people's machines. Elm makes 
distributing 'untrusted' code simple, since you can guarantee the purity of 
the code being used. 

Here's how it might work. 
1. You have a large, parallelizable computation. 
2. You upload a package exposing a 'run' function that accepts and emits 
JSON, along with a list of JSON records to be used as inputs. 
3. The platform compiles your code. 
4. Users come to the site, and a scheduler presents them with a constant 
stream of work for each browser tab they open. 
5. You download a list of JSON results. 
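As a rough sketch, the 'run' function in step 2 might look something like this in Elm. The module name, the JSON field names, and the toy computation (summing a list of numbers) are all invented here just to illustrate the JSON-in, JSON-out contract:

```elm
module Worker exposing (run)

import Json.Decode as Decode
import Json.Encode as Encode

-- A pure Value -> Value function: the platform decodes one input
-- record, runs the computation, and encodes one result record.
run : Decode.Value -> Encode.Value
run input =
    case Decode.decodeValue (Decode.field "numbers" (Decode.list Decode.float)) input of
        Ok numbers ->
            -- Stand-in for the real work; here we just sum the inputs.
            Encode.object [ ( "sum", Encode.float (List.sum numbers) ) ]

        Err err ->
            -- Malformed inputs come back as an error record rather
            -- than crashing the worker tab.
            Encode.object [ ( "error", Encode.string (Decode.errorToString err) ) ]
```

Because 'run' is pure, the scheduler could hand the same input record to several browsers and cross-check the results, which seems like a natural way to defend against tampered clients.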

At this point, I'm really interested in hearing about potential use cases, 
dangers, or existing work in this area I should check out. 

Thanks!
Carlos

-- 
You received this message because you are subscribed to the Google Groups "Elm 
Discuss" group.
