Hello Everyone,

I've been looking at doing a small-scale test of an "embarrassingly
parallel" algorithm as a proof of concept.  I'd like to use Hadoop and
MapReduce so that I can scale it out once I've proven the concept.
Because I have limited resources, I'd like to run a community, points-based
reward system like SETI@home or Folding@home, but that means every
participant would either need to install Hadoop themselves or I would need
to find a way to package it for them.

Everything I've found so far points towards the "download and install
Hadoop, then run your service on top of it" model.

The problem I see with this is what I like to call Steve's Law:
"The number of people running your application is inversely proportional to
the square of the number of steps required to install and run it."

Thus I would like to keep this as simple as possible and just package
everything with the binary.
Since I'm writing this in Java, it seems I should be able to just import
and package a jar file, but everything I've found says you need to set up
a running daemon as well.
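To be concrete about the "as simple as possible" goal: for a purely local
run, the JDK alone already gives me the one-jar experience I'm after. Here
is a rough sketch (ParallelPoc and work() are placeholders for my actual
algorithm, not anything Hadoop-specific) of the kind of fan-out I'd
eventually like MapReduce to handle:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPoc {

    // Stand-in for the real per-unit computation; here it just squares n.
    static long work(long n) {
        return n * n;
    }

    // Fan the independent work units out across a local thread pool
    // and combine (sum) the results -- the "embarrassingly parallel" shape.
    static long runAll(int units) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Long>> futures = new ArrayList<>();
        for (long i = 0; i < units; i++) {
            final long unit = i;
            futures.add(pool.submit(() -> work(unit)));
        }
        long sum = 0;
        for (Future<Long> f : futures) {
            sum += f.get();
        }
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(1000));
    }
}
```

What I'm hoping is possible is the Hadoop equivalent of the above, shipped
the same way: one jar a participant runs, with no separately installed
daemon.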

Any thoughts?
Thanks!

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
