This may be a stupid question, but how about distributing the workload
to multiple node processes?

This sidesteps the per-process memory limit and also happens to leverage
all cores. IPC is straightforward with node
(http://nodejs.org/api/child_process.html#child_process_child_send_message_sendhandle).
And should the amount of data ever grow to surpass the capabilities of
one machine, you can swap out the IPC for some network protocol and
run the stuff on multiple machines.

Without knowing more about the problem, I would suggest aiming for a
map-reduce-ish flow of information between the separate processes. Maybe
this is a good starting point: http://www.mapjs.org/

Regards,
Juraj

-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en
