You will find that when you share memory via threads::shared, the shared hashes are not copied to each thread. The docs are a little misleading on this point.

On Feb 3, 2015, at 11:54 AM, Alan Raetz <alanra...@gmail.com> wrote:
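A minimal sketch of the point above, assuming the standard threads and threads::shared modules: a hash declared `:shared` lives in one shared region rather than being duplicated into each thread, so updates made in a child thread are visible in the parent. The hash name and values here are illustrative. (One related gotcha worth knowing: calling `share()` on an already-populated container clears its existing contents, which may be part of why the docs read as misleading.)

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# A hash declared :shared lives in shared storage; threads see the
# same data rather than receiving a private copy of it.
my %config :shared;
$config{limit} = 10;

my $thr = threads->create(sub {
    # Reads the value set by the parent thread; the write below is
    # visible back in the parent after join.
    $config{seen_by_child} = $config{limit} * 2;
});
$thr->join;

print "$config{seen_by_child}\n";    # 20
```

Note that only the shared variables escape the per-thread copy; any non-shared data the thread touches is still cloned at `threads->create` time.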
Thanks for all the input. I was considering threads, but according to my reading of perlthrtut (http://search.cpan.org/~rjbs/perl-5.18.4/pod/perlthrtut.pod#Threads_And_Data): "When a new Perl thread is created, all the data associated with the current thread is copied to the new thread, and is subsequently private to that new thread."

So in my application, each thread would get the entire memory footprint copied. Although the data is "shared" in terms of application usage, in terms of physical memory I would quickly exhaust the machine. If the threads were just reading and writing files, this might not be a significant cost, but with the way this app is structured now, I think it would be.

One thought I had for reducing the per-request overhead was to "bundle" requests, so that a single request carries multiple tasks. I will look into some of these suggestions more, thanks again.
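One way the "bundled requests" idea above might look, sketched with the core Thread::Queue module (which clones enqueued structures into shared storage): each queued item is an array ref holding several tasks, so one dequeue round-trip amortizes the per-request overhead across the whole batch. The queue contents and variable names are illustrative, not from the original post.

```perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $queue = Thread::Queue->new;

my $worker = threads->create(sub {
    my $done = 0;
    # Each dequeued item is a whole batch of tasks, not a single task.
    while (defined(my $batch = $queue->dequeue)) {
        $done += scalar @$batch;      # "process" every task in the bundle
    }
    return $done;
});

$queue->enqueue([ 'task1', 'task2', 'task3' ]);  # one request, three tasks
$queue->enqueue([ 'task4', 'task5' ]);
$queue->enqueue(undef);                          # conventional shutdown signal

my $total = $worker->join;
print "$total\n";
```

The batch size becomes a tuning knob: larger bundles mean fewer queue operations per task, at the cost of coarser scheduling.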