> Can you give an example of how your overall thread application
> architecture is designed, and how you distribute resources for
> processing and aggregate results?

To validate the results of my analysis, I must re-run the same analysis
against bogus data and confirm that the statistical significance of my
real results falls above a certain percentile of what random chance
produces. So I have:

- 1 thread running an analysis on my REAL data
- 100-1000 threads running analysis on randomly generated data

All the analyses are identical. The closest description in perlthrtut
that I saw was 'work crew'.
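
For illustration, here is a minimal sketch of that work-crew layout
using the threads API (forks.pm exposes the same interface). The
generate_random_data() and run_analysis() subs are stand-ins for my
real code, not the actual analysis:

    use strict;
    use warnings;
    use threads;

    # Stand-ins for the real code.
    sub generate_random_data { return [ map { rand() } 1 .. 1_000 ] }
    sub run_analysis { my ($data) = @_; my $sum = 0; $sum += $_ for @$data; return $sum }

    my $real_data = [ 1 .. 1_000 ];    # placeholder for the real data set

    # One thread for the real data...
    my $real_thread = threads->create( \&run_analysis, $real_data );

    # ...and a crew of threads for the randomly generated data.
    my @crew = map {
        threads->create( sub { run_analysis( generate_random_data() ) } );
    } 1 .. 100;

    # Aggregate: join everything and see where the real score ranks.
    my $real_score    = $real_thread->join();
    my @random_scores = map { $_->join() } @crew;
    my $beaten        = grep { $_ < $real_score } @random_scores;
    printf "real score beats %d of %d random runs\n",
        $beaten, scalar @random_scores;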

As for sharing resources, the data that I am analysing is 3-4 GB (more
than half of what I have available in RAM), so I wasn't too keen to copy
all of it into 101-1001 processes, although forks.pm shows promise. It
sounds a lot easier than convincing my sysadmin to recompile Perl.
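
To make the memory point concrete, here is a rough sketch of the
fork-based approach, with hypothetical load_data() and run_analysis()
stubs standing in for my real code: load the data once in the parent,
then fork, and the workers share the parent's pages copy-on-write for
as long as the analysis only reads them:

    use strict;
    use warnings;

    # Hypothetical stand-ins for loading the 3-4 GB data set and analysing it.
    sub load_data    { return [ 1 .. 100_000 ] }
    sub run_analysis { my ($data) = @_; my $sum = 0; $sum += $_ for @$data; return $sum }

    # Load once in the parent, then fork: the children share the parent's
    # memory pages copy-on-write, so a read-only analysis does not duplicate
    # the data in every worker.
    my $big_data = load_data();

    my @pids;
    for my $worker ( 1 .. 10 ) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ( $pid == 0 ) {                          # child
            my $score = run_analysis($big_data);    # read-only: pages stay shared
            # in the real program, report $score back via a pipe, file, or DB
            exit 0;
        }
        push @pids, $pid;                           # parent keeps track of workers
    }
    waitpid $_, 0 for @pids;

As I understand it, forks.pm wraps this same pattern in the threads
API, so the join-based aggregation from the earlier sketch should carry
over unchanged.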

Another thing to note is that I do not use locking because this is an
analysis and I do not write to my data.
