Michael Schnell wrote:
For clusters there is already a de facto standard: MPI. It works with
FPC.

AFAIK OpenMP and MPI work well together and are separate.
Right now my concerns are not about how and what features should be implemented (in the libraries), but only about how they are presented by _language_ extensions (and about the interface the libraries offer to the user).

Here, as broad a range of ways to do parallel processing as possible should be allowed; the details (i.e. whether MPI, OpenMP or something else is used) should be hidden in the library implementation, in a way that is as transparent as possible to the user.

I think that modifying the language to incorporate MPI would be extremely difficult, and would be so near the bleeding edge of computer science that the next person to do it would design something equally good but utterly different. As I understand it, MPI is generally used to pass a result vector between nodes; at that point, determining the extent of the vector and when it's ready for transfer is going to be highly application-specific, i.e. a library issue.
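
As a rough illustration of that point (a sketch only, in plain C with MPI; the chunk size, the use of MPI_Gather and the made-up "work" are my assumptions, nothing FPC-specific): each node fills its part of the result vector and the root collects them. Everything about how big the vector is and when it's ready lives in application or library code, not in the language.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* Each node computes a partial result vector; the extent (here a fixed
     4 doubles) is an application decision, not something the language knows. */
  enum { CHUNK = 4 };
  double part[CHUNK];
  for (int i = 0; i < CHUNK; i++)
    part[i] = rank * 100.0 + i;            /* stand-in for real work */

  /* The root gathers the per-node vectors; deciding when they are ready
     for transfer is likewise up to the application/library. */
  double *all = NULL;
  if (rank == 0)
    all = malloc((size_t)size * CHUNK * sizeof(double));
  MPI_Gather(part, CHUNK, MPI_DOUBLE, all, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  if (rank == 0) {
    for (int i = 0; i < size * CHUNK; i++)
      printf("%g ", all[i]);
    printf("\n");
    free(all);
  }

  MPI_Finalize();
  return 0;
}

Built with mpicc and run under mpirun, that is about all a language would have to "know", which is why it seems better kept behind a library interface.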

Looking elsewhere in this thread, I believe the number of CPUs can be obtained via the thread-affinity API, which would be of use in TThread; at worst, if you set the affinity to 0xffffffff you get back a vector showing which CPUs actually exist.

However, an interesting issue is that on some architectures the list of available CPUs can have "holes" in it, e.g. I've got a system here with CPUs 0, 1 and 4..15 available.
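
For what it's worth, here is a hedged sketch of that query in C (assuming Linux and glibc's sched_getaffinity; other platforms expose the equivalent differently, e.g. GetProcessAffinityMask on Windows). "Holes" such as my 0, 1 and 4..15 simply show up as unset bits in the mask:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
  cpu_set_t mask;

  /* Query the affinity mask of the calling process (pid 0 = self). */
  if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
    perror("sched_getaffinity");
    return 1;
  }

  /* Walk the mask: CPU numbering may have holes, e.g. 0, 1 and 4..15. */
  for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
    if (CPU_ISSET(cpu, &mask))
      printf("CPU %d is available\n", cpu);

  printf("total: %d CPUs in the mask\n", CPU_COUNT(&mask));
  return 0;
}

Something equivalent wrapped in the RTL would be enough for TThread to know how many (and which) CPUs it can actually use.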

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]