On 07/21/14 17:35, Roland Kaufmann wrote:

On 2014-07-22 00:14, wireless wrote:
Once these modules are verified to work, I'll be posting the ebuilds
(gentoo software packages) up on the net for others to enjoy.

Please consider donating them to a public Git repository here, e.g. we
could create opm/opm-gentoo, so that users could add them directly using
something like `layman --add opm
--overlays=https://raw.github.com/opm/opm-gentoo/master/overlay.xml`

Quick answer: Non_problemo.

There is a bit of cleanup, testing and organizational work to do on the Gentoo modules as they stand before publicly releasing the codes (ebuilds). Even in an overlay, the Gentoo community has very high (the highest?) standards for packaging: not only must the ebuilds work, but the "flag options" (USE flags) have to be properly identified and tested. Having a long relationship with Gentoo, I'm not about to release something that brings about bitching (scientist types are the worst, which is good), even in overlays. I'm guessing a few more weeks, but if "Lady Luck" finds pleasure in the Gentoo effort, we shall see how quickly we can get these modules ready. That said, Gentoo is the best distro for building 100% from source with optimization. Gentoo also has a very active ARM64 (aarch64) team and will surely be one of the first Linux distros to run on massively parallel 64-bit ARM processors, very soon.


Additionally, from my review and scant understanding of the codes, they
need to be reorganized such that every module can be tested and (where applicable) the mathematical techniques underlying the codes are easy to replace and to test individually for performance improvements and accuracy. Ideally, where a choice exists among the various (competing) math codes, a simple (GUI) button would make those simulation runs easily selectable, while ensuring that both data sets are kept for future runs and reference; see the sketch below.
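
To make that concrete, here is a minimal sketch in C++ of what "easy to replace and individually test" could look like: competing math codes behind one common interface, with a single selection point. All names here (LinearSolver, JacobiSolver, GaussSeidelSolver, make_solver) are hypothetical illustrations, not anything taken from the existing OPM code base.

#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

// Common interface: every competing implementation solves A*x = b and
// reports its own name, so runs can be tagged and compared.
struct LinearSolver {
    virtual ~LinearSolver() = default;
    virtual std::string name() const = 0;
    virtual Vector solve(const Matrix& A, const Vector& b) const = 0;
};

// Candidate 1: plain Jacobi iteration.
struct JacobiSolver : LinearSolver {
    std::string name() const override { return "jacobi"; }
    Vector solve(const Matrix& A, const Vector& b) const override {
        Vector x(b.size(), 0.0), next(b.size(), 0.0);
        for (int iter = 0; iter < 500; ++iter) {
            for (std::size_t i = 0; i < b.size(); ++i) {
                double s = b[i];
                for (std::size_t j = 0; j < b.size(); ++j)
                    if (j != i) s -= A[i][j] * x[j];
                next[i] = s / A[i][i];
            }
            x.swap(next);
        }
        return x;
    }
};

// Candidate 2: Gauss-Seidel -- same interface, different numerics
// (updates in place, so it typically needs fewer sweeps to converge).
struct GaussSeidelSolver : LinearSolver {
    std::string name() const override { return "gauss-seidel"; }
    Vector solve(const Matrix& A, const Vector& b) const override {
        Vector x(b.size(), 0.0);
        for (int iter = 0; iter < 500; ++iter) {
            for (std::size_t i = 0; i < b.size(); ++i) {
                double s = b[i];
                for (std::size_t j = 0; j < b.size(); ++j)
                    if (j != i) s -= A[i][j] * x[j];
                x[i] = s / A[i][i];
            }
        }
        return x;
    }
};

// The single selection point: pick an implementation by name at run time.
std::unique_ptr<LinearSolver> make_solver(const std::string& choice) {
    if (choice == "jacobi")       return std::make_unique<JacobiSolver>();
    if (choice == "gauss-seidel") return std::make_unique<GaussSeidelSolver>();
    throw std::invalid_argument("unknown solver: " + choice);
}

int main() {
    const Matrix A = {{4.0, 1.0}, {1.0, 3.0}};   // small, diagonally dominant test system
    const Vector b = {1.0, 2.0};
    for (std::string choice : {"jacobi", "gauss-seidel"}) {
        auto solver = make_solver(choice);
        Vector x = solver->solve(A, b);
        std::cout << solver->name() << ": x = (" << x[0] << ", " << x[1] << ")\n";
    }
}

Adding a third candidate (say, a GPU-backed solver) would then only mean one more class and one more line in make_solver; the selection could just as easily hang off a GUI control, and the results of each choice can be written out side by side for reference.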


If we break out the mathematics, clearly, we can get all
sorts of phenomenal coders to create components that compete on performance vs. accuracy in the model. These underlying math-code choices need to be clearly delineated and separated from the model assumptions, so that a collection of simulation runs can be gathered up in preparation for "REAL-TIME" rendering. Whether database support is necessary needs to be tested and evaluated. Similarly, porting the modules that are bottlenecks to run over a distributed (clustered) file system will open the door to massively parallel approaches as well as to specialty processors such as GPUs, FPGAs and ARM64.
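
As an illustration of keeping the math-code choice separate from the model assumptions, here is a small self-contained C++ sketch that records each run with the two kept apart, appending one line per run so a collection of runs builds up for later comparison or rendering. Every struct and field name here is hypothetical, not an existing OPM format.

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>

// Model assumptions: what is being simulated.
struct ModelAssumptions {
    double porosity;
    double permeability_md;   // millidarcy
    int    grid_cells;
};

// Math-code choice: how the numbers are ground out.
struct MathCodeChoice {
    std::string linear_solver;    // e.g. "cg", "bicgstab"
    std::string preconditioner;   // e.g. "ilu0", "amg"
    double      tolerance;
};

// One collected run: the two inputs plus measured cost and accuracy.
struct RunRecord {
    ModelAssumptions model;
    MathCodeChoice   math;
    double wall_seconds;
    double residual_norm;
};

// Append a run as one CSV line, so a collection of runs accumulates over time.
void append_run(const std::string& path, const RunRecord& r) {
    std::ofstream out(path, std::ios::app);
    out << r.model.porosity << ',' << r.model.permeability_md << ','
        << r.model.grid_cells << ',' << r.math.linear_solver << ','
        << r.math.preconditioner << ',' << r.math.tolerance << ','
        << r.wall_seconds << ',' << r.residual_norm << '\n';
}

int main() {
    ModelAssumptions model{0.2, 150.0, 100000};
    MathCodeChoice   math{"cg", "ilu0", 1e-8};

    auto t0 = std::chrono::steady_clock::now();
    // ... the actual simulation would run here ...
    auto t1 = std::chrono::steady_clock::now();

    RunRecord rec{model, math,
                  std::chrono::duration<double>(t1 - t0).count(),
                  /* residual_norm = */ 1e-9};
    append_run("runs.csv", rec);
    std::cout << "recorded run with solver " << rec.math.linear_solver << '\n';
}

Whether those records end up in flat files or a database can then be decided later, once the volume of collected runs shows which is actually needed.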


ARM64 is the dominant processor because of a paradigm shift in the fundamentals of processor design. The old constraint that Intel dominated was compaction; that is, the simple fact that the best gains in processor design came from how densely one could pack the logic gates. But with the geometries of the logic gates now below 10 nm, the shift means that minimizing heat generation dominates processor design. So for general-purpose processors, ARM has laid that competition to rest (at least for the next decade), as evidenced by the myriad of companies adopting ARM designs for competitive advantage. I look forward to kicking Intel to the curb. For those portions of the codes requiring SIMD or MIMD with custom, finely tuned algorithms, porting to run on 'bare metal' via either GPUs or FPGAs is still the current best solution.


Fully implementing competing approaches will allow the simulation runs to be "component aware", so that repeated simulation runs only have to "grind" (recompute) the numbers on data that has changed. If/when we reach this level of intelligence in the simulation architecture, we can run simulations in real time (defined here as less than one second of latency on the graphical output) while simultaneously making "what if" changes to the model assumptions and to the underlying math codes used to grind out the new numbers.
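
A minimal sketch of what "component aware" could mean in practice (again hypothetical, not an existing OPM mechanism): cache each stage's output under a fingerprint of its inputs, so a repeated run only re-grinds the stages whose inputs actually changed.

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Cache keyed by (stage name, hash of that stage's inputs).
std::map<std::pair<std::string, std::size_t>, std::vector<double>> cache;

// Run a stage, or reuse the cached result if its inputs are unchanged.
std::vector<double> run_stage(
    const std::string& stage,
    const std::vector<double>& inputs,
    const std::function<std::vector<double>(const std::vector<double>&)>& compute)
{
    std::size_t h = 0;
    for (double v : inputs)                       // cheap input fingerprint
        h ^= std::hash<double>{}(v) + 0x9e3779b9 + (h << 6) + (h >> 2);

    auto key = std::make_pair(stage, h);
    auto it = cache.find(key);
    if (it != cache.end()) {
        std::cout << stage << ": inputs unchanged, reusing cached result\n";
        return it->second;
    }
    std::cout << stage << ": inputs changed, recomputing\n";
    std::vector<double> result = compute(inputs);
    cache[key] = result;
    return result;
}

int main() {
    auto doubler = [](const std::vector<double>& in) {
        std::vector<double> out;
        for (double v : in) out.push_back(2.0 * v);   // stand-in for the real math
        return out;
    };

    run_stage("pressure", {1.0, 2.0, 3.0}, doubler);  // computed
    run_stage("pressure", {1.0, 2.0, 3.0}, doubler);  // reused from cache
    run_stage("pressure", {1.0, 2.5, 3.0}, doubler);  // recomputed: one input changed
}

Run the same stage twice with identical inputs and the second call comes straight from the cache; change one number in the model assumptions and only the affected stage is recomputed, which is the kind of saving needed to stay inside a one-second latency budget on the graphical output.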


Speed kills; this is a universally accepted concept. If you want OPM to be "the killer application" (used to benchmark clusters and such), then every piece of code needs to be well defined and modular so it can be "fair game" for replacement by a better-performing (faster, more accurate) component. I hope (and pray) that we are all on the same page? I do strive to be a "team player", but compromise does not equate to mediocrity, in my genetics; hence the need to let every piece of code become "fair game" for improvement.


Is there an "OPM repository" to collect scholarly publications, particularly those of mathematical significance, for the convenience of OUR TEAM?


Sincerely,
James


_______________________________________________
Opm mailing list
[email protected]
http://www.opm-project.org/mailman/listinfo/opm
