Re: [fpc-other] Build farms etc.

2016-02-27 Thread Mark Morgan Lloyd

Lukasz Sokol wrote:

On 26/02/16 10:20, Mark Morgan Lloyd wrote:
[the history of OpenMOSIX, really good writeup, thanks!]

For me, what drew me to OpenMOSIX was that, unlike Beowulf,
it did not require the programs it was to run to be recompiled
against nifty special libraries. And the members of the cluster
remain usable machines in their own right.

Back at university, in 2004, I attempted to create a lab environment
with OpenMOSIX booted off a couple of CDs of remastered Knoppix, on 4 computers,
running Octave for heavily-used Matlab-like calculations,
since some people could only be bothered to run Matlab or the like.

When it did run, it ran well.
Well enough to convince the members of the exam panel, anyway.


It would also run graphical programs (you probably know that, since 
you mention Octave), provided that they only used OS facilities (i.e. no 
direct hardware access) and didn't use shared memory. That last is a 
killer as far as things like Mozilla/Firefox are concerned. I've got a 
colleague who keeps a lot of browser instances open for extended periods, 
but since each one uses a block of shared memory to coordinate its multiple 
windows it's not possible to spread the load. I've ended up setting him 
up with a big AMD system, and it's been interesting comparing its memory 
management performance with that of the slightly smaller Sun he was using until 
recently: different flavours of the Linux kernel behave very differently.
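To make the browser case concrete, here is a minimal sketch (mine, not anything from Mozilla's source) of the pattern described above: several window-like processes coordinating through one shared-memory block. All names are illustrative; the point is that the shared segment ties every participant to one machine's physical memory, which is exactly what made such programs impossible to migrate.

```python
# Toy illustration of cross-process coordination via shared memory,
# the pattern that prevents OpenMOSIX-style migration. Uses only the
# Python standard library (multiprocessing.shared_memory, 3.8+).
from multiprocessing import Process, shared_memory

def window(shm_name: str, slot: int) -> None:
    # Each "window" process attaches the same segment by name and
    # marks its slot. This coupling through shared physical pages is
    # what a migration layer cannot map across machines.
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[slot] = 1
    shm.close()

if __name__ == "__main__":
    # One coordination block shared by four "windows".
    coord = shared_memory.SharedMemory(create=True, size=4)
    windows = [Process(target=window, args=(coord.name, i)) for i in range(4)]
    for p in windows:
        p.start()
    for p in windows:
        p.join()
    print(sum(coord.buf[:4]))   # all four windows have checked in
    coord.close()
    coord.unlink()
```

Any one of those processes could be moved to another node only if the kernel could somehow keep `coord`'s pages coherent across machines, which the OpenMOSIX patches did not attempt.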


The sort of thing that interested me was the case where somebody had 
PDA-type programs running on a portable system ("Minnie") which could 
offload work to something more capable ("Mike") when its owner got home 
and docked it (wireless doesn't really work here, since connections have 
to be broken fairly carefully). However, that obviously mandates that all 
cooperating systems are binary-compatible, unless this sort of thing is 
reengineered using e.g. Java (I'm sure somebody has done so by now), and that all 
required apps are available (more of an issue).


I think it does rather more than the virtualised systems etc. which are 
so popular these days, and is in practice much closer to some of the 
classic IBM mainframe OSes which could distribute work over sysplexes, 
which is probably why Moshe Bar is seen in some of the IBM foramina.


RIP, there's no equivalent replacement on PCs (which probably suits IBM 
fine).


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]
___
fpc-other maillist  -  fpc-other@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-other


[fpc-other] Build farms etc.

2016-02-26 Thread Mark Morgan Lloyd
I mentioned LinuxPMI (formerly OpenMOSIX) yesterday, and thought it 
might be of interest to somebody if I wrote a few more lines on it.


My understanding is that MOSIX was originally a fork of Linux (or possibly 
some other Unix) written by (students of) Moshe Bar. It was 
open-sourced in the early 00s, but has since been abandoned, as he 
decided that supercomputer-style clustering (MPI, OpenMPI etc.) made it 
redundant.


OpenMOSIX comprised patches to the Linux 2.4.x kernel which allowed the 
main code of an application program to be moved between machines while 
it was running, with a stub left on the original system to handle kernel 
interaction. It worked at the process (not thread) level, and would 
decline to handle a program that it detected was using something 
unmappable like shared memory.
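OpenMOSIX made that eligibility decision inside the kernel, but the condition it was testing for can be illustrated from user space. The sketch below (my own illustration, not OpenMOSIX code) scans the Linux `/proc/self/maps` table for writable mappings flagged as shared, which is the kind of resource a migrated process image could not carry to another machine:

```python
# Detect writable shared mappings in the current process, the
# condition that made a process unmigratable under OpenMOSIX.
# Linux-only: relies on the /proc/[pid]/maps format from proc(5).
import mmap

def has_shared_writable_mappings() -> bool:
    """Return True if any mapping's permission field (e.g. "rw-s")
    is both writable ('w') and shared ('s' in the last column)."""
    with open("/proc/self/maps") as maps:
        for line in maps:
            perms = line.split()[1]
            if "w" in perms and perms[3] == "s":
                return True
    return False

# An anonymous mmap with the default MAP_SHARED flag creates exactly
# such a mapping, so the check fires while it exists.
m = mmap.mmap(-1, 4096)
print(has_shared_writable_mappings())   # prints True
m.close()
```

A migration layer effectively has to answer this question for every candidate process; OpenMOSIX simply declined to move anything for which the answer was yes.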


It did not attempt to implement a single system image spread over all 
available processors, and within limits each collaborating system could 
have a different kernel build. Irrespective of that, the result was 
extraordinarily flexible, with long-runtime processes being moved 
between systems depending on resource availability, and with a system 
being purged of its processes on (controlled) shutdown.
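To make that behaviour concrete, here is a toy model (not OpenMOSIX code, and far simpler than its real load metrics) of the two policies just described: a process sits on the least-loaded node, and a node being shut down in a controlled way hands its processes back to the rest of the pool. The node names borrow "Mike" and "Minnie" from earlier in the thread:

```python
# Toy sketch of load-driven placement and drain-on-shutdown,
# the two behaviours described for OpenMOSIX above. All names
# and the load metric are illustrative assumptions.
def pick_target(loads: dict) -> str:
    """Choose the node with the lowest current load."""
    return min(loads, key=loads.get)

def drain(node: str, placement: dict, loads: dict) -> None:
    """Controlled shutdown: migrate every process off `node`."""
    loads.pop(node)
    for proc, host in placement.items():
        if host == node:
            placement[proc] = pick_target(loads)
            loads[placement[proc]] += 1

loads = {"mike": 1, "minnie": 3}
placement = {"kernel-build": "minnie"}
drain("minnie", placement, loads)       # minnie is shut down...
print(placement["kernel-build"])        # ...so the job moves to mike
```

The real system made this decision continuously for every long-running process, not just at shutdown, which is what made watching a build spread across the pool so striking.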


As a particular example, it was trivial to start a kernel build on one 
system and then to watch it spread over others as they were added to the 
pool. Quite frankly, this was one of the most impressive things I've 
seen during my time in the industry.


Unfortunately, it never made the transition to kernel 2.6 or to non-x86 
processors. I still think this was a mistake on the part of the owner, 
since while build farms are good at distributing work at the makefile 
level and things like OpenMPI work well for specially-written programs, 
OpenMOSIX did a particularly good job with arbitrary, unmodified code.


There has been an attempt to migrate the kernel patches to Linux 2.6 as 
LinuxPMI (Process Migration Infrastructure), but they will only apply to 
a very limited version range and are untested. What's more, the 2.6 
kernel will only compile on a limited number of mainstream distreaux; 
from memory, I was able to find a small overlap between viable kernel 
versions and Debian "Lenny", but that's about as far as I got.


So basically, that's it. Moshe Bar has moved on (he occasionally 
appears in some of the IBM mainframe groups on Yahoo!, and this sort of 
thing really is a mainframe capability), there are few if any active 
programmers in the open source community who understand how it worked, 
and I for one am far too low on the kernel and C learning curves to be 
much use by myself... not to mention having far too much on my plate as 
it is.


http://linuxpmi.org/trac/ plus a FreeNode channel.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]
___
fpc-other maillist  -  fpc-other@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-other