On 31 Jan 00, at 11:38, George Woltman wrote:
> GIMPS has always had a good reputation for not interfering with
> your normal work. To preserve GIMPS' reputation, I'm thinking of
> implementing the following. In the Options/CPU dialog, prime95 will let
> you select the maximum amount of memory the program can use and the hours
> of the day it can use it. The default would be 80% of RAM (divided by the
> number of CPUs) during nighttime hours only.
Hmm. Small-memory systems tend to have a smaller percentage of system
RAM available to applications than systems with more memory. The
point is that the OS kernel & "essential" DLLs/loadable modules are
fixed in size (for a particular hardware & OS setup).
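For illustration, the proposed default could be computed as below. This is only a rough sketch; the 80% figure and the per-CPU split come from George's proposal, while the function name and units are mine.

```python
def default_memory_cap_mb(total_ram_mb, num_cpus, fraction=0.8):
    """Default P-1 workspace cap: a fraction of total RAM, split per CPU.

    Note the caveat above: on a small-memory machine the kernel and
    resident DLLs take a larger *proportion* of RAM, so a flat
    percentage overstates what is actually free for applications.
    """
    return int(total_ram_mb * fraction / num_cpus)

# e.g. a 64 MB single-CPU machine gets a 51 MB cap:
print(default_memory_cap_mb(64, 1))  # -> 51
```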
>
> Finally, the questions:
>
> Would we be better off disabling P-1 factoring unless the user explicitly
> activates it (knowing that most users won't read enough to turn it on)?
Sounds sensible - especially as an interim measure. When we get more
experience we may be able to change this decision in favour of
enabling P-1 with a small memory work space as the default.
Whatever the default is, we should probably tell users what is going
on the first time they run a V20 program (i.e. the first time V20
creates local.ini, or runs against an existing local.ini file).
We should also be looking at bringing P-1 factoring into the PrimeNet
assignment allocation/results reporting system, if we want to
encourage people to participate on a reasonable scale.
>
> Are there better solutions? It would be nice if prime95 could detect that
> memory thrashing was happening and pause itself until more memory was
> available. Can Windows programs do this?
Yes - there must be APIs, since existing Windows applications measure
such things, including the tuning aids supplied in the Microsoft
Resource Kits for Win 95 and NT. Look at the page fault I/O rate:
anything above 1 per second, sustained for any length of time,
indicates a shortage of physical memory. Note that it is _usual_ for
systems to operate with more virtual memory demand, summed over
active applications, than physical memory available, since many
applications contain static work arrays and/or major blocks of code
(especially code called from language support libraries) which are
either shared or not used by whatever tasks the applications are
doing at the time.
Measuring the page fault rate (or, crudely, the I/O rate on whichever
device(s) have swap files mounted) is also as good a way as any of
identifying memory shortages on a Linux system.
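On Linux the kernel exports fault counters in a simple "name value" text format (on modern kernels, /proc/vmstat with counters named pgfault/pgmajfault - an assumption on my part for older systems). A sketch of the measurement, applying the "above 1 per second sustained" rule of thumb:

```python
def parse_vmstat(text):
    """Parse /proc/vmstat-style 'name value' lines into a dict."""
    counters = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            counters[parts[0]] = int(parts[1])
    return counters

def fault_rate(sample_a, sample_b, interval_s, key="pgmajfault"):
    """Major (I/O-causing) page faults per second between two snapshots."""
    return (sample_b[key] - sample_a[key]) / interval_s

# Two snapshots taken 10 s apart (illustrative numbers, not real data):
a = parse_vmstat("pgfault 1000\npgmajfault 50\n")
b = parse_vmstat("pgfault 1400\npgmajfault 75\n")
print(fault_rate(a, b, 10.0))  # -> 2.5 faults/s: sustained, that suggests thrashing
```

In a real client the snapshots would come from re-reading /proc/vmstat on a timer rather than from literal strings.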
Is there any realistic way of implementing the required memory space
as "virtual memory" using a random access file instead of a plain
memory workspace, or would that cause excessive overheads? The reason
I ask is that, if it were done that way, the system would (more or
less) tune itself according to memory availability on all major OSes.
Windows 9x, NT and Linux all use "slack" memory to buffer disk I/O.
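The idea above can be sketched with a memory-mapped file: the workspace lives in a file, the OS pages it in and out on demand, and memory pressure from foreground applications naturally evicts the idle parts. A minimal sketch (whether the extra paging overhead is acceptable for P-1 is exactly the open question):

```python
import mmap
import tempfile

# Back a 1 MiB workspace with a temporary file instead of anonymous RAM.
f = tempfile.TemporaryFile()
f.truncate(1 << 20)                 # extend the file to the workspace size
ws = mmap.mmap(f.fileno(), 1 << 20)

# The mapping is then used like ordinary memory:
ws[0:4] = b"\x01\x02\x03\x04"
print(ws[0:4])  # -> b'\x01\x02\x03\x04'
```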
> Are the defaults too aggressive (especially the 80% of RAM)?
If the system is going to be "self tuning" in any way, it should
endeavour to use _all_ the available RAM (not just 80%), provided
that it gets itself out of the way when a foreground application
demands memory. Preferably Prime95 would contract its workspace - at
the expense of its own efficiency - rather than just suspending
itself. Alternatively, if there really isn't sufficient memory to
continue running P-1 (phase 2) profitably for the time being, it
could automatically switch to something else with a smaller memory
footprint (ECM on small exponents, trial factoring, or starting
phase 1 on another P-1 job if the problem is caused by being in
phase 2), switching back automatically when the crisis passes. This
would mean interrupting the other task for a "system status check"
every few minutes, though the overhead should be manageable.
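The fallback policy above reduces to a small decision made at each periodic status check. A hypothetical sketch (names and units are mine, not prime95's):

```python
def choose_task(available_mb, phase2_need_mb):
    """Periodic "system status check": run P-1 phase 2 only while its
    workspace fits in currently available memory; otherwise fall back
    to a low-memory task (trial factoring, ECM on small exponents, or
    phase 1 of another P-1 job) until the crisis passes."""
    if available_mb >= phase2_need_mb:
        return "p-1-phase-2"
    return "low-memory-fallback"

print(choose_task(200, 150))  # -> p-1-phase-2
print(choose_task(80, 150))   # -> low-memory-fallback
```

Called every few minutes, a check like this switches back to phase 2 automatically as soon as enough memory is free again.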
Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers