glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:
> I sort of know how the algorithms work, but now I looked at:
>
> http://en.wikipedia.org/wiki/Page_replacement_algorithm
>
> I had thought that for the clock algorithm that there would be
> some parameter that affects how the clock works, a time
> constant of some kind. The above page doesn't seem to describe
> one, though. But for the adaptive CAR algorithm, I could easily
> imagine the two would sync with each other. On the other hand,
> random replacement shouldn't have such problems. 

re:
http://www.garlic.com/~lynn/2012b.html#98 5 Byte Device Addresses?
http://www.garlic.com/~lynn/2012b.html#100 5 Byte Device Addresses?
http://www.garlic.com/~lynn/2012c.html#16 5 Byte Device Addresses?
http://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses?

misc. past posts mentioning page replacement & virtual memory management
http://www.garlic.com/~lynn/subtopic.html#clock

long-winded description of clock (a class of algorithms that attempt to
approximate LRU replacement) and a sleight-of-hand hack on clock that I
did in the early 70s that would dynamically switch between
approximate-LRU and random.

so the simplest, which I did in the 60s, was a one-handed clock that
rotated around resetting/selecting virtual pages ... so the elapsed time
between resetting the reference bit and again examining the page for
use/replacement was the time to completely sweep all pages. This
resulted in a dynamically adapting algorithm ... the greater the demand,
the faster it rotated ... however, the faster it rotated, the smaller
the interval between reset & re-examine ... and the smaller interval
increases the number of pages found not referenced, which slows the
rotation back down ... two opposing effects that result in dynamically
adapting to configuration/supply and workload/demand. The idea isn't to
find a page that hasn't been used in a fixed amount of time but to
differentiate the less used from the more used (which is going to be
relative based on configuration and load).
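The one-hand sweep described above can be sketched roughly like this (a
minimal Python sketch; the class and method names are illustrative, not
from any actual system):

```python
class OneHandClock:
    """Single hand both resets reference bits and selects victims;
    a page gets one full rotation to prove reuse before replacement."""

    def __init__(self, nframes):
        self.frames = [None] * nframes   # virtual page held by each frame
        self.ref = [False] * nframes     # per-frame reference bits
        self.hand = 0

    def touch(self, page):
        """Reference a page; returns True on a page fault."""
        if page in self.frames:
            self.ref[self.frames.index(page)] = True
            return False                 # hit: hardware sets reference bit
        victim = self.select()
        self.frames[victim] = page
        self.ref[victim] = True
        return True                      # fault: victim replaced

    def select(self):
        # Sweep: a referenced frame gets its bit reset and a "second
        # chance"; the first frame found unreferenced is replaced.
        while True:
            i = self.hand
            self.hand = (self.hand + 1) % len(self.frames)
            if self.frames[i] is None or not self.ref[i]:
                return i
            self.ref[i] = False
```

Note the adaptive behavior falls out for free: under heavy fault traffic
`select` is called more often, the hand sweeps faster, and the
reset-to-examine interval shrinks.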

So the one-handed clock has a single hand/cursor doing both the
resetting & the selecting as it travels around all virtual pages. The
two-handed clock has the hand/cursor doing the resetting traveling
around all pages at a fixed offset ahead of the hand/cursor doing the
selection. The issue here is that while the one-handed clock dynamically
adapts ... past a certain elapsed time, when there is a really large
number of pages, the LRU assumptions break down ... if you haven't
reset/examined a virtual page for a very long time, there is little
predictive correlation about whether a specific page will or won't be
used in the near future. Having the reset of the used/reference bit less
than a full rotation ahead of the examine tries to keep the elapsed time
between reset & examine below the threshold where the interval is still
predictive.
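The fixed-offset reset can be sketched by running the reset hand a
constant number of frames ahead of the selection hand (again an
illustrative Python sketch, not code from any actual system):

```python
class TwoHandClock:
    """Reset hand runs a fixed offset ahead of the selection hand, so a
    page has only `offset` frames of sweep distance to prove reuse."""

    def __init__(self, nframes, offset):
        self.frames = [None] * nframes   # virtual page held by each frame
        self.ref = [False] * nframes     # per-frame reference bits
        self.hand = 0                    # selection hand
        self.offset = offset             # reset hand leads by this much

    def touch(self, page):
        """Reference a page; returns True on a page fault."""
        if page in self.frames:
            self.ref[self.frames.index(page)] = True
            return False
        victim = self.select()
        self.frames[victim] = page
        self.ref[victim] = True
        return True

    def select(self):
        n = len(self.frames)
        while True:
            # leading hand clears the bit `offset` frames ahead; the
            # trailing hand replaces the first unreferenced frame it finds
            self.ref[(self.hand + self.offset) % n] = False
            i = self.hand
            self.hand = (self.hand + 1) % n
            if self.frames[i] is None or not self.ref[i]:
                return i
```

With a small offset, only pages re-referenced within the last `offset`
frames of sweep survive, keeping the reset-to-examine interval short
even when the total number of pages is very large.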

So that is the standard clock ... which attempts to approximate "true
LRU" (where all virtual pages are exactly ordered by most recent
reference ... based on the theory that the page least recently used in
the past is the least likely to be used in the future ... for some
specific kinds of access patterns).
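For reference, "true LRU" (the exact ordering that clock only
approximates) can be sketched in a few lines; `lru_faults` is just an
illustrative name:

```python
from collections import OrderedDict

def lru_faults(trace, nframes):
    """Count page faults under exact LRU: resident pages kept in strict
    recency order, least recently used evicted when memory is full."""
    resident = OrderedDict()            # oldest-first recency order
    faults = 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)  # now the most recently used
        else:
            faults += 1
            if len(resident) == nframes:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = None
    return faults
```

The exact ordering is what makes true LRU impractical in a real kernel
(an update on every reference) ... which is exactly why the clock
approximations, driven by cheap hardware reference bits, exist.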

The problem is that there are a number of situations that violate the
correlation between use in the past and use in the future. In the early
70s, I did a sleight-of-hand hack on the two-handed clock ... where the
code appeared to look&taste almost exactly like two-handed clock ...
except it had the peculiar characteristic of approximating "true LRU" in
conditions where LRU did well and approximating "random" in conditions
where LRU performed poorly (dynamically, w/o any observable change in
the code executed).

In simulation studies with full instruction tracing ... it was possible
to compare various clock implementations, as well as various other kinds
of LRU-approximation algorithms, against "true LRU" (i.e. keeping exact
ordering of page references and exactly choosing the least recently
used) ... the various approximations would tend to perform within
approximately 10-15 percent of "true LRU". However, my sleight-of-hand
hack on clock could perform approximately 10 percent better than "true
LRU".
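The kind of condition where LRU does poorly (and random doesn't) is easy
to demonstrate: a cyclic sweep over one more page than fits in memory is
LRU's worst case, since LRU always evicts exactly the page needed next.
A small sketch (illustrative code, not the simulation tooling the post
describes):

```python
import random
from collections import OrderedDict

def lru_faults(trace, nframes):
    """Faults under exact LRU: evict least recently used on a full fault."""
    resident, faults = OrderedDict(), 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)
        else:
            faults += 1
            if len(resident) == nframes:
                resident.popitem(last=False)
            resident[page] = None
    return faults

def random_faults(trace, nframes, seed=0):
    """Faults under random replacement: evict a uniformly random frame."""
    rng = random.Random(seed)
    resident, faults = set(), 0
    for page in trace:
        if page not in resident:
            faults += 1
            if len(resident) == nframes:
                resident.remove(rng.choice(sorted(resident)))
            resident.add(page)
    return faults

# cyclic sweep over 5 pages with only 4 frames: every reference faults
# under LRU, while random replacement gets some hits by luck
trace = list(range(5)) * 40
```

On this trace `lru_faults(trace, 4)` faults on all 200 references; a
replacement policy that detects such conditions and falls back to random
(as the sleight-of-hand hack did) can beat "true LRU" there.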

However, with two recursive algorithms (one running virtually under the
other) where both approximate LRU (even if the exact code is different)
... the 2nd-level algorithm would tend to exhibit the behavior that the
least recently used pages were the most likely to be used next (because
they are the ones selected for replacement) ... at least from the
standpoint of the lowest-level algorithm (violating the LRU assumption
that the least recently used pages are the least likely to be used in
the near future).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
