"Eric P." <[EMAIL PROTECTED]> writes:
> VMS and WNT use Second Chance Fifo, which has very different behavior
> to strict Fifo, and is reputed to have the same behavior as WSClock.
> VMS also has an option for a third chance - I don't know if WNT
> also has that. This gives them all the control advantages that
> local working sets allow with the same paging statistics as global.
>
> In second chance fifo, pages removed from a local working set are
> tossed into a global Valid list to become a candidate for recycling.
> If referenced again quickly the page is pulled back into the local
> working set for almost no cost. This is essentially the same as
> the WSClock and its referenced bits.
>
> In 3rd chance, VMS allows a page to make 2 trips through the working
> set list. After the first trip, a flag is set on the working set entry,
> it goes to the tail of the list, and the PTE's valid flag is cleared.
> If it gets touched again then the handler just enables the PTE.
> When it gets to the head of the list again the PTE is checked to
> see if it was referenced. If it was, it cycles again, otherwise
> it goes into the global Valid list. [1]
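the second chance mechanism described above can be sketched roughly as
follows (a python simulation of the reference handling, with
hypothetical names ... not the actual VMS/WNT implementation):

```python
from collections import deque

def second_chance_evict(fifo, referenced):
    """Pick a victim page from a FIFO queue, giving recently
    referenced pages one extra trip through the queue.

    `fifo` is a deque of page ids, oldest at the head;
    `referenced` is the set of page ids whose (simulated)
    reference bit is currently set."""
    while True:
        page = fifo.popleft()
        if page in referenced:
            referenced.discard(page)   # clear the reference bit
            fifo.append(page)          # second chance: back to the tail
        else:
            return page                # unreferenced: recycle this one

fifo = deque([1, 2, 3])
referenced = {1}
victim = second_chance_evict(fifo, referenced)
# page 1 was referenced, so it goes to the tail and page 2 is taken
```

the third-chance variant is the same loop with a per-entry trip
counter instead of a single bit.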

as mentioned in the previous post
http://www.garlic.com/~lynn/2006i.html#22 virtual memory
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#28 virtual memory
http://www.garlic.com/~lynn/2006i.html#30 virtual memory
http://www.garlic.com/~lynn/2006i.html#31 virtual memory
http://www.garlic.com/~lynn/2006i.html#32 virtual memory
http://www.garlic.com/~lynn/2006i.html#33 virtual memory
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#38 virtual memory
http://www.garlic.com/~lynn/2006i.html#39 virtual memory
http://www.garlic.com/~lynn/2006i.html#40 virtual memory
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory

some of the work that i had done in the 60s as an undergraduate for
cp67 ... had been dropped in the morph from cp67 to vm370. i was able
to rectify that with the resource manager released 11may1976.

the cp67 "clock" scanned pages by their real storage address.
basically the idea behind "clock" resetting and then testing the
reference bit is that the time it takes to cycle completely around all
virtual pages is approximately the same reset-to-test interval for
every page.

one of the advantages of clock type implementation that i had done in
the 60s was that it had some interesting dynamic adaptive stuff.  if
there weren't enuf pages, the replacement algorithm would be called
more frequently ... causing it to cycle through more pages faster.  as
it did the cycle quicker ... there was shortened time between the time
a page had its reference reset and then tested again. with the
shortened cycle time, there tended to be more pages that hadn't had a
chance to be used and therefore have their reference bit turned on
again. as a result, each time the selection was called on a page
fault, fewer pages had to be examined before finding one w/o its
reference set. if the avg. number of pages examined per page fault
was reduced ... then it increased the total time to cycle through
all pages (allowing more pages to have a chance to be used and
have their reference bit set).
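a minimal sketch of that clock scan (python; hypothetical names, just
illustrating the reset/test cycle and the examined-per-fault count,
not the cp67 code):

```python
def clock_select(ref_bits, hand):
    """One clock selection pass over real-storage frames.

    Frames are scanned in fixed (real address) order. A set
    reference bit is reset and the frame skipped; a frame whose
    bit is still clear from the previous pass is selected.
    Returns (victim frame, new hand position, frames examined)."""
    n = len(ref_bits)
    examined = 0
    while True:
        examined += 1
        if ref_bits[hand]:
            ref_bits[hand] = 0        # reset; tested again next cycle
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n, examined

ref_bits = [1, 1, 0, 1]
victim, hand, examined = clock_select(ref_bits, 0)
# frames 0 and 1 have their bits reset; frame 2 is selected
```

the adaptive behavior falls out of the loop: under heavier paging the
selection is invoked more often, fewer bits get set again between
passes, and the examined-per-fault count drops ... which in turn
stretches the time to complete a full cycle.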

part of the vm370 morph was that it changed the page scanning from
real storage address order (which distributed virtual pages fairly
randomly) to a linked list. one of the side-effects of the linked list
management was that it drastically disturbed the basic premise under
which clock operated. with the virtual pages' position in the list
constantly being perturbed ... it was no longer possible to assert
that the time between a page having its reference bit reset (and not
being taken) and the time it was examined again ... was in any way
related to the avg. time it took clock to cycle through all pages.

basically a single reference bit represented some amount of activity
history related to the use of that specific page. in clock the
avg. amount of activity history that a reference bit represents is the
interval between the time the bit was reset and the time it was
examined again. on the avg. that is the interval that it takes clock
to cycle through all pages ... and is approximately the same for all
pages. if pages were being constantly re-ordered on the list (that is
being used by clock to examine pages) ... there is much less assurance
that the interval between times that a specific page was examined in
any way relates to the avg. elapsed time it takes clock to make one
complete cycle through all pages. this perturbs and biases how pages
are selected in ways totally unrelated to the overall system avg. of
the interval between one reset/examination and the next ... basically
violating any claim as to approximating a least recently used
replacement strategy.

because of the constant list re-order in the initial vm370
implementation ... it was no longer possible to claim that it actually
approached a real approximation of a least recently used replacement
strategy. on the "micro" level ... they claimed that the code made
complete cycles through the list ... just like the implementation that
cycled through real storage. however, at the "macro" level, they
didn't see that the constant list re-ordering invalidated basic
assumptions about approximating least recently used replacement
strategy.

the other thing about the initial morph to vm370 was that "shared"
virtual memory pages were not included in the standard list for
selection ... so they were not subject to the same
examine/reset/examine replacement cycle as non-shared pages. this was
downplayed by saying that it only amounted to, at most, 16 shared
pages.

well a couple releases came and went ... and they then decided
to release a small subset of my memory mapping stuff as
something called discontiguous shared segments. recent post
on the subject in this thread
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370

basically the support in vm370 for having more than a single shared
segment ... and some number of my changes to cms code to make it r/o
(i.e. be able to operate in a r/o protected shared segment)
... various collected posts
http://www.garlic.com/~lynn/subtopic.html#mmap
http://www.garlic.com/~lynn/subtopic.html#adcon

in any case, this drastically increased the possible amount of shared
virtual pages ... that were being treated specially by the list-based
replacement algorithm ... and not subject to the same least recently
used replacement strategies as normal virtual pages ... some shared
virtual page at any specific moment might only be relatively lightly
used by a single virtual address space ... even tho it appeared in
multiple different virtual address spaces; aka its "sharing"
characteristic might have little to do with its "used" characteristic
(but the "sharing" characteristic was somewhat being used in place of
its actual "use" characteristic for determining replacement selection).

however, i was able to rectify that when they allowed me to ship
resource manager several months later on 11may76 ... and put the
replacement strategy back to the way I had implemented it for
cp67 as an undergraduate in the 60s.
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

so i had a similar but different argument with the group doing os/vs2
... the morph of real-memory os/360 mvt with support for virtual
memory. recent post in this thread about other aspects of that
effort
http://www.garlic.com/~lynn/2006i.html#33 virtual memory

they were also claiming that they were doing a least recently used
replacement strategy. however, their performance group did some simple
modeling and found that if they chose non-changed least recently used
pages ... before choosing changed least recently used pages ... the
service time to handle the replacement was drastically reduced.  a
non-changed page already had an exact duplicate out on disk ...  and
therefor replacement processing could simply discard the copy in
virtual memory and make the real memory location available. a
"changed" page selected for replacement, first had to be written to
disk before the real memory location was available. first attempting
to select non-changed pages for replacement significantly reduced the
service time and processing. i argued that such an approach basically
perturbed and violated any claim to approximating a least recently
used replacement strategy. they did it anyway.
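the non-changed-first policy can be sketched like this (python;
hypothetical names ... a toy model of the selection decision, not the
os/vs2 code):

```python
def select_victim(lru_list):
    """Pick a replacement victim, preferring non-changed pages.

    `lru_list` is a list of (page_id, changed) tuples, least
    recently used first. A non-changed page has an exact copy on
    disk and can simply be discarded; a changed page must first be
    written out. Returns (victim page id, disk writes needed)."""
    for page_id, changed in lru_list:
        if not changed:
            return page_id, 0        # discard in place, no write
    return lru_list[0][0], 1         # all changed: pay the write-back

# page 10 is the least recently used but changed, so it is skipped
lru = [(10, True), (11, False), (12, True)]
victim, writes = select_victim(lru)
```

note it takes page 11 even though page 10 is less recently used ...
exactly the departure from least recently used ordering at issue.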

so os/vs2 svs eventually morphed into os/vs2 mvs ... and then they
shortened the name to just calling it mvs. customers had been using it
for some number of years ... it was coming up on 1980 ... and somebody
discovered that high-usage, shared executable images (i.e. the same
executable image appearing in lots of different virtual address spaces
and being executed by lots of different applications) were being
selected for replacement before low-usage application data pages. The
high-usage, shared executable images were "read-only" ...  aka they
were never modified and/or changed. The low-usage application data
areas were constantly being changed. As a result, the high-usage
(executable, shared) pages were being selected for replacement before
low-usage application data pages.

in much the same way that the vm370 page list management was
constantly and significantly changing the order in which pages were
examined for replacement ... invalidating the basic premise of least
recently used replacement strategies ... os/vs2 (svs and mvs) was
also creating an ordering based on something other than pure use
... also invalidating the basic premise of least recently used
replacement strategies.

some past posts mentioning the os/vs2 early foray into least
recently used replacement strategy:
http://www.garlic.com/~lynn/94.html#4 Schedulers
http://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
http://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
http://www.garlic.com/~lynn/2001b.html#61 Disks size growing while disk count shrinking = bad performance
http://www.garlic.com/~lynn/2002.html#6 index searching
http://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
http://www.garlic.com/~lynn/2004.html#13 Holee shit!  30 years ago!
http://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
