I failed in that intention, and GC running time does depend on the depth of the data structure, growing roughly as O(n*log n). That is very good. There is something strange, however.
Here is the code:
#####################################################################
record double_list(sublist,superlist)
procedure nothing()
end
procedure main()
ca:=read()                                     # fan-out: number of back-edges per node, read from stdin
write(ca,"=========================")
List_of_lists:=list(ca,double_list([],[]))     # ca seed elements (all refer to the same record)
repeat
{ j:= if /j then ca+1 else j+1                 # index of the node added in this pass
put(List_of_lists, new_double_list:=double_list([],[]) )
every i:=1 to ca do                            # link the new node back to the previous ca nodes
{ old_double_list:=List_of_lists[j-i]
put(new_double_list.sublist, old_double_list)
#put(old_double_list.superlist, new_double_list)
}
if j % 500=0 then #this part measures time and it is less important
{ time:=&time
every x:=1 to 500 do every collect(!3,1)       # 500 forced collections in each of the 3 storage regions
t:=&time-time
time2:=&time
every x:=1 to 500 do every nothing(!3,1) #to eliminate loop time
t2:=t-(&time-time2)
write (j," ",real(t2)/j) #GC time per node
}
}
end
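
# A hedged usage sketch (tool and file names are my assumption, not from
# the original post): translate and run with the standard Icon tools, or
# with 'unicon gctest.icn' under Unicon; ca is read from standard input.
#    icont gctest.icn
#    echo 8 | iconx gctest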
#####################################################################

It is easier to collect a data structure that resembles 'f(n)=f(n-1)+f(n-2)+f(n-3)', i.e. three edges per node, than one like 'f(n)=f(n-1)+f(n-2)', i.e. two edges per node, for the same number of nodes - if the algorithm is intelligent enough - because in the first case there are fewer recursive steps from n down to 0 (see the small sketch after the output below). That really does happen, but in a strange way:
1=========================    #ca is 1 in this run
1000 1.802                    #first number is number of nodes, second is GC time/node
2000 2.128
>Exit code: 128
continues to
8=========================
1000 1.803
2000 1.968                    #even faster, but no more nodes!
>Exit code: 128
and then, suddenly
9=========================
1000 2.003
2000 2.078
....
16000 2.1174375
>Exit code: 128
It stays exactly the same for 9-16 and then jumps again at 17, etc. What is the magic in the number eight here?
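To make the "fewer recursive steps" point above concrete, here is a small back-of-the-envelope sketch. The procedure is mine, not part of the test program, and it assumes an ideal marker that always follows the farthest back-edge (j-ca) first:

#####################################################################
# Hedged sketch, not part of the test program: roughly the minimum
# possible marking depth for the structure built above, assuming an
# ideal marker that always follows the farthest back-edge (j-ca) first.
procedure min_marking_depth(n, ca)
   return (n + ca - 1) / ca   # ceiling of n/ca (Icon integer division truncates)
end
# e.g. min_marking_depth(16000, 2) = 8000, min_marking_depth(16000, 3) = 5334,
# while a marker that always follows the j-1 edge first goes roughly 16000 deep.
#####################################################################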
=================================
One comment: those recursive procedures in the GC shouldn't really be there, right? Recursion is traditionally known to be inefficient, and it is pretty strange that recursion has found its way into such an essential part of the system. One of the two procedures Clint mentioned is in Griswold's implementation book; I didn't succeed in finding them in the Icon source as distributed (maybe Windows 'Find' doesn't work well), but that one really doesn't look like anything that someone familiar with compiling the Icon source code couldn't fix in less than one day.
I suppose the original author only prototyped the GC that way, and that it somehow happened to stay there. Some healthy reserves could be found there.
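For what it's worth, the general technique is standard: a recursive traversal can be rewritten with an explicit work list, so the depth of the structure no longer consumes the call stack. A minimal Icon sketch of the idea, for illustration only - the real marking code is in the C runtime and is not reproduced here, and the procedure below is my own:

#####################################################################
procedure mark_all(root)                 # hedged sketch, not the actual GC
   marked := set()
   work := [root]                        # explicit work list replaces the call stack
   while node := pull(work) do           # put/pull make this a LIFO stack
      if not member(marked, node) then {
         insert(marked, node)
         every put(work, !node.sublist)  # push children instead of recursing
      }
   return marked
end
#####################################################################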
Another idea, as an emergency solution for the problem of the growing stack on Windows, in case there is a large release without the recursion fix: you could produce two versions of the important binaries, one with a moderate stack size, let's say 2 MB, and another with a big one, let's say 32 MB, for particularly demanding programs. It's kind of ugly, and it exposes the weak sides of the language, but it can improve quality to a degree and buy some time. Borland did something like that with Turbo Pascal and 'protected mode' in the early 90's.
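If it helps, on Windows the reserved stack size is a property of the executable, so the two variants would not even need separate compiles. A hedged sketch using the standard Microsoft toolchain (the binary name iconx.exe is my assumption about which executable matters here):

    rem moderate build: 2 MB reserved stack
    editbin /STACK:2097152 iconx.exe
    rem demanding build: 32 MB reserved stack
    editbin /STACK:33554432 iconx.exe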
--------------
Kazimir Majorinc, Zagreb, Croatia
