2008/7/2 Vladimir Nesov <[EMAIL PROTECTED]>:
> On Wed, Jul 2, 2008 at 9:09 PM, William Pearson <[EMAIL PROTECTED]> wrote:
>> They would get less credit from the human supervisor. Let me expand on
>> what I meant about the economic competition. Let us say vmprogram A
>> makes a copy of itself, called A', with some purposeful tweaks, trying
>> to make itself more efficient.
>
> So, this process performs optimization: A has a goal that it tries to
> express in the form of A'. What is the problem with the algorithm that A
> uses? If this algorithm is stupid (in a technical sense), A' is worse
> than A and we can detect that. But this means that, in fact, A' doesn't
> do its job, and all the search pressure comes from program B that ranks
> the performance of A or A'. This
> generate-blindly-or-even-stupidly-and-check is a very inefficient
> algorithm. If, on the other hand, A happens to be a good program, then
> A' has a good chance of being better than A; and since A has some
> understanding of what 'better' means, what then is the role of B? B
> adds almost no additional pressure; almost everything is done by A.
>
> How do you distribute the optimization pressure between generating
> programs (A) and checking programs (B)? Why do you need to do that at
> all? What is the benefit of generating and checking separately,
> compared to reliably generating from the same point (A alone)? If
> generation is not reliable enough, it probably won't be useful as
> optimization pressure anyway.
The point of A and A' is that A', if better, may one day completely replace A. But what counts as "good" here? Is a 1-in-100 chance of making a mistake when generating its successor good? If you want A' to be able to replace A, that is on average only 100 generations before you have made a bad mistake, and then where do you go? You have a bugged program and nothing left to act as a watchdog.

Also, if A' is better than A at time t, there is no guarantee that it will stay that way: changes in the environment might favour one optimisation over another. If they both do things well, but different things, then both A and A' might survive in different niches. I would also be interested in why you think we have both programmers and system testers in the real world.

Also worth noting: most optimisation will be done inside the vmprograms. This process is only for very fundamental code changes, e.g. changing representations, biases, or ways of creating offspring; things that cannot be tested easily any other way. I'm quite happy for it to be slow, because this process is not where the majority of the system's quickness will rest. But this process is needed for intelligence, otherwise you will be stuck with certain ways of doing things even when they are no longer useful.

Will Pearson

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com
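P.S. A minimal sketch of the loop being argued about, in case it helps pin down the terms. Everything here is illustrative and not from the thread: a vmprogram is modelled as a vector of weights, "fitness" as closeness to a target (standing in for the environment), `tweak` is A proposing its successor A', `p_bug` models the 1-in-100 chance of a bad self-modification, and the external check plays the role of B, keeping the old A as the watchdog so a bugged A' cannot silently replace it.

```python
import random

# Hypothetical "environment": a program scores well if its weights are
# close to this target. Changing TARGET mid-run would model the
# environment shifting to favour one optimisation over another.
TARGET = [0.5, -0.2, 0.7]

def fitness(program, target=TARGET):
    """Checker B: score a program against the current environment."""
    return -sum((w - t) ** 2 for w, t in zip(program, target))

def tweak(program, rate=0.1):
    """A proposing A': a small, purposeful-looking tweak to itself."""
    return [w + random.uniform(-rate, rate) for w in program]

def evolve(program, generations=100, p_bug=0.01):
    """Generate-and-check with the old version kept as watchdog.

    p_bug is the assumed '1 in 100' chance that a successor is broken
    in a way the generator itself cannot detect. The external check
    catches the regression and rolls back, instead of letting a bugged
    A' replace A with nothing left to watch over it.
    """
    for _ in range(generations):
        candidate = tweak(program)
        if random.random() < p_bug:
            candidate = [0.0] * len(candidate)  # a 'bad mistake'
        # B's check: A' only replaces A if it actually scores better.
        if fitness(candidate) > fitness(program):
            program = candidate
    return program

random.seed(0)
a = [0.0, 0.0, 0.0]
a_final = evolve(a)
# The check makes replacement monotone: fitness never decreases.
assert fitness(a_final) >= fitness(a)
```

The point of the sketch is the last `if`: remove it and a single bugged candidate overwrites the lineage permanently, which is the 100-generations argument above in code form.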
