Mike Tintner wrote:
> Richard: In the same way computer programs are completely
> neutral and can be used to build systems that are either rational or
> irrational. My system is not "rational" in that sense at all.
> Richard,
> Out of interest, rather than pursuing the original argument:
> 1) Who are these programmers/systembuilders who try to create programs
> (and what are the programs/systems) that are either "irrational" or
> "non-rational" (and described as such)?
I'm a little partied out right now, so all I have time for is to
suggest: Hofstadter's group builds all kinds of programs that do things
without logic. Phil Johnson-Laird (and students) used to try to model
reasoning ability using systems that did not do logic. All kinds of
language processing people use various kinds of neural nets: see my
earlier research papers with Gordon Brown et al, as well as folks like
Mark Seidenberg, Kim Plunkett etc. Marslen-Wilson and Tyler used
something called a "Cohort Model" to describe some aspects of language.
I am just dragging up the name of anyone who has ever done any kind of
computer modelling of some aspect of cognition: none of these people use
systems that do any kind of "logical" processing. I could go on
indefinitely; there are probably hundreds of them. They do not try to
build complete systems, of course, just local models.
> When I have proposed (in different threads) that the mind is not
> rationally, algorithmically programmed I have been met with uniform and
> often fierce resistance both on this and another AI forum.
Hey, join the club! You have read about my little brouhaha with Yudkowsky
last year, I presume? A lot of AI people have their heads up their
asses, so yes, they believe that rationality is God.
It does depend on how you put it, though: sometimes you use "rationality"
to mean something other than what they mean, which might explain the
ferocity.
> My argument
> re the philosophy of mind of cog sci & other sciences is of course not
> based on such reactions, but they do confirm my argument. And the
> position you at first appear to be adopting is unique both in my
> experience and my reading.
> 2) How is your system "not rational"? Does it not use algorithms?
It uses "dynamic relaxation" in a "generalized neural net". Too much to
explain in a hurry.
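The general flavor, though, is the familiar relaxation idea from the
connectionist literature: units nudge each other through weighted
constraints until the whole net settles into a mutually consistent state.
Here is a toy Hopfield-style sketch of that generic technique; the
weights, the `settle` function, and everything else in it are invented
for illustration and have nothing to do with my actual design:

```python
# Toy illustration of dynamic relaxation (Hopfield-style settling).
# NOT the real system: weights and update rule are made up for the example.
import random

def settle(weights, state, steps=200, seed=0):
    """Asynchronously update +1/-1 units until the net relaxes into a
    locally consistent (minimum-energy) state."""
    rng = random.Random(seed)
    n = len(state)
    for _ in range(steps):
        i = rng.randrange(n)  # pick one unit at random
        # Summed input from all other units, weighted by the constraints.
        net_input = sum(weights[i][j] * state[j] for j in range(n) if j != i)
        state[i] = 1 if net_input >= 0 else -1
    return state

# Two mutually supportive pairs (0,1) and (2,3), with cross-pair inhibition.
W = [[ 0,  1, -1, -1],
     [ 1,  0, -1, -1],
     [-1, -1,  0,  1],
     [-1, -1,  1,  0]]

final = settle(W, [1, -1, 1, -1])
```

With those weights the only stable patterns are the two in which each
supportive pair agrees and the two pairs disagree, so the net "decides"
between two interpretations by settling rather than by deducing anything.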
> And could you give a specific example or two of the kind of problem that
> it deals with - non-rationally? (BTW I don't think I've seen any
> problem examples for your system anywhere, period - for all I know, it
> could be designed to read children's stories, bomb Iraq, do syllogisms,
> work out your domestic budget, or work out the meaning of life - or play
> and develop in virtual worlds).
I am playing this close to the chest for the time being, but I have released a small
amount of it in a forthcoming neuroscience paper. I'll send it to you
tomorrow if you like, but it does not go into a lot of detail.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email