On 6/21/08, I wrote:
The major problem I have is that writing a really really complicated computer
program is really really difficult.
----------------------------------
Steve Richfield replied:
Jim,
The ONLY rational approach to this (that I know of) is to construct an "engine"
that develops and applies machine knowledge, wisdom, or whatever, and NOT write
code yourself that actually deals with articles of knowledge/wisdom.
---------------------------------
I agree with that (assuming that I understand what you meant).
----------------------------------
Steve wrote:
REALLY complex systems may require multi-level interpreters, where a low-level
interpreter provides a pseudo-machine on which to program a really smart
high-level interpreter, on which you program your AGI. In ~1970 I wrote an
ALGOL/FORTRAN/BASIC compiler that ran in just 16K bytes this way. At the bottom
was a pseudo-computer whose primitives were fundamental to compiling. That
pseudo-machine was then fed a program to read BNF and make compilers, which was
then fed a BNF description of my compiler, with the output being my compiler in
pseudo-machine code. One feature of this approach is that for anything to work,
everything had to work, so once past initial debugging, it worked perfectly!
Contrast this with "modern" methods that consume megabytes and never work quite
right.
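The layered scheme Steve describes can be sketched in miniature. The code below is a hypothetical illustration, not his actual 1970 system: a tiny stack-based pseudo-machine whose small set of primitives could serve as the bottom layer, with higher-level "compilers" expressed purely as data fed to the same engine.

```python
# A minimal stack-based pseudo-machine (illustrative sketch only).
# A program is a list of (opcode, argument) pairs; higher-level tools
# are themselves just programs run on this engine.
def run(program, input_tokens):
    stack, out, pc = [], [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "push":        # push a literal onto the stack
            stack.append(arg)
        elif op == "read":      # consume the next input token
            stack.append(input_tokens.pop(0))
        elif op == "add":       # add the top two stack values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "emit":      # emit top of stack as "output code"
            out.append(stack.pop())
        pc += 1
    return out

# A trivial "translator" written for the pseudo-machine:
# read a token, add 1, emit the result.
prog = [("read", None), ("push", 1), ("add", None), ("emit", None)]
print(run(prog, [41]))  # -> [42]
```

The point of the layering is the one Steve makes: the engine's primitives are debugged once, and everything above them is data, so a fault anywhere tends to surface immediately.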
----------------------------------
A compiler may be a useful tool to use in an advanced AI program (just as we
all use compilers in our programming), but I don't feel that a compiler is a
good basis for or a good metaphor for advanced AI.
----------------------------------
Steve wrote:
The more complex the software, the better the design must be, and the more
protected the execution must be. You can NEVER anticipate everything that might
go into a program, so they must fail ever so softly.
Much of what I have been challenging others on this forum about came out of the
analysis and design of Dr. Eliza. The real world definitely has some
interesting structure, e.g. the figure 6 shape of cause-and-effect chains, and
the fact that problems are a phenomenon that exists behind people's eyeballs
and NOT otherwise in the real world. Ignoring such things, "diving in," and
hoping that machine intelligence will resolve all (as many/most here seem to
believe) is IMHO a rookie error that leads nowhere useful.
Steve Richfield
-----------------------------------
I don't think that most people in this group think that machine intelligence
will resolve all the remaining problems in designing artificial intelligence,
although I have talked to people who feel that way, and the lack of discussion
about resolving some of the complexity issues does seem curious to me. Where
are they coming from? I don't know. I think most people feel that once they
get their basic programs working, they will be able to figure out the rest on
the fly. This method hasn't worked yet, but as I mentioned I do think
it has something to do with the difficulty of writing complicated computer
programs. I know that you are one of the outspoken critics of faith-based
programming, so at least there is some consistency in your comments. I mention
this because I (seriously) believe that the Lord may have indicated that my
algorithm to solve the logical satisfiability problem will work, and if this
is true, then the algorithm may help resolve some lesser logical complexity
problems.
Although we cannot use pure logic to represent knowable knowledge, I can use
logic to represent theory-like relations between references to knowable
components of knowledge. (By the way, please note that I did not claim that I
presently have a polynomial time solution to SAT, and I did not say that I was
absolutely certain that God pronounced my SAT algorithm to be workable. I have
carefully qualified my statements about this. I would also suggest that you
think about the fact that we have to use different kinds of reasoning with
different kinds of questions. Regardless of your own beliefs, the topic of the
necessity of using different kinds of reasoning for different kinds of
questions is very relevant to discussions about advanced AI.)
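For readers unfamiliar with the satisfiability problem mentioned above, a brute-force checker makes the problem statement concrete. This is purely illustrative: it is exponential in the number of variables, and it is not Jim's algorithm, which is not described here.

```python
from itertools import product

# Brute-force SAT check (illustrative only, exponential time).
# A formula is a list of clauses; a clause is a list of literals;
# a literal is (variable_name, polarity), with polarity False meaning NOT.
def satisfiable(clauses, variables):
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # The formula holds if every clause has at least one true literal.
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return assignment
    return None  # no satisfying assignment exists

# (x OR y) AND (NOT x OR y) is satisfied by setting y = True.
print(satisfiable([[("x", True), ("y", True)],
                   [("x", False), ("y", True)]], ["x", "y"]))
```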
What do you mean by the figure 6 shape of cause-and-effect chains? It must
refer to some kind of feedback-like effect.
Jim Bromer
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/