On 14/01/2008, Pei Wang <[EMAIL PROTECTED]> wrote:
> 2008/1/14 William Pearson <[EMAIL PROTECTED]>:
>
> > I would define the form of the functions that it is possible to be
> > interested in as:
> >
> > S_t = F(S_{t-1}, P)
> >
> > That is, the current state matters to what change is made to the
> > state. For example, a man coming across the percept "Oui, bien sûr,"
> > would change his state in a different way depending on whether he
> > was already fluent in French or not.
> >
> > This doesn't really change the rest of your argument, but I feel it is
> > important.
>
> That is correct for all deterministic systems, like a Turing Machine.
> However, I really don't like to describe the internal situations of a
> system (or the external situation of its environment) using "state".
> Though it is the common practice, this notion implies that the
> description is complete and precise, which is often impossible. In
> this paper, you can see that I only mentioned "state" in the first
> category (Structure-AI), and left it out for the other categories,
> even though for those we could still discuss their states, as you
> suggested.

Well, to a certain extent I have the same opinion on "actions" as you
do on "states". I consider any effect the computer system has on the
world to be an action, so radiation, heating up, and using up energy
whilst computing are all actions in my view. I don't pretend to have a
complete and precise description of all the possible actions either.
Percepts are similar, encompassing bit flips and other errors to the
system (actions of the environment upon the system). I have long since
given up trying to fully define all three, but I recognise their
general usefulness in discussing systems.

For details: http://codesoup.sourceforge.net/easa.pdf
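The state-update idea above, S_t = F(S_{t-1}, P), can be sketched in a few lines of code. This is only a toy illustration of the point about the French speaker (the `step` function and the state fields are my own invented names, not anything from the thread or the paper): the same percept produces a different state change depending on the prior state.

```python
# Toy sketch of S_t = F(S_{t-1}, P): the new state depends on both the
# old state and the percept. Names here are illustrative only.

def step(state, percept):
    """F: map (previous state, percept) -> next state."""
    if percept == "Oui, bien sûr," and state.get("fluent_french"):
        # A fluent speaker's state changes by understanding the percept.
        return {**state, "understood": True}
    # A non-speaker receives the same percept but ends in a different state.
    return {**state, "understood": False}

fluent = {"fluent_french": True}
novice = {"fluent_french": False}

print(step(fluent, "Oui, bien sûr,"))  # understood: True
print(step(novice, "Oui, bien sûr,"))  # understood: False
```

The same F applied to the same percept yields different transitions purely because S_{t-1} differs, which is the dependence on prior state being argued for.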

>
> No, that is not the kind of situation I'm talking about. At the
> current stage, I'm not really trying to propose a quantitative
> measurement for intelligence or the similarity between systems.
> Instead, I'm looking for qualitative difference among working
> definitions of intelligence. I just have to assume that it is
> meaningful to talk about the similarity between systems in several
> aspects, and that will be enough for the conclusion of the paper.

Which is why I warned you I was being pedantic. I expect my approach
to AI is very principled, just possibly not under the narrow
definition of "principled" you gave.
  Will

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=85954122-be2542
