I thought your comments about using logic were interesting. To the best of
my understanding and memory, you started by saying that being an
'element-of' is not the same as being a 'subset-of', and (I think you were
saying) that 'equals' is different from 'equivalence'. It is very difficult
to express how they vary using logic, even though they are obviously
related. So these basic relations are simultaneously very different and
strongly similar. And I think the fact that such relations might be
essential to an application of logic implies that AI-logic might be too
difficult to work with.
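For what it's worth, the difference is easy to show on small concrete sets, even though stating the general relationship in logic is much harder. A quick Python sketch (my own illustration, not anything from your argument):

```python
# 'element-of' (in) and 'subset-of' (<=) behave differently on the same objects.
A = frozenset({1, 2})   # frozenset so A can also appear as a member of a set
B = {1, 2, 3}

print(A <= B)           # subset-of: True, every element of A is in B
print(A in B)           # element-of: False, the set A is not itself a member of B

# Yet the two relations are tightly linked:
# x is an element of B exactly when the singleton {x} is a subset of B.
print(1 in B, {1} <= B)

# And a set that IS a member need not be a subset:
C = {A, 3}              # C contains the whole set A as a single element
print(A in C)           # True: A is a member of C
print(A <= C)           # False: 1 and 2 are not elements of C
```

So the two relations can disagree in both directions on the very same objects, which is exactly the kind of "very different yet strongly similar" behavior you described.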

I think that discrete, logic-like AI looked like it held a lot of promise
because it seemed you could start out with some simple ideas and then
build on them. The initial ideas did not have to be extremely elementary,
because you could attach limited meanings to the ideas you used and still
build on them as you went. Your experience, however, suggests that this kind
of reasoning is illusory because applied logic quickly becomes too
complicated. Someone might try using statistics or discrete fuzzy reasoning,
but these strategies have also turned out to be too weak. Some ideas (or
idea-like artificial mentations) just need to be strong and precisely
bound. Although we can claim that a logic-like system could be
representationally strong enough, what you found was that the effort was far
too complicated. So you want to use neural networks and machine learning.
That makes sense, but I think there is another way to deal with this problem.

Although the essence of the similarities and differences between
'being-an-element-of' and 'being-a-subset-of' may be nearly indescribable
for you or me or 99% of the rest of humanity, isn't it really similar to
the problem of ambiguity, or of finding attachments between words in a
sentence (like anaphors) and other words or ideas?
(Anaphoric-like connections are too subtle for hard-edged automated Boolean
logic, and it takes a lot of work for us to analyze them even when we start
to identify them.)

I use logic in my thinking almost all of the time. But it is not a single,
totally integrated system of logic. On the other hand, it is not just
thousands of totally independent logical statements either. So if I have
thought about something carefully (or learned about something through a lot
of experience), I can somehow define logical relationships that are well
integrated with other issues relevant to the thought. And although this
logical thinking is not totally integrated, I can usually find numerous
overlaps across subject or sub-subject domains.

I think the problem might be (at least partially) resolved through usage
patterns. Most people are not writers, and most writers are not technical
writers, so it is difficult to describe perfectly the differences and
similarities of provocative abstract features of intelligence. And
therefore it is also difficult to program them in as fundamental operators.
I may have misunderstood you, but regardless, the features 'is an element
of' and 'is a subset of' are labels, categorical operations, and relations.
And they are other things and processes as well. It is easier to say
something like that than it is to describe them logically or to make sure
that their active definitions work perfectly every time. So rather than
trying to divine them as underlying principles, I would start out by trying
to associate their usage with actual cases, and then see if I could get the
program to develop abstractions about what is common and what is
different in the usages. Then, using that information, the program could try
to make intelligent guesses about other similar cases that it would have to
deal with in the future.
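To make the proposal concrete, here is a toy sketch of what I mean. Every name, feature, and case below is hypothetical, just an illustration of the shape of the idea: record actual usages of two relations as sets of observed features, abstract what is common to each relation's usages, and then guess about a new case from the overlap with those abstractions.

```python
# Toy sketch of usage-based abstraction (all features are hypothetical).
# Each recorded usage of a relation is a set of features observed in that case.
usages = {
    "element-of": [
        {"left-is-individual", "right-is-collection", "membership-test"},
        {"left-is-individual", "right-is-collection", "counting-context"},
    ],
    "subset-of": [
        {"left-is-collection", "right-is-collection", "containment-test"},
        {"left-is-collection", "right-is-collection", "comparison-context"},
    ],
}

def abstraction(cases):
    """Features common to every recorded usage of one relation."""
    common = set(cases[0])
    for case in cases[1:]:
        common &= case        # keep only what appears in all usages
    return common

abstractions = {rel: abstraction(cases) for rel, cases in usages.items()}

def guess(new_case):
    """Pick the relation whose abstraction overlaps the new usage most."""
    return max(abstractions, key=lambda rel: len(abstractions[rel] & new_case))

# A new, unlabeled usage where both sides are collections:
print(guess({"left-is-collection", "right-is-collection", "novel-context"}))
```

The intersection step is the "what is common" abstraction, and the differences between the two abstractions (individual vs. collection on the left) are exactly what separates the relations in practice, without anyone having to state the underlying principle in advance.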

So, understanding that the line between a label and an operator is not
always absolute, and that some operational rules, including fundamental
rules, have to be learned through usage, I think that someone might be able
to figure out a way to develop a mostly discrete AI program that can start
off with extreme simplifications of sophisticated ideas and build on them.

And one other thing. Usage-based learning (that can also look for
abstractions of similarities and differences) will tend to build
distributed systems of (mostly) discrete knowledge. Distributed systems of
strongly related discrete knowledge can be too complicated to manage, but if
the management of something like that proves to be feasible, then the
distribution of related knowledge should tend to strengthen the knowledge
base of the AI program. I don't know if I can effectively use these ideas
in an actual AI program, but I think that I have some well-founded ideas
to start with.

Jim Bromer

On Thu, Feb 25, 2016 at 1:23 PM, YKY (Yan King Yin, 甄景贤) <
[email protected]> wrote:

> This 8-minute video (now with Chinese and English subtitles) explains my
> latest AGI theory:
>
> https://www.youtube.com/watch?v=c9HWcYd36E8
>
> The main idea is



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424