Why hasn't AGI progressed further? (Why hasn't semi-strong AI
progressed further than it has?) The reason, in my opinion, is that
there is something about being able to figure things out that is
essential to intelligence, and without it incremental improvements on
a viable AGI strategy are going to be either intractable or shunted
aside, because a programmer who is so familiar with narrow methods
will tend to be over-reliant on them.

I want to make a simple semi-strong AI program. As the data relations
increase, complexity is going to become more and more of a problem,
so I expect that my would-be AGI program would become bogged down
before it became very strong. I want to try to use indexing as a way
to alleviate some of that complexity. However, as the indexing
becomes heavier it is going to increasingly become part of the
problem itself. But I think that new ways of looking at indexing may
be useful; I have just started thinking about that again.
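
For instance, here is a toy sketch of the kind of indexing I have in
mind (the class and names are invented just for illustration, not my
actual design): facts are stored as triples, and an inverted index
maps each term to the facts that mention it, so a lookup only touches
candidate facts instead of scanning the whole store.

    # Toy inverted index over fact triples; names are hypothetical.
    from collections import defaultdict

    class FactIndex:
        def __init__(self):
            self.facts = []                  # (subject, relation, object)
            self.by_term = defaultdict(set)  # term -> positions of facts

        def add(self, subject, relation, obj):
            pos = len(self.facts)
            self.facts.append((subject, relation, obj))
            for term in (subject, relation, obj):
                self.by_term[term].add(pos)

        def lookup(self, *terms):
            # Intersect posting sets so a query touches only candidates.
            sets = [self.by_term[t] for t in terms]
            hits = set.intersection(*sets) if sets else set()
            return [self.facts[i] for i in sorted(hits)]

    idx = FactIndex()
    idx.add("humidity", "affects", "comfort")
    idx.add("temperature", "affects", "comfort")
    print(idx.lookup("affects", "comfort"))  # both facts, via the index

Of course, the sketch also shows where the trouble starts: every new
fact makes the posting sets bigger, so the index itself grows into
the thing that has to be managed.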

So what I am saying is that you and Aaron have convinced me that I
should get my program to the next stage so I can start testing some
of these ideas (I am almost there), even though I do not have enough
of the answers to semi-strong AI or AI-complexity. On the other hand,
primitive solutions to the problem of getting the AI program to be
able to figure things out are, in my opinion, absolutely necessary
even for the first steps of the development of semi-strong AI or a
prototype AGI.

Imagine the program has developed a primitive ability to 'understand'
something from simple English sentences. As the program accumulates
more and more details, and as the indexing becomes more and more
complicated (to handle all those details), it will become more and
more difficult for the program to make any sense out of the simple
forms of sentences that it had previously been able to 'understand'.
This is one of the problems that I am interested in.
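
A crude way to see it: suppose every word is indexed to every stored
detail that mentions it, and a 'reading' of a sentence is some choice
of one detail per word. Early on a simple sentence has a handful of
candidate readings; after enough details accumulate it has millions,
and the program has no cheap way to choose among them. A toy
illustration (all data and numbers made up):

    # Toy illustration: candidate readings of a simple sentence
    # explode as indexed details accumulate. Data is invented.
    from collections import defaultdict

    index = defaultdict(list)   # word -> ids of details mentioning it

    def learn(detail_id, words):
        for w in words:
            index[w].append(detail_id)

    def candidate_readings(sentence):
        # Naive count: one stored detail per word, in any combination.
        n = 1
        for w in sentence.split():
            n *= max(1, len(index[w]))
        return n

    learn(0, ["fan", "cools"])
    learn(1, ["room", "cools"])
    print(candidate_readings("fan cools room"))   # 2 readings at first

    for i in range(2, 200):                       # accumulate details
        learn(i, ["fan", "cools", "room"])
    print(candidate_readings("fan cools room"))   # millions of readings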

So look at your thermostat problem. How would the program become
aware of the relationship between humidity, temperature, and human
comfort? We would have to explain it to the program. Only as long as
the program could detect nothing but humidity and temperature, or had
a well-defined way to sense that its humidity-sensor values were a
measure of what we referred to as humidity, could it begin to
question what the relationship between humidity, temperature, and
comfort is. If the AGI program had millions of bits of sensor data
streaming into it (like a video feed) it would be very difficult for
it to guess which sensor reading was humidity based on our comments.
And remember that our activity and our exposure to other heat sources
(like direct sunlight) also change the relationship values.
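
In the narrow case the program could, in principle, find the humidity
channel by checking which sensor stream best tracks the occasions
when we commented on mugginess. A crude sketch of that narrow case
(synthetic data, invented names); the point is that this only works
with three channels, and breaks down with millions of noisy ones and
confounders like sunlight and activity:

    # Crude sketch: pick the channel that best tracks a reference
    # signal (moments when a comment mentioned humidity). Synthetic.
    import random, statistics

    def correlation(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    random.seed(0)
    humidity = [random.uniform(30, 90) for _ in range(100)]
    channels = {
        "sensor_a": [random.uniform(0, 1) for _ in range(100)],    # noise
        "sensor_b": [h + random.gauss(0, 3) for h in humidity],    # humidity
        "sensor_c": [random.uniform(15, 30) for _ in range(100)],  # temperature
    }
    # Reference: people complained about mugginess when humidity was high.
    complaints = [1.0 if h > 70 else 0.0 for h in humidity]

    best = max(channels, key=lambda k: correlation(channels[k], complaints))
    print(best)   # should pick "sensor_b"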

To me it seems obvious that complexity is the problem.


Jim Bromer


On Sat, Dec 12, 2015 at 11:11 PM, Stanley Nilsen <[email protected]> wrote:
> Hi Jim,
> I realized that I didn't effectively make my point about "strategy to
> evaluate."  This is in reference to the comment:
>
> But the program has to be able to develop its
> own strategies to 'evaluate' some things because that is a good
> strategy for a computer program to use - in some cases. And the
> usefulness of logical 'evaluation' implies that some strategy for
> evaluating conceptual relationships other than simple numerical
> methods would also be a good strategy to use. But this would be
> complicated.
>
> ------
> My real point was meant to be that, as you pointed out, it is very
> complicated to evaluate conceptual relationships, so there may be an
> alternative way to "grow" the complexity that can take on such a
> thing.  I see the alternative strategy as one of starting with
> specifics and later generalizing as one obtains rules of
> generalization.
>
> For example, imagine a system that is given a specific rule that
> says: when the temperature is 87 degrees, start the fan.  Okay, the
> system has a choice - not much of one, but it's smarter than never or
> always having the fan on.  Later the code gets a bit more complex by
> adding a humidity detector (a new part of the system, adopted because
> someone thought "seeing" humidity provides a benefit...).  Now we can
> rewrite our "turn on the fan" rule to better control the fan - an
> energy-saving benefit.  We use both temperature and humidity in a
> simple formula to determine whether it is "sweltering hot" or just
> hot.  Every few minutes we run an "evaluate conditions" routine which
> might have nice math that uses a temperature range and a humidity
> range.  Now our rule can be more general and say "if it's sweltering
> hot then turn on the fan."  In this simple way we start to generalize
> and enhance the system (replacing 87 degrees with the generalization
> "sweltering hot").
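>
> In rough code the move is from a hard-coded constant to a named,
> reusable condition (the formula and thresholds below are made up,
> just a sketch of the idea):
>
>     # Specific rule: a hard-coded 87 degrees.
>     def specific_rule(temp_f):
>         return temp_f >= 87
>
>     # Generalized rule: a named condition computed from both readings.
>     def sweltering_hot(temp_f, humidity_pct):
>         # Toy stand-in for an "evaluate conditions" formula: humid
>         # air feels hotter, so humidity raises the effective heat.
>         feels_like = temp_f + 0.1 * max(0, humidity_pct - 40)
>         return feels_like >= 87
>
>     def fan_on(temp_f, humidity_pct):
>         return sweltering_hot(temp_f, humidity_pct)
>
>     print(fan_on(84, 80))   # True: humid 84 feels sweltering
>     print(fan_on(84, 20))   # False: dry 84 is just hot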
>
> Fans are not that interesting, but the point is, we don't need a
> system that knows all about evaporation, moving air, and water
> vapor...  It simply picks up new features that make it more versatile
> and eventually more capable of adding benefit in the corner where it
> sits.  Stretch the imagination to see a system where eventually the
> unit has lots of rules about what constitutes benefit and has
> converted specifics into a more general interpretation.
>
> Point is, it's really hard to "develop a strategy" for evaluation,
> but it may be less hard to get a crude system going and watch the
> evaluation develop as the system is enhanced.
>
> Stan
>
> On 12/06/2015 09:17 AM, Jim Bromer wrote:
>>
>> You might be able to think of ways to benefit the poor, but you
>> would have a lot of trouble implementing them. You might be able to
>> help a few people, but if you are like most of the rest of us, that
>> would be it.
>>
>> So you think that there are a lot of opportunities to use basic
>> implementation strategies to get the AI/AGI program to do something
>> that would be beneficial in some way? And the only problem that you
>> foresee is the coding? But why would that be difficult? For example,
>> I think that I could develop a prototype of an AGI program using
>> text only. If you start with something like that then it is simple
>> to get started, because you can find code that contains the basic
>> forms for text IO. The problem that I am having is that even when I
>> strip the plan down to what I think would be a minimum for a simple
>> database management program (of my own design), it still cannot be
>> done in the little time I have to code, and without any reason to
>> believe that I could get past something that would not work too
>> well, I don't have much commitment to getting going on it.
>>
>> You said:
>> "Values (rules about values) come into play as the AGI picks the next
>> thing to do.  But, we already know that early AGI doesn't have a
>> "values" structure to refer to.  To program one is really not much of
>> an option - it is too complex to "calculate" what the value of
>> something is.  To test the validity of my statement that it is too
>> complex to calculate, try it. Imagine that you are writing this into
>> code!"
>>
>> I have tried to imagine writing that into code! (Why wouldn't I have
>> tried to imagine that?) But the program has to be able to develop its
>> own strategies to 'evaluate' some things because that is a good
>> strategy for a computer program to use - in some cases. And the
>> usefulness of logical 'evaluation' implies that some strategy for
>> evaluating conceptual relationships other than simple numerical
>> methods would also be a good strategy to use. But this would be
>> complicated. I think the opportunities that you mentioned would be
>> difficult to code as well - if you wanted to avoid getting bogged down
>> in code that is good for narrow-AI. The problem is that once you make
>> the commitment to do something that is effectively narrow-AI then
>> there are all sorts of enticing shortcuts that become available but
>> that you really need to keep to a minimum.
>>
>> Starting with a text-only program that can only act on the simple
>> 'opportunities' (or 'low-hanging fruit') of text (and conversation,
>> of course) is where I would begin. But it should be clear that I
>> don't want to take all the shortcuts that that sort of situation
>> would offer. So I want my program to 'look' for opportunities on its
>> own, so to speak. It may not be possible for a program to do that at
>> a very sophisticated level from our point of view, but we know that
>> computer programs are good at some things that we are not so good
>> at. So, my point of view is that the program should be able to pick
>> up all sorts of patterns (opportunities) that we would miss, so that
>> is where I want to start. Having thought about that, I concluded
>> that it would have to be looking at the recombination of all sorts
>> of odd kinds of data in order to find the few combinations that
>> might be useful.
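>>
>> To sketch what I mean (data and scoring invented for illustration),
>> the program could enumerate pairs of stored fields and keep only the
>> pairings whose values co-occur more tightly than the rest:
>>
>>     # Crude sketch: recombine odd pairs of fields and score how
>>     # tightly their values co-occur. Data and measure are made up.
>>     from itertools import combinations
>>
>>     records = [
>>         {"hour": 9,  "topic": "weather", "mood": "calm"},
>>         {"hour": 9,  "topic": "weather", "mood": "calm"},
>>         {"hour": 9,  "topic": "sports",  "mood": "calm"},
>>         {"hour": 21, "topic": "sports",  "mood": "excited"},
>>         {"hour": 21, "topic": "sports",  "mood": "excited"},
>>     ]
>>
>>     def pair_score(f1, f2):
>>         # Fraction of records covered by the most common value pair:
>>         # a cheap stand-in for a real association measure.
>>         pairs = [(r[f1], r[f2]) for r in records]
>>         return max(pairs.count(p) for p in set(pairs)) / len(pairs)
>>
>>     for f1, f2 in combinations(["hour", "topic", "mood"], 2):
>>         print(f1, f2, pair_score(f1, f2))  # high = candidate pattern
>>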
>> Jim Bromer
>>
>>
>> On Fri, Dec 4, 2015 at 5:27 PM, Stanley Nilsen <[email protected]>
>> wrote:
>>>
>>> On 12/04/2015 11:24 AM, Jim Bromer wrote:
>>>>
>>>> If meta-data can be used to invoke rules, and rules (systems of rules
>>>> and conditional data) can be learned or acquired (perhaps implicitly)
>>>> then the program would have to have a way to govern the actions the
>>>> program might take. One way might be through the use of goals. But I
>>>> would want my program to be able to derive or develop some of its own
>>>> goals.
>>>
>>> The problem I see with goals is the way we tend to think of them. We
>>> humans
>>> set goals, change goals and dream of goals without knowing much about how
>>> we
>>> will make the goal happen.  We acquire ideas about reaching the goal and
>>> eventually take steps related to the goal. Fine, but we also have already
>>> developed strategies for pursuit. The AGI unit is far from developing
>>> much
>>> of anything, let alone a general strategy for reaching goals.
>>>
>>> In my thinking about AGI I rarely use the term goal, but rather think of
>>> governing the actions in terms of benefit.  Benefit ties things together
>>> for
>>> me.   If you lived in a country with really poor people, you would have
>>> very
>>> little trouble coming up with ways to benefit those poor.  And so it
>>> might
>>> be with the fledgling AGI.  The wannabe AGI is "functionality" poor, and
>>> needs to have more methods to increase the chance that it will be able to
>>> do
>>> something beneficial.   The AGI is a long way from having a world concept
>>> that allows it to assess what is beneficial to others.
>>>
>>> Values (rules about values) come into play as the AGI picks the next
>>> thing
>>> to do.  But, we already know that early AGI doesn't have a "values"
>>> structure to refer to.  To program one is really not much of an option -
>>> it
>>> is too complex to "calculate" what the value of something is.  To test
>>> the
>>> validity of my statement that it is too complex to calculate, try it.
>>> Imagine that you are writing this into code!
>>>
>>> What's the alternative to calculating a value factor?   Adoption (my
>>> preferred term).
>>>
>>> What I mean by Adoption is the acquiring of a "behavior" that the
>>> AGI could perform and, along with instructions to implement the
>>> behavior, also acquiring the data that tells the AGI when and
>>> whether it matters. When is this behavior to be used? What is the
>>> combination of triggers? And how significant is this behavior in
>>> terms of priority to be executed?
>>>
>>> In my design concept, this package of information is referred to as
>>> the "opportunity."  I like the term opportunity because we relate to
>>> it as human beings.  People can share opportunities with each other.
>>> In describing an opportunity we offer the "when": when can this be
>>> done? And we are given a rough idea of why this is considered
>>> important, or at least given a recommendation.  It is the
>>> recommendation that is of value to us if we ever come to a situation
>>> where the opportunity is an option for the moment.
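>>>
>>> As a data structure, the package need not be anything fancier than
>>> this (field names invented for the sketch):
>>>
>>>     # Toy "opportunity" record: a behavior plus the data saying
>>>     # when it applies and how strong the recommendation is.
>>>     from dataclasses import dataclass
>>>     from typing import Callable, Dict
>>>
>>>     @dataclass
>>>     class Opportunity:
>>>         name: str
>>>         trigger: Callable[[Dict], bool]   # when is this applicable?
>>>         behavior: Callable[[], None]      # what to do
>>>         priority: float                   # the recommendation
>>>
>>>     def pick_next(opportunities, state):
>>>         live = [o for o in opportunities if o.trigger(state)]
>>>         return max(live, key=lambda o: o.priority, default=None)
>>>
>>>     ops = [
>>>         Opportunity("fan_on", lambda s: s["temp"] >= 87,
>>>                     lambda: print("fan on"), priority=0.7),
>>>         Opportunity("greet", lambda s: s["someone_here"],
>>>                     lambda: print("hello"), priority=0.3),
>>>     ]
>>>     chosen = pick_next(ops, {"temp": 90, "someone_here": True})
>>>     if chosen:
>>>         chosen.behavior()   # prints "fan on"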
>>>
>>> If the AGI had a large database of opportunity available to it, wouldn't
>>> that be smart!  It could probably produce some benefit.
>>>
>>> Stan
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>

