One other thing that I started thinking about earlier. Suppose you
have a Bayesian method that finds the most likely responses to a
multiple-choice event. If the likelihoods of the range of responses
(or event-actions) were constantly changing, the pure Bayesian method
could not reliably predict the more likely responses. Interestingly,
it could not predict them even if the likelihoods were changing in a
periodic manner. (That would require an additional abstraction of
analysis.) And if the number of possible event-actions were great
enough, the possible orderings could quickly go out of range. So the
complexity argument definitely applies to Bayesian methods as well.
(That is not what I was thinking about before; I lost the previous
thought as I thought about this one. But it was in the same
ballpark.)
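
Here is a minimal sketch of that failure mode. The whole setup, and
every name in it, is mine and purely illustrative: a plain
Dirichlet-multinomial estimator watches a three-choice event whose
likelihoods drift periodically. Its accumulated posterior settles on
the time-average of the cycle, so it keeps backing one response even
while the actually most likely response swaps every half period;
tracking that would take exactly the extra abstraction of analysis
mentioned above (e.g., modeling the period itself).

import math, random

random.seed(0)
K = 3                        # number of possible event-actions
counts = [1.0] * K           # Dirichlet(1, 1, 1) prior pseudo-counts

def true_probs(t, period=50):
    """Likelihoods over the K responses, drifting periodically."""
    phase = math.sin(2 * math.pi * t / period)
    w = [1 + phase, 1 - phase, 0.5]
    return [x / sum(w) for x in w]

hits, steps = 0, 1000
for t in range(steps):
    p = true_probs(t)
    # The pure Bayesian prediction: mode of the accumulated posterior.
    guess = max(range(K), key=lambda k: counts[k])
    hits += guess == max(range(K), key=lambda k: p[k])
    # Observe one response drawn from the current, drifting likelihoods.
    counts[random.choices(range(K), weights=p)[0]] += 1

print(f"matched the currently most likely response {hits / steps:.0%} "
      "of the time")

On a run like this the match rate hovers near chance between the two
alternating responses, which is the point.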

And the mush-mellow methods, like neural nets, excessive reliance on
logic, and excessive reliance on weighted reasoning, do not impress
me. I am surprised by how much researchers have gotten out of them. I
mean, wow!

If I am not a logic-based AGI yahoo, then why am I so interested in
SAT in P? Because the advance would be so powerful that it could be
used to advance a range of complex computational methods, including
AGI.
Jim Bromer


On Wed, Dec 2, 2015 at 1:43 PM, Jim Bromer <[email protected]> wrote:
> Stanley,
> Thank you for asking me these questions! I have thought about them. I
> want to respond over a few brief replies. The baby does not learn
> language by trying every possibility, and a baby does need good
> sources to act as a guide in learning. (I mean I agree with those
> views.) But how do you get that in a computer program? One of the
> most significant models of knowledge (really the only one that I can
> imagine as viable) is a component model of knowledge. These component
> parts can be used in 'generating' different outputs. So we do not
> need to learn a separate name and symbol for every number; we can use
> 'components' to both represent them and to name them, up to our
> limits of knowledge. We only need to remember maybe a hundred
> different labels and how to attach them, and we can thereby represent
> and talk about (samples of) octodecillions of numbers (if you can
> find the right page on Wikipedia to get the names of large numbers).
> But you still have to follow certain rules when using one of those
> numbers. These rules are themselves expressed in the n-ary system as
> compressions that are convenient to use with many large numbers. (It
> is not convenient for us to do calculations on numbers in the
> octodecillion range, but it is convenient for a computer to do
> calculations on numbers that large.)
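>
> Here is a minimal sketch of that component idea (the code and its
> names are mine, purely for illustration, and only the first few scale
> words are spelled out): a few dozen reusable labels plus two
> composition rules suffice to name astronomically many numbers.
>
> ONES = ("zero one two three four five six seven eight nine ten "
>         "eleven twelve thirteen fourteen fifteen sixteen seventeen "
>         "eighteen nineteen").split()
> TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()
> # Extend this list through " octodecillion" for the full effect.
> SCALES = ["", " thousand", " million", " billion", " trillion"]
>
> def name_under_1000(n):
>     parts = []
>     if n >= 100:                  # rule 1: attach a hundreds label
>         parts.append(ONES[n // 100] + " hundred")
>         n %= 100
>     if n >= 20:                   # rule 2: attach a tens label
>         parts.append(TENS[n // 10 - 2])
>         n %= 10
>     if n:
>         parts.append(ONES[n])
>     return " ".join(parts)
>
> def name(n):
>     if n == 0:
>         return "zero"
>     groups, out = [], []
>     while n:                      # split into base-1000 'components'
>         groups.append(n % 1000)
>         n //= 1000
>     for i, g in enumerate(groups):
>         if g:
>             out.append(name_under_1000(g) + SCALES[i])
>     return " ".join(reversed(out))
>
> print(name(1234567))
> # one million two hundred thirty four thousand five hundred sixty seven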
>
> Some methods, like neural networks, tend to combine the components
> 'of knowledge', but the successes in the field seem to reflect
> somewhat narrow AI applications. In other words, the more interesting
> uses in AI are the ones where they are treated more like component
> methods. My argument against them is that important knowledge
> (knowledge-stuff) is lost when you methodically trash or mush the
> component representations of that knowledge.
>
> However, there is no evidence (as of yet) that general knowledge can
> be broken down into a manageably small number of components, as has
> been done with the n-ary representations of numbers. So even though
> our would-be AGI programs would use component models of knowledge,
> they would still have to be able to go through heaps of stuff in
> order to find the best responses for a given situation.
>
> In a simple world model an AGI program could do deep searches without
> any problem. But as the world model becomes more and more nuanced,
> the deep searches would *themselves* become nuanced, just because
> they too are based on component models. The problem is that there are
> no conveniently compressed component methods that can be used on
> general AI.
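>
> To put rough numbers on that (the budget and branching factors below
> are only illustrative), a brute-force search visits about b ** d
> states, so a fixed compute budget buys dramatically less depth as the
> world model gains nuance:
>
> BUDGET = 10 ** 12                  # nodes we can afford to expand
>
> for b in (2, 5, 10, 50):           # branching factor of the world model
>     d = 0
>     while b ** (d + 1) <= BUDGET:  # deepest full search within budget
>         d += 1
>     print(f"branching {b:>2}: affordable depth = {d}")
>
> # branching  2: affordable depth = 39
> # branching  5: affordable depth = 17
> # branching 10: affordable depth = 12
> # branching 50: affordable depth = 7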
>
> Imagination gives us the ability to think outside the box. It is at
> the least an ingredient of the 'magic sauce'. Once you begin to think
> deeply about reinforcement methods operating on an imaginative AGI
> program, you will find that the simplicity of the method would start
> failing rather quickly at the least sign of conceptual complexity.
> Jim Bromer
>
>
> On Wed, Dec 2, 2015 at 11:35 AM, Stanley Nilsen <[email protected]> wrote:
>> Greetings Jim,
>> Have some questions and comments about the recent email...
>> (thread used to be: Scholarpedia article on AGI published)
>>
>> On generating possibilities...
>>
>>     Complexity seems to be the issue that lies underneath much of
>> your efforts.  And there is no doubt that life has its complexity.
>> I don't recall (didn't search past emails) any explanation from you
>> that ties this complexity to an AGI implementation.
>>
>> I personally have an issue with complexity, and that is because it
>> is overrated.  In many writings there is an implication that
>> AGI-type intelligence will emerge through the construction of a unit
>> that will try lots of things, get feedback, and thereby learn what
>> creates the best reward.  Sort of an evolutionary intelligence.  Do
>> we really believe that?  I don't; do you?
>>
>> I've seen references to "search space," and the implication is that
>> the intelligent unit is going to find an efficient way to navigate
>> this huge space of possible "stuff."  Am I off track in assuming
>> that this may be the connection with trying to discover the P=NP
>> solution, thus enabling traversal of this search space more
>> efficiently?  Is this how you see a better intelligence created?
>>
>> Perhaps a simple example will suffice to show my contrasting view of
>> the development of intelligence.  Consider a newborn baby, or
>> imagine you create a computer baby that has a sound-generating
>> capability.  One could program the computer to generate random
>> sounds and then watch for feedback at the keyboard or through a
>> listening device.  Good luck with that.  There are several ways one
>> might alter the experiment and produce suggestive results.  But
>> let's save time, because we know that a human intelligence, the
>> baby, does not "simply" try lots of sounds and thereby become a
>> speaking person.  Instead, the baby learns by lots of feedback and,
>> importantly, by mimicking the sounds that it hears.  What does this
>> prove?
>>
>> First, it shows that the baby isn't developing an intelligence by randomly
>> trying every combination.  Rather, the baby is "adopting" behaviors that
>> produce positive feedback from the adult who is nurturing the baby.  Enough
>> about babies...
>>
>> The computer intelligence will not evolve from nothing, far from it.
>> It will need a core set of behaviors to work with.  "Thousands of
>> different approaches" could refer to the thousands of "cores" that
>> are eventually developed by different researchers.  The core will be
>> a big factor in how the intelligent unit goes about acquiring
>> behaviors to add to its core.  And, as you might suppose, the unit
>> will be influenced by its surroundings; the core isn't everything.
>>
>> To net it out...  It isn't the "learning" by trial and error that is
>> significant; it is the adoption process that dwarfs the "scientific"
>> gathering of truth.  The baby AGI doesn't need lots of experiments,
>> it needs good sources to guide its adoption process.
>>
>> Stan
>>
>> On 12/02/2015 07:36 AM, Jim Bromer wrote:
>>>
>>> ...
>>>
>>> It was useful to me only because it showed me how I might generate
>>> variations on the abstractions underlying some purpose. So my
>>> methodology might be used in an actual AI program to generate
>>> possibilities, or more precisely, to generate the bases for the
>>> possibilities. It would not stand as a basis for AI itself.
>>> Solomonoff's methods do not stand as a basis for AI. (Years from
>>> now, when there are thousands of different approaches to AI and
>>> AGI, we might still disagree about this.) But it should be clear
>>> that I am saying that Solomonoff's universal prediction methods are
>>> not actually standards for viable AI. Compression methods are not a
>>> viable basis for AI either. They could be used as part of the
>>> process, but it is pretty far-fetched to say that any computational
>>> technique that could be used in an AI or AGI program could be used
>>> as a basis for AI.
>>> I just wanted to follow through since you wrote a reasonable reply to my
>>> first reply.
>>>
>>>

