OK, which dimension are you attempting to scale up? Triviality corresponds
to minimal representational capacity, both in the environment and the agent
operating within it. Human beings are (currently) at the other end of that
scale, with enormous representational capacity for dealing with a highly
complex environment. The more complex (non-trivial) the environment, the
greater the representational capacity required of agents operating within
it in order to effectively make decisions. It is this dimension that I am
looking at.

Learning algorithms are easy to understand, design, and implement. They are
just solutions to optimization problems. I do not think learning itself is
where the bottleneck lies. Instead I look at the representational systems
underlying those learning algorithms. The simplest learning algorithms
operate over tables of choices. They tabulate expected returns or error
levels for each choice, over many repetitions, and gradually settle on the
choice(s) with the maximum expected return or minimum expected error level.
Adding layers of sophistication, we begin to see context matter more and
more: conditional choices and statefulness yield much more interesting
and coherent behavior. Generalizing over similar choices, conditions, and
actions, we see a further gain in coherence, with algorithms that can deal
robustly with new situations based on previous experience with similar
ones.
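To make the simplest case concrete, here is a minimal sketch of a tabular learner of the kind described above: it tracks a running mean of observed returns for each choice and gradually settles on the choice with the highest estimate. The function name, the exploration parameter, and the reward setup are all my own illustration, not a reference to any existing system:

```python
import random

def tabular_learner(choices, get_return, trials=1000, explore=0.1):
    """Tabulate expected returns per choice and settle on the best one."""
    counts = {c: 0 for c in choices}
    means = {c: 0.0 for c in choices}
    for _ in range(trials):
        if random.random() < explore:
            c = random.choice(choices)       # occasionally try something else
        else:
            c = max(choices, key=means.get)  # otherwise exploit the best estimate
        r = get_return(c)
        counts[c] += 1
        means[c] += (r - means[c]) / counts[c]  # incremental mean update
    return max(choices, key=means.get)
```

Note there is no context or state here at all: the table is just one expected-return entry per choice, which is exactly why such learners produce so little coherent behavior on their own.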

What is needed is to increase the expressivity of the underlying
representational schemes used by learning algorithms. Moving up to the
representational complexity level of ontologies, episodic memory, etc., the
representational scheme becomes ever more capable. In order to reason about
things, we need to represent those things effectively. Once we have a fully
capable representational scheme -- a programmatic framework for the
representation of Meaning, in all its forms, with all its inherent
ambiguities -- we can begin writing learning algorithms to extract meaning
from the environment, generate rules for predicting arbitrary unobserved
phenomena from arbitrary observed phenomena, recombine meanings to
produce new ones, choose contextually appropriate and meaningful
behavior, and so on. There is no understanding without meaning, and there
is no intelligence without understanding.
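As a toy illustration of what I mean by a more expressive representational scheme, here is a hypothetical sketch of a frame-based ontology with slot inheritance -- the kind of structure a learning algorithm could operate over instead of a flat table of choices. All of the names and slots are invented for the example:

```python
class Frame:
    """A concept with named slots, inheriting defaults from a parent concept."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        # Look up a slot locally, then fall back on ancestors (inheritance).
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

animal = Frame('animal', legs=4, can_fly=False)
bird = Frame('bird', parent=animal, legs=2, can_fly=True)
penguin = Frame('penguin', parent=bird, can_fly=False)  # exception overrides default
```

Even this trivial scheme already supports defaults, exceptions, and generalization across similar things -- none of which a flat choice table can express.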


On Sat, Dec 21, 2013 at 1:37 AM, Steve Richfield
<[email protected]>wrote:

> Aaron,
>
> On Thu, Dec 19, 2013 at 3:42 PM, Aaron Hosford <[email protected]> wrote:
>
>> Perhaps we **MUST** be as complex as we are, just to be able to gather
>>> enough data to reduce the possibility-space down to a tractable size?!!!
>>> This could bode poorly for small implementations.
>>
>>
>> We evolved gradually from a lower intelligence level. That means there
>> are functional intermediate steps.
>>
>
> Maybe the "trick" is that a mouse's world is a LOT smaller than our own.
> Perhaps it takes a bigger system to operate in a bigger and more complex
> world.
>
> Perhaps the challenge in MVP is charting a monotonic path from the trivial
> to the human scale. Evolution has already done this for us. Now, we MUST do
> this for our machines if we are to get anything funded.
>
> Steve
> ================
>
>> On Thu, Dec 19, 2013 at 3:43 PM, Steve Richfield <
>> [email protected]> wrote:
>>
>>> AT,
>>>
>>> I suspect that there is some sort of new logic at work in Google,
>>> machine vision and other areas of pre-AGI, specifically...
>>>
>>> Google FINALLY made their search engine look for all synonyms and
>>> variations of each word (unless you surround it with "quotes"). Machine
>>> vision often becomes much more useful when a "recognition" is redefined as
>>> an INability to eliminate the prospect of an object being present. This
>>> handily solves the partial obscurement challenges, unusual orientations,
>>> etc.
>>>
>>> I suspect that AGI-logic will become the process of manipulating
>>> POSSIbilities rather than manipulating Bayesian PROBAbilities. There is
>>> 100% probability of whatever is present being there, and 0% probability of
>>> objects that are not present being there. Only in VERY familiar
>>> circumstances are Chi Square computed conditional probabilities, Bayesian
>>> computations, etc. worth anything at all.
>>>
>>> Then, the process of learning becomes the discovery of which
>>> possibilities *might* be relevant, and which possibilities are clearly
>>> irrelevant. This has the possibility of sidestepping the "probabilities of
>>> probabilities of probabilities" conundrum, where you are trying to
>>> manipulate probabilities through a haphazard process that itself is full of
>>> probabilities of being grossly wrong. This has apparently sunk the
>>> implementation of practical learning of Bayesian computations.
>>>
>>> When you have eliminated the impossible, whatever remains, *however
>>> improbable*, must be the truth. -- Sherlock Holmes
>>>
>>> This approach runs into problems when there is insufficient information
>>> to eliminate multiple "truths" **AND** those truths lead to different
>>> actions that have different-valued outcomes. Perhaps we **MUST** be as
>>> complex as we are, just to be able to gather enough data to reduce the
>>> possibility-space down to a tractable size?!!! This could bode poorly for
>>> small implementations.
>>>
>>> Any thoughts?
>>>
>>> Steve
>>> =======================
>>> On Wed, Dec 18, 2013 at 1:54 PM, Anastasios Tsiolakidis <
>>> [email protected]> wrote:
>>>
>>>> I am in favour of all kinds of Products on top of the current AGI
>>>> codebases, as opposed to Ben who has OKed only some kinds of Products. But
>>>> I see no place for a Minimum Viable Product, as scalability is the key
>>>> unknown of search space algorithms, if I may call it that. Perhaps Siri is
>>>> a brilliant NLP assistant or Siri 3 will be a brilliant NLP assistant,
>>>> mastering 1000 words, but as I have pointed out before, by no means being
>>>> the first one to say so, full-blown language depends on a constant
>>>> goertzelification, constantly redefining words and occasionally creating
>>>> new ones, trying to produce a tight fitting film over a much more expansive
>>>> reality, a bit like the rubber case for your smartphone. Reaching out to
>>>> the millions of words and meanings and uses and deciding which ones will do
>>>> the trick or which ones to modify to do the trick, well, even in terms of
>>>> parallel programming and complexity metrics is a bit of a nightmare. I do
>>>> believe, however, that the products will show the way, just like I expect
>>>> robotics with their cumbersome dancing routines and other human-inspired
>>>> but limited and primitive repertoires to slowly but inexorably advance
>>>> towards MVP-like states and beyond. Especially by focusing on the
>>>> *transitions* between repertoires or domains, etc. For example, imagine how
>>>> well a chatbot would do if it could appropriately and seamlessly transition
>>>> between certain dialogue modes like "me", "wikipedia", "joke",
>>>> "arithmetic", "common sense" - it would probably pass the Turing test
>>>> already.
>>>>
>>>> I was going to jump the gun and say I know what is not an MVP for AGI,
>>>> "machine vision would not be an MVP", but then I had to remind myself that
>>>> in my very own analysis a human level vision system would probably need to
>>>> ask itself questions such as "could it be that there is an irregularly
>>>> shaped black and white table partially obstructing a black and white cat?"
>>>> in order to do visual object recognition. I cannot prove that the
>>>> compositionality inherent in machine vision is of the same order as the
>>>> compositionality/productivity of natural language or that of "physical
>>>> existence" (meaning the compositionality offered by the material world
>>>> itself once you actually interact with it, with its tremendous potential
>>>> for emergence/surprise, such as getting an electric shock by touching the
>>>> millionth piece of metal you encountered while all the previous touches
>>>> gave no such shock). I think it would be OK if machine vision is an order
>>>> of magnitude below the complexity of language too; it would still be an MVP
>>>> and quite a formidable one!
>>>>
>>>> Although my terminology is a bit more standard than the acrobatics of
>>>> our usual suspects on this list, I am perfectly aware the intuitions will
>>>> not be clear to the majority of AGIers, and certainly they are intuitions
>>>> that relate weakly to a lot of academic but "unreal" AGI like Hutter's. I
>>>> will try to see if watching Hofstadter can equip us with a few updated,
>>>> powerful shared terms/metaphors, but at the end of the day the intuitions
>>>> are for those who understand them and are willing and able to do three
>>>> things: build, build, build!
>>>>
>>>> AT
>>>>
>>>>
>>>>    *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>> <https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac> |
>>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>>> <http://www.listbox.com>
>>>>
>>>
>>>
>>>
>>> --
>>> Full employment can be had with the stroke of a pen. Simply institute a
>>> six hour workday. That will easily create enough new jobs to bring back
>>> full employment.
>>>
>>>
>>
>>
>
>
>
>


