>
> Perhaps we **MUST** be as complex as we are, just to be able to gather
> enough data to reduce the possibility-space down to a tractable size?!!!
> This could bode poorly for small implementations.


We evolved gradually from a lower intelligence level. That means there are
functional intermediate steps.


On Thu, Dec 19, 2013 at 3:43 PM, Steve Richfield
<[email protected]> wrote:

> AT,
>
> I suspect that there is some sort of new logic at work in Google, machine
> vision and other areas of pre-AGI, specifically...
>
> Google FINALLY made their search engine look for all synonyms and
> variations of each word (unless you surround it with "quotes"). Machine
> vision often becomes much more useful when a "recognition" is redefined as
> an INability to eliminate the prospect of an object being present. This
> handily solves the partial obscurement challenges, unusual orientations,
> etc.
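
[Side sketch, not Steve's: a toy Python version of recognition as an
"INability to eliminate", with every name below hypothetical. A hypothesis
survives unless an observation positively rules it out, so partial
obscurement merely fails to eliminate it rather than dragging down a score.]

    def is_consistent(hypothesis, observation):
        # Contradiction only when the hypothesis requires a feature the
        # observation has positively ruled out; missing evidence
        # eliminates nothing, which is how occlusion gets handled.
        return not (hypothesis["requires"] & observation["ruled_out"])

    def surviving_hypotheses(hypotheses, observations):
        # "Recognition" = the set of objects we FAILED to eliminate.
        return [h for h in hypotheses
                if all(is_consistent(h, obs) for obs in observations)]

    cat = {"name": "cat", "requires": {"fur", "four_legs"}}
    dog = {"name": "dog", "requires": {"fur", "barks"}}
    view = {"ruled_out": {"barks"}}  # occluded view; only barking excluded
    print([h["name"] for h in surviving_hypotheses([cat, dog], [view])])
    # -> ['cat']: the dog is eliminated, the cat merely fails to be
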
>
> I suspect that AGI-logic will become the process of manipulating
> POSSIbilities rather than manipulating Bayesian PROBAbilities. There is
> 100% probability of whatever is present being there, and 0% probability of
> objects that are not present being there. Only in VERY familiar
> circumstances are chi-square-computed conditional probabilities, Bayesian
> computations, etc. worth anything at all.
>
> Then, the process of learning becomes the discovery of which possibilities
> *might* be relevant, and which possibilities are clearly irrelevant. This
> has the possibility of sidestepping the "probabilities of probabilities of
> probabilities" conundrum, where you are trying to manipulate probabilities
> through a haphazard process that itself is full of probabilities of being
> grossly wrong. This is apparently what has sunk practical implementations
> of Bayesian learning.
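
[Again a toy sketch of my own, all names hypothetical: "learning as the
discovery of which possibilities might be relevant" can be stated with plain
sets, so nothing like a probability-of-a-probability ever appears.]

    from collections import defaultdict

    class PossibilityLearner:
        def __init__(self, all_outcomes):
            # Before any experience, everything is still possible.
            self.universe = frozenset(all_outcomes)
            self.possible = defaultdict(lambda: set(self.universe))

        def observe(self, context, ruled_out):
            # Learning = discovering what is clearly irrelevant here.
            self.possible[context] -= set(ruled_out)

        def might_be(self, context):
            return self.possible[context]

    learner = PossibilityLearner({"cat", "dog", "shadow"})
    learner.observe("backyard at noon", {"shadow"})
    print(learner.might_be("backyard at noon"))  # {'cat', 'dog'}, some order
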
>
> "When you have eliminated the impossible, whatever remains, *however
> improbable*, must be the truth." -- Sherlock Holmes
>
> This approach runs into problems when there is insufficient information to
> eliminate multiple "truths" **AND** those truths lead to different actions
> that have different-valued outcomes. Perhaps we **MUST** be as complex as
> we are, just to be able to gather enough data to reduce the
> possibility-space down to a tractable size?!!! This could bode poorly for
> small implementations.
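
[A toy numeric version of that failure mode, my numbers entirely: two
"truths" survive elimination, and the actions they recommend have very
different payoffs, so the possibility set alone cannot choose.]

    payoff = {
        ("snake", "freeze"): +10, ("snake", "grab"): -100,
        ("rope",  "freeze"):   0, ("rope",  "grab"):  +10,
    }
    survivors = {"snake", "rope"}   # neither could be eliminated
    for action in ("freeze", "grab"):
        print(action, "->", sorted(payoff[(h, action)] for h in survivors))
    # freeze -> [0, 10]    grab -> [-100, 10]
    # Without probabilities, only a worst-case rule (maximin) breaks the tie.
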
>
> Any thoughts?
>
> Steve
> =======================
> On Wed, Dec 18, 2013 at 1:54 PM, Anastasios Tsiolakidis <
> [email protected]> wrote:
>
>> I am in favour of all kinds of Products on top of the current AGI
>> codebases, as opposed to Ben who has OKed only some kinds of Products. But
>> I see no place for a Minimum Viable Product, as scalability is the key
>> unknown of search-space algorithms, if I may call it that. Perhaps Siri is
>> a brilliant NLP assistant, or Siri 3 will be one, mastering 1000 words;
>> but as I have pointed out before (by no means the first to say so),
>> full-blown language depends on constant goertzelification: constantly
>> redefining words and occasionally creating new ones, trying to produce a
>> tight-fitting film over a much more expansive reality, a bit like the
>> rubber case for your smartphone. Reaching out to
>> the millions of words, meanings, and uses, and deciding which ones will do
>> the trick or which ones to modify to do the trick, is a bit of a nightmare
>> even in terms of parallel programming and complexity metrics. I do
>> believe, however, that the products will show the way, just as I expect
>> robotics with their cumbersome dancing routines and other human-inspired
>> but limited and primitive repertoires to slowly but inexorably advance
>> towards MVP-like states and beyond. Especially by focusing on the
>> *transitions* between repertoires, domains, etc. For example, imagine how
>> well a chatbot would do if it could appropriately and seamlessly transition
>> between certain dialogue modes like "me", "wikipedia", "joke",
>> "arithmetic", "common sense" - it would probably pass the Turing test
>> already.
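
[To make AT's point concrete: a crude dispatcher sketch of my own. The
dialogue modes are his; the routing rule and every function name here are
invented for illustration.]

    def classify(utterance):
        # The "seamless transition" step: pick a dialogue mode.
        u = utterance.lower().strip()
        if any(c.isdigit() for c in u) and any(op in u for op in "+-*/"):
            return "arithmetic"
        if u.startswith(("who is", "what is")):
            return "wikipedia"
        if "joke" in u:
            return "joke"
        if "you" in u.split():
            return "me"
        return "common sense"

    handlers = {
        "me":           lambda u: "I'm just a sketch of a chatbot.",
        "wikipedia":    lambda u: "[lookup: %s]" % u,
        "joke":         lambda u: "Ask Siri 3.",
        "arithmetic":   lambda u: str(eval(u, {"__builtins__": {}})),  # toy!
        "common sense": lambda u: "That seems plausible.",
    }

    def reply(utterance):
        mode = classify(utterance)
        return mode, handlers[mode](utterance)

    print(reply("2+2"))                  # ('arithmetic', '4')
    print(reply("who is Hofstadter?"))   # ('wikipedia', ...)
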
>>
>> I was going to jump the gun and say I know what is not an MVP for AGI,
>> "machine vision would not be an MVP", but then I had to remind myself that
>> in my very own analysis a human level vision system would probably need to
>> ask itself questions such as "could it be that there is an irregularly
>> shaped black and white table partially obstructing a black and white cat?"
>> in order to do visual object recognition. I cannot prove that the
>> compositionality inherent in machine vision is of the same order as the
>> compositionality/productivity of natural language or that of "physical
>> existence" (meaning the compositionality offered by the material world
>> itself once you actually interact with it, with its tremendous potential
>> for emergence/surprise, such as getting an electric shock by touching the
>> millionth piece of metal you encountered while all the previous touches
>> gave no such shock). I think it would be OK even if machine vision is an
>> order of magnitude below the complexity of language; it would still be an MVP
>> and quite a formidable one!
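
[One way to make that self-questioning concrete, in my framing rather than
AT's: enumerate compositional scene hypotheses (object plus possible
occluder) and keep the ones the image cannot rule out, which also ties back
to Steve's elimination view. All names are invented.]

    from itertools import product

    objects   = ["cat", "dog"]
    occluders = ["nothing", "irregular black-and-white table", "chair"]

    def explains(obj, occluder, visible_regions):
        # Hypothetical test: the pair must account for every region the
        # segmenter reported. Real vision would be vastly subtler.
        return all(r in (obj, occluder) for r in visible_regions)

    seen = ["cat", "irregular black-and-white table"]
    print([(o, occ) for o, occ in product(objects, occluders)
           if explains(o, occ, seen)])
    # -> [('cat', 'irregular black-and-white table')]
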
>>
>> Although my terminology is a bit more standard than the acrobatics of our
>> usual suspects on this list, I am perfectly aware that the intuitions will
>> not be clear to the majority of AGIers, and certainly they are intuitions
>> that relate weakly to a lot of academic but "unreal" AGI, like Hutter's. I will
>> try to see if watching Hofstadter can equip us with a few updated, powerful
>> shared terms/metaphors, but at the end of the day the intuitions are for
>> those who understand them and are willing and able to do three things:
>> build, build, build!
>>
>> AT
>>
>>
>
>
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a
> six-hour workday. That will easily create enough new jobs to bring back full
> employment.
>


