Steve,
I am not finished with the summary.  It should take me at least a week to
finish, maybe two.
In the first line of Part 1 I mentioned that complexity is a major problem,
so when you declare that I am making some rookie errors because I fail to
appreciate the speed issue, I can only conclude that you either did not read
the summary very closely or that you are exaggerating. This lessens the
magnitude of your criticism, and it actually represents a weak (an
extremely weak) positive indicator that maybe I am on the right track. In
other words, there may be something subtle in my theories that you are
missing. I will be glad to discuss this with you after I am finished with
the summary.
Jim Bromer


On Sat, Apr 13, 2013 at 4:30 PM, Steve Richfield
<[email protected]> wrote:

> Jim,
>
> You are making some rookie errors...
>
> 1.  You fail to appreciate the speed issue. Computers are WAY too slow to
> even be able to experiment in the domains you are speculating. In short,
> you are at least decades too early to start. Note (for example) my fast
> parsing, where I FINALLY proposed a fast-enough method of parsing English,
> using ideally-constructed tables, and NOT using the sort of expensive
> learning you are talking about.
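[Editorial note: the table-driven parsing Steve alludes to might be sketched, very roughly, as below. The lexicon and grammar tables here are hypothetical toy data, not his actual tables; the point is only that each parsing step is a constant-time table lookup rather than an expensive learned computation.]

```python
# A toy sketch of table-driven parsing: tag words by lexicon lookup,
# then greedily reduce runs of tags using a grammar table.
# Both tables are invented examples for illustration.

# Hypothetical lexicon table: word -> part-of-speech tag.
LEXICON = {
    "the": "DET",
    "dog": "NOUN",
    "cat": "NOUN",
    "chased": "VERB",
}

# Hypothetical grammar table: tuple of tags -> reduced phrase label.
GRAMMAR = {
    ("DET", "NOUN"): "NP",
    ("NP", "VERB", "NP"): "S",
}

def parse(sentence):
    """Tag each word by table lookup, then reduce until no rule applies."""
    tags = [LEXICON[w] for w in sentence.lower().split()]
    reduced = True
    while reduced:
        reduced = False
        # Try longer rules first so "NP VERB NP" wins over shorter matches.
        for rule, label in sorted(GRAMMAR.items(), key=lambda r: -len(r[0])):
            n = len(rule)
            for i in range(len(tags) - n + 1):
                if tuple(tags[i:i + n]) == rule:
                    tags[i:i + n] = [label]
                    reduced = True
                    break
            if reduced:
                break
    return tags

print(parse("the dog chased the cat"))  # -> ['S']
```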
>
> People hear "gigahertz" and their eyes cross, their knees weaken, and they
> think they can do ANYTHING.
>
> WATSON comes a little closer, still understands nothing, but uses 2,880
> processors to do it.
>
> 2.  There is a belief/condition in people's minds that they can
> arbitrarily discard entire dimensions, often more than one dimension, and
> still make a working learning system. I might bet a week of my time testing
> such a highly questionable presumption, but certainly not years. In any
> case, computers are still too slow for your approach, even with discarded
> dimensions.
>
> Note what I did with my Scanning UV Fluorescent Microscope. Here is
> something I first came up with ~50 years ago - and it was WAY ahead of its
> time. As late as ~2 years ago it was rejected as being "off topic" by the
> AGI conference. Now, Obama is calling for just such a machine in his BRAIN
> Initiative. I am now scrambling to get my SUVFM considered because it IS
> the best of the several competing approaches.
>
> I suspect that you may end up doing the same. Once we know how brains
> work, and you can buy a petascale machine from Best Buy for ~$1K, then you
> can dust off your proposal and forge on ahead. You will then be government
> funded (via Social Security) and have your medical insurance covered (by
> Medicare) as I now am.
>
> Mine is a "success" story, as most good designs that are ahead of their
> time end up lost to history, often because their creators have also been
> lost to history (died). Once you have finished your design, your next job
> will be to stay alive for another ~50 years, to be around to promote it
> when the "missing pieces" have become readily available.
>
> Steve
> ==================
> On Sat, Apr 13, 2013 at 3:39 AM, Jim Bromer <[email protected]> wrote:
>
>> Part 1
>>
>> I feel that complexity is a major problem facing contemporary AGI.  It
>> is true that for most human reasoning we do not need to figure out
>> complicated problems precisely in order to take the first steps toward
>> competency, but so far AGI has not been able to get very far beyond the
>> narrow-AI barrier.
>>
>> I am going to start with a text-based AGI program.  I agree that more
>> kinds of IO modalities would make an effective AGI program better.  However,
>> I am not aware of any evidence that sensory-based AGI, multi-modal
>> sensory-based AGI, or robotics-based AGI has been able to achieve something
>> greater than other efforts. The core of AGI is not going to be found in the
>> peripherals.  And it is clear that starting with complicated IO
>> accessories would make AGI programming more difficult.  It seems obvious
>> that IO is necessary for AI/AGI and this abstraction is probably a more
>> appropriate basis for the requirements of AGI.
>>
>> My AGI program is going to be based on discrete references. I feel that
>> the argument that only neural networks are able to learn or are able to
>> incorporate different kinds of data objects into an associative field is
>> not accurate. I do, however, feel that more attention needs to be paid to
>> concept integration.  And I think that many of us recognize that a good
>> AGI model is going to create an internal reference model that is a kind of
>> network.  The discrete reference model more easily allows the program to
>> retain the components of an agglomeration in a way that a traditional
>> neural network does not.  This means that it is more likely that the
>> parts of an associative agglomeration can be detected.  On the other
>> hand, since the program will develop its own internal data objects, these
>> might be formed in such a way that the original parts might be difficult
>> to detect. With a more conscious effort to better understand concept
>> integration, I think that the discrete conceptual network model will prove
>> itself fairly easily.
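[Editorial note: one minimal way to picture the discrete-reference idea — my assumption for illustration, not Jim's actual design — is a concept network where each composite node holds explicit references to its component concepts. Because the references are discrete, the parts of an agglomeration stay individually addressable, in contrast to a distributed neural encoding.]

```python
# Sketch of a discrete-reference concept network. Each Concept is a
# named node with explicit references to its component concepts, so
# the parts of a composite concept can always be recovered.

class Concept:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)  # discrete references to components

    def flatten(self):
        """Recover every component concept, however deeply nested."""
        found = []
        for p in self.parts:
            found.append(p.name)
            found.extend(p.flatten())
        return found

# Hypothetical example concepts.
wing = Concept("wing")
feather = Concept("feather")
bird = Concept("bird", [wing, feather])
flock = Concept("flock", [bird])

print(flock.flatten())  # -> ['bird', 'wing', 'feather']
```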
>>
>> I am going to use weighted reasoning and probability but only to a
>> limited extent.
>>
>
>
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a
> six-hour workday. That will easily create enough new jobs to bring back
> full employment.
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
