As for your opening sections 1-5, I feel your background in mathematics/physics may be insufficient for you to treat these topics properly. In general, your ignorance of psychology, philosophy, and neuroscience shows throughout. I would recommend doing a great deal of reading in each of these domains and then largely rewriting this piece in its entirety.
Don't call your work a "report", that sounds so juvenile. The reason behaviorism dominated psychology wasn't just because it was "fashionable", but rather because it was objective, and at the time we really didn't have brain scanning technologies, and such which could make the inner workings of the brain objectively observable. Your "thinking gap" section is generally rather muddled. Your idea that conscious thinking is serial is self contradicted when you talk about sensation processing being parallel and then go on to say that the line between sensory processing and conscious processing is gray. Wouldn't that mean the (parallel) sensory processing is in part serial, and the (serial) conscious processing is in part parallel? Your solution to the difficulty to creating sensory and perception processing software is to invent an AI that is smart enough to write this software itself.. So, how are we suppose to create that AI that can create AGI? The paper really doesn't get interesting until section 6, when you start to discuss problems in AGI and your suggested methods for addressing them. I myself look favorably on your proposed methodology In each case we picked a specific task for a hypothetical HLAI to pursue and, by considering how humans would do that task, worked out plausible representations and cognitive mechanisms. [And developed a precise formalization] There are strong arguments against this methodology (introspection) however, the foremost of which is the lack of objectivity, and repeatability of such kinds of research. However, the truth is there is no such thing as objectively observable science, for the only way one can become aware of an "objectively observed thing" is through ones subjective conscious experience! If you verbalize your thought processes as you perform your task, you make this introspection far more objective (as you can record this speech, and you can have others perform the same task and record their speech and then generalize from that), and this could help alleviate the concerns many would have with such an approach. I have written a paper that uses a similar methodology with similar formalism, although I have never published it. If you would be interested in reading it I could attach a copy. On Fri, Mar 17, 2017 at 6:46 PM, Sean Markan <[email protected]> wrote: > Hi AGI folks, > > I've written a paper about strategy and methodology for AGI. I would be > interested in your thoughts/criticism! > > http://www.basicai.org/pubs/h2hlai.pdf > > - Sean > > *AGI* | Archives <https://www.listbox.com/member/archive/303/=now> > <https://www.listbox.com/member/archive/rss/303/26973278-698fd9ee> | > Modify > <https://www.listbox.com/member/?&> > Your Subscription <http://www.listbox.com> > ------------------------------------------- AGI Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424 Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657 Powered by Listbox: http://www.listbox.com
