DEREK ZAHN wrote:
It would be interesting to see what basic interests and views the members of this list hold. For a few people, published works answer this pretty clearly but that's not true for most list members.

Sounds like a good idea.

I am a professional, self-funded researcher and developer with my own company (Surfing Samurai Robots LLC). I label myself a "theoretical psychologist": part cognitive scientist, part AI researcher, but with an approach to methodology that differs from the norm in both of those fields. I started out as a physicist, have done more professional software development than I care to think about, and have also done other work.

I have a specific framework that encapsulates how I think an AGI should be designed. I have made a number of references to this framework in various places, but have not yet published a full version of it (not least for proprietary reasons, about which I apologize and beg for understanding). Broadly speaking, you could describe this framework as a form of "generalized connectionism".

I strongly believe in the need to build a conceptual framework first, then instantiate that framework into a software development environment that allows systems to be built within the context of that framework.

I think we already had the hardware needed to build an AGI back in 1990.

I also think that the present approach to the "goals" or "motivations" of AI systems is confused and oversimplified. Put simply, the way people usually talk about driving an AI is too rigid and inconsistent to work in a real AGI system, and we will end up creating a more diffuse type of mechanism (which I refer to as a "motivational" system). Counterintuitively, I think this more diffuse mechanism will actually make it easier to build a safe and friendly AI, even though a rigorous proof of safety will remain impossible for any kind of goal or motivational system.

I have a theory of consciousness that I sincerely believe is unique. The bottom line: there really is something weird about the subjective aspect of consciousness after all, but the right kinds of "machines" (I dislike that term) would have the same mysterious subjective inner lives that we do.

My guess is that AGI systems could be built in as little as 10 years, if we actually put our minds to it.

I am very impatient to get things done, and this sometimes comes out in a brusque tone that nobody should take seriously. ;-)



Richard Loosemore.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303