"Theory: give enough computer scientists enough keyboards and time, and they 
will eventually figure out or stumble on whatever it takes to have general 
intelligence

Experiment: let the world's programmers work on this for half a century.

Results: Zero, nada, nothing. Experiment failed. Time for another theory."

In other words, statistically, we're no smarter than a bunch of monkeys 
randomly typing away on typewriters? 🙂

Obviously we are, but I love the analogy. At least with a proper engineering 
approach, one has a definite product in mind and a plan for getting there. 
We're all aiming for our own version of AGI.

As things go, in time about three main theories would take prominence, and in 
all probability only one would crack the code. That "one" would have borrowed 
heavily from the other two and from the general history of the field. How long 
this would take, no one can predict for sure.

I do think some of us understand the problem space pretty well. I only notice 
two mainstream theories emerging at present. Would any academics present like 
to summarize the problem space and the emerging theories for us?

Robert Benjamin


________________________________
From: Colin Hales <[email protected]>
Sent: Friday, 28 June 2019 05:00
To: AGI
Subject: Re: [agi] ARGH!!!

On Fri, Jun 28, 2019 at 10:33 AM Steve Richfield 
<[email protected]<mailto:[email protected]>> wrote:
Colin,

The obvious thing missing from neuroscience and AGI is application of the 
Scientific Method.

Theory: give enough computer scientists enough keyboards and time, and they 
will eventually figure out or stumble on whatever it takes to have general 
intelligence.

Experiment: let the world's programmers work on this for half a century.

Results: Zero, nada, nothing. Experiment failed. Time for another theory.

My/Our? Theory: Use math to predict what might work to do the needed 
processing, physics to evaluate whether biological neurons might be capable of 
such things, neuroscience to see if these actually occur in biology, computer 
science (AGI) to simulate large systems of identified components, etc.

To illustrate: we have argued in the past over whether the Hall effect is 
significantly responsible for mutual inhibition. This micro-dispute can persist 
only in our current broken "system"; once a new integrated field had emerged, 
some bright physicist would spend a week running the numbers through the 
equations and provide a definitive answer that we would both accept.

What we seem to need here is some sort of "constitution" for people to 
digitally sign onto. I fully expect a coming AGI disaster much like the 
Perceptron Winter. Maybe if we point the way to the future via competent 
research BEFORE the crash, we can preserve future research while these folks 
join the ranks of the homeless.

Let's wring out any differences we might have and put this together.

Thoughts?

Steve

Yes. Let's. There is a lot to sort out.

I have just embarked on writing a paper to sort this out once and for all. It 
is my last attempt to get this very issue settled. The writing will benefit 
from a serious pile of adversarial collaboration from you and others. Ben? Are 
you interested?

I have one, and only one, perspective on the issue that I have not tried. 
Maybe it will push it over the line. I have written this cross-disciplinary 
thing out from so many disciplinary perspectives that I have lost count. All 
shot blanks, and a sorry story it is. I have one approach left. Before that, 
here is my personal position and preferred way to handle it if it happens in 
this place:

1) I have taken the IP warrior hat off, and all my ideas will be in the paper, 
including the chip design concept. 100% ownership of something that goes 
nowhere = ...let me do the math ... hmmm. $Bugger-all in any currency.
2) Co-authors. This must be a collaboration with at least three authors, and I 
have some ideas for prospective people. Anyone who can make a viable textual 
contribution that makes it into the final version gets authorship; explicit 
acknowledgement will cover everything else. It would be very cool to be able to 
put the names of a couple of hundred people in the acknowledgements.
3) The text shall be fed to the commentariat in an arXiv context for serious 
adversarial critique prior to submission to any journal.
4) The paper shall be of the ilk (scholastic standard) of those that set the 
trajectory of the state of the AGI art, e.g. Turing, von Neumann, and, my 
favourite, probably the most influential (required reading!): Pylyshyn, Z. W. 
(1980). Computation and cognition: Issues in the foundations of cognitive 
science. Behavioral and Brain Sciences, 3, 111-132. 
https://www.southampton.ac.uk/~harnad/Temp/.pylyshynBBS.pdf
5) It shall be published in a journal with suitable impact.

I already know what the outcome is in terms of its changes to the science of 
AGI. I have already prepared the question leading to it in the final chapter of 
my book. But that's all moot. Let's re-discover it in the paper's own 
narrative. Shoot it to death if you can. Put me out of my misery!

It's kind of weird that such a paper would be produced somewhat under the gaze 
of an AGI forum. But I'm OK with that if you are. We can manage that aspect 
offline a bit, if needed. It would be good if we can carry the whole forum 
along with us to its conclusion. If we can do that, surely it counts for 
something? Personally I think it apt that a serious left turn in AGI science 
should come from a place like this, and a social media community of this kind, 
where stakeholders abound. It would be very cool to be able to tell any 
potential reviewers to join the forum to read the archives covering the 
creation of the work!

If the social media side gets too hard to manage, we can bail and go offline. 
BTW, you can bail any time. I'll be doing this anyway, one way or another. Just 
tell me to EFF OFF and I will. :-)

Comments? ... Good to go? Or not?

Colin

Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/T87761d322a3126b1-Me92d94ddb64316a7bdbd1507>
