I almost never bother to read the list these days, but by coincidence I happened to take a look and discovered the below post. Since the complex systems problem is mentioned, I feel obliged to respond.

The suggestion below is a perfect illustration of why I have given up on the list: it shows that the AGI list has become basically just a vehicle for the promotion of Ben's projects and preferences, while everything (and everyone) else is gradually being defined as a distraction.

The so-called 'complex systems problem' perfectly fits the requirements for inclusion in category (1) below: it is about the practical aspects of building AGIs, and it is backed up by solid argument.

But, Ben, in all my attempts to discuss the topic with you, what I got back was sidetracking, confusion, obfuscation, remarks directed against me personally, and eventually a sweeping dismissal of the whole topic as (in your opinion) not coherent enough to be worth discussing. In short, what I got was your intuition and opinion on the subject ... just the sort of thing you say you don't want to see on this list.

I will always be ready to debate the subject with you in a serious, methodical, structured way, so let me know if you ever want to do that.

But in the meantime I think this is just political maneuvering on your part: you want to dump the CSP into the same bucket as all the silly, unscientific arguments about why AGI is 'impossible'.

Sincerely,


Richard Loosemore


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current
computers, according to designs that can feasibly be implemented by
moderately-sized groups of people

2)
Discussions about whether the above is even possible -- or whether it is
impossible because of weird physics, or poorly-defined special
characteristics of human creativity, or the so-called "complex systems
problem", or because AGI intrinsically requires billions of people and
quadrillions of dollars, or whatever

Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ...
certainly, they are valid topics for intellectual inquiry.

But, to do anything real, you have to make **some** decisions about what
approach to take, and I decided long ago to take the approach of trying to
engineer an AGI system.

Now, if someone had a solid argument as to why engineering an AGI system is
impossible, that would be important.  But that never seems to be the case.
Rather, what we hear are long discussions of people's intuitions and
opinions in this regard.  People are welcome to their own intuitions and
opinions, but I get really bored scanning through all these intuitions about
why AGI is impossible.

One possibility would be to more narrowly focus this list, specifically on
**how to make AGI work**.

If this re-focusing were done, then philosophical arguments about the
impossibility of engineering AGI in the near term would be judged **off
topic** by definition of the list purpose.

Potentially, there could be another list, something like "agi-philosophy",
devoted to philosophical and weird-physics and other discussions about
whether AGI is possible or not.  I am not sure whether I feel like running
that other list ... and even if I ran it, I might not bother to read it very
often.  I'm interested in new, substantial ideas related to the in-principle
possibility of AGI, but not interested at all in endless philosophical
arguments over various people's intuitions in this regard.

One fear I have is that people who are actually interested in building AGI
could be scared away from this list by the large volume of anti-AGI
philosophical discussion, which, I might add, almost never has any new
content and mainly just repeats well-known anti-AGI arguments (Penrose-like
physics arguments ... "mind is too complex to engineer, it has to be
evolved" ... "no one has built an AGI yet, therefore it will never be done"
... etc.).

What are your thoughts on this?

-- Ben






-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/