Admittedly I don't have any proof, but I don't see any reason to doubt my assertions. There's nothing in them that appears to me to be specific to any particular implementation of an (almost) AGI.

OTOH, you didn't define play, so I'm still presuming that you accept the definition I proffered. But then you also didn't explicitly accept it, so I'm not certain. To quote myself: "Play is a form of strategy testing in an environment that doesn't severely penalize failures."

There's nothing about that statement that appears to me to be specific to any particular implementation. It seems *to me*, and again I acknowledge that I have no proof of this, that any (approaching) AGI of any construction would necessarily engage in this activity.
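To make the definition concrete, here's a rough sketch (purely illustrative; the strategy names, payoffs, and parameters are all made up for the example): an agent "plays" by repeatedly trying candidate strategies in a sandbox where failure costs little or nothing, then keeps whichever ones worked best.

```python
import random

def play(strategies, evaluate, failure_penalty=0.0, trials=100, seed=0):
    """Toy 'play' loop: test candidate strategies in a sandbox where
    failing costs only failure_penalty (near zero for genuine play),
    then rank the strategies by average observed payoff."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in strategies}
    counts = {name: 0 for name in strategies}
    for _ in range(trials):
        name = rng.choice(sorted(strategies))   # pick some strategy to try
        reward = evaluate(strategies[name], rng)
        totals[name] += reward if reward > 0 else -failure_penalty
        counts[name] += 1
    return sorted(strategies,
                  key=lambda n: totals[n] / max(counts[n], 1),
                  reverse=True)

# Hypothetical sandbox: each "strategy" is just a probability of success.
strategies = {"timid": 0.2, "bold": 0.8}

def evaluate(p_success, rng):
    return 1.0 if rng.random() < p_success else 0.0

# Because failure is free here, everything gets tried, and the better
# strategy ("bold") ends up ranked first.
print(play(strategies, evaluate))
```

The point of the sketch is the failure_penalty knob: crank it up and indiscriminate exploration stops being affordable, which is exactly why play needs a forgiving environment.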

P.S.: I'm being specific about (approaching) AGI as I doubt the possibility, and especially the feasibility, of constructing an actual AGI, rather than something which merely "approaches being an AGI at the limit". I'm less certain about an actual AGI, but I suspect that it, also, would need to play for the same reasons.


Brad Paulsen wrote:
Charles,

By now you've probably read my reply to Tintner's reply. I think that probably says it all (and then some!).

What you say holds IFF you are planning on building an airplane that flies just like a bird. In other words, if you are planning on building a human-like AGI (that could, say, pass the "Turing test"). My position is, and has been for decades, that attempting to pass the Turing test (or win either of the two, one-time-only, Loebner Prizes) is a waste of precious time and intellectual resources.

Thought experiments? No problem. Discussing ideas? No problem. Human-like AGI? Big problem.

Cheers,
Brad

Charles Hixson wrote:
Play is a form of strategy testing in an environment that doesn't severely penalize failures. As such, every AGI will necessarily spend a lot of time playing.

If you have some other particular definition, then perhaps I could understand your response if you were to define the term.

OTOH, if this is interpreted as being a machine that doesn't do anything BUT play (using my supplied definition), then your response has some merit, but even that can be very useful. Almost all of mathematics, e.g., is derived out of such play.

I have a strong suspicion that machines that don't have a "play" mode can never proceed past the reptilian level of mentation. (Here I'm talking about thought processes that are mediated via the "reptile brain" in entities like mammals; actual reptiles may have some more advanced faculties of which I'm unaware.) Note that, e.g., shrews don't have much play capability, but they have SOME.


Brad Paulsen wrote:
Mike Tintner wrote: "...how would you design a play machine - a machine that can play around as a child does?"

I wouldn't. IMHO that's just another waste of time and effort (unless it's being done purely for research purposes). It's a diversion of intellectual and financial resources that those serious about building an AGI any time in this century cannot afford. I firmly believe that, if we had not set ourselves the goal of developing human-style intelligence (embodied or not) fifty years ago, we would already have a working, non-embodied AGI.

Turing was wrong (or at least he was wrongly interpreted). Those who extended his imitation test to humanoid, embodied AI were even more wrong. We *do not need embodiment* to be able to build a powerful AGI that can be of immense utility to humanity while also surpassing human intelligence in many ways. To be sure, we want that AGI to be empathetic with human intelligence, but we do not need to make it equivalent (i.e., "just like us").

I don't want to give the impression that a non-Turing intelligence will be easy to design and build. It will probably require at least another twenty years of "two steps forward, one step back" effort. So, if we are going to develop a non-human-like, non-embodied AGI within the first quarter of this century, we are going to have to "just say no" to Turing and start to use human intelligence as an inspiration, not a destination.

Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of an AGI is surely that it must be able to play - so how would you design a play machine - a machine that can play around as a child does?

You can rewrite the brief as you choose, but my first thoughts are - it should be able to play with
a) bricks
b) plasticine
c) handkerchiefs/shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly more flexible than a computer, but if you want to do it all on computer, fine.

How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something interesting?
How do infants, IOW, play?





-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


