>
> Your test is the opposite of objective and measurable. What if two high IQ
> people disagree if a robot acts like a human or not?
>
> Which IQ test? There are plenty of high IQ societies that will tell you
> your IQ is 180 as long as you pay the membership fee.
>
> What if I upload the same software to a Boston Dynamics robot dog or robot
> humanoid like Atlas, do you really think you will get the same answer?
>

Valid criticisms 👌

I wanted to start the conversation on a true benchmark; mission
accomplished! 😎

If this community reaches a consensus, the results could be published at
AGI25?

Here are some ideas for addressing the points Matt raised:

- Add a code postfix to ground the conditions
  - E.g. Ruting Binet100_humanoid_SH
  - The above example could mean:
    - The IQ test taken by the observing person is the Stanford-Binet, 100
questions in 24 minutes
    - The robot is in humanoid form; the quality of the parts is not
important
    - The robot has the “Sight” and “Hearing” of the five human senses; the
quality of the sensors is not important
- The necessary and sufficient condition for passing the test is that at
least one person validated by the IQ test confirms that the robot behaves
like a human
- Someone could take a bribe and confirm a robot; that would be a
fraudulent pass, and it could be contested by the scientific community
- A committee of trusted test takers could administer the test annually on
a live stage!
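To make the postfix idea concrete, here is a minimal sketch of how such a
condition code might be parsed. Everything here is my own assumption, not
part of the proposal: the field names, the underscore layout, and the
single-letter sense codes (S/H/T/M/G for sight, hearing, touch, smell,
taste) are all hypothetical.

```python
# Hypothetical parser for a benchmark condition code like
# "Binet100_humanoid_SH". The layout (test_form_senses) and the
# single-letter sense codes are assumptions for illustration only.
from dataclasses import dataclass

# Assumed single-letter codes for the five human senses.
SENSES = {"S": "sight", "H": "hearing", "T": "touch",
          "M": "smell", "G": "taste"}

@dataclass
class BenchmarkCode:
    test: str          # e.g. "Binet100" - Stanford-Binet, 100 questions
    form: str          # e.g. "humanoid" - robot embodiment
    senses: list[str]  # e.g. ["sight", "hearing"]

def parse_code(code: str) -> BenchmarkCode:
    """Split 'Binet100_humanoid_SH' into its three condition fields."""
    test, form, sense_str = code.split("_")
    senses = [SENSES[ch] for ch in sense_str]  # one letter per sense
    return BenchmarkCode(test, form, senses)

print(parse_code("Binet100_humanoid_SH"))
```

A machine-readable code like this would let different labs check that two
reported results were run under the same grounded conditions.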

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T244a8630dc835f49-M534a366eeb945fdb092a6a13
Delivery options: https://agi.topicbox.com/groups/agi/subscription