On Tue, Nov 18, 2008 at 11:23 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Steve, what is the purpose of your political litmus test? If you are trying
> to assemble a team of seed-AI programmers with the "correct" ethics, forget
> it. Seed AI is a myth.
> http://www.mattmahoney.net/agi2.html (section 2).

(I'm assuming you meant the section "5.1. Recursive Self Improvement")

Why do you call it a myth? Assuming that an AI (not necessarily
general) capable of software programming is possible, and that such an
AI is itself implemented in software, it's entirely plausible that it
would be able to find room for improvement in its own source code:
better time or space usage, missed opportunities for concurrency and
parallelism, improved caching, more efficient data structures, etc. In
such a scenario the AI could create a better version of itself; how
many times this process can be repeated depends heavily on the
cognitive capabilities of the AI and its performance.
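To make the "find places for improvement" point concrete, here's a toy
sketch in Python (the names and the memoization-via-lru_cache trick are
purely illustrative assumptions, not anyone's actual seed-AI design): an
optimizer that compares behaviourally equivalent variants of a routine
and keeps the faster one. The caching rewrite is exactly the kind of
mechanical improvement described above.

```python
import functools
import timeit

# Two behaviourally equivalent implementations of the same function.
# A system inspecting its own code could notice the repeated subcalls
# in the naive version and mechanically add a cache.

def fib_naive(n):
    # exponential time: recomputes the same subproblems over and over
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@functools.lru_cache(maxsize=None)
def fib_cached(n):
    # same definition, but memoized: only a linear number of distinct calls
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

def pick_faster(candidates, arg, repeat=3):
    """Crude 'self-improvement' step: check that the candidate variants
    agree on the output, then benchmark them and keep the fastest."""
    reference = candidates[0](arg)
    assert all(f(arg) == reference for f in candidates)
    return min(candidates,
               key=lambda f: timeit.timeit(lambda: f(arg), number=repeat))

best = pick_faster([fib_naive, fib_cached], 25)
print(best.__name__)  # the memoized variant wins by a wide margin here
```

Of course this only searches among hand-supplied variants; the hard part
is generating the candidate rewrites in the first place. But it shows the
measure-verify-replace loop such a system would run.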

If we move to an AGI, it would be able to come up with better tools
(e.g. compilers, type systems, programming languages), improve its
substrate (e.g. write a better OS, reimplement its performance-critical
parts on an FPGA), design better chips, etc., without even needing to
come up with new theories (i.e. there is already sufficient information
out there that, if synthesized, can lead to better tools). This would
result in another version of the AGI with better software and hardware:
reduced space/time usage and more concurrency.

One could argue that this only yields a faster/leaner AGI, and that it
would quickly run out of good ideas. But if it's truly general it
would, at least, be able to come up with all the science/tech human
beings are eventually capable of; if the AGI can't progress further,
that implies humans can't progress further either. If humans are able
to progress, then an AGI would be able to progress at least as quickly
as humans, but probably much faster (due to its own performance
enhancements).

I am really interested to see your comments on this line of reasoning.

> -- Matt Mahoney, [EMAIL PROTECTED]

Best regards,
Daniel Yokomizo


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now