Thank you Matt. I always enjoy reading your posts and comments.

On Thu, Jan 14, 2021, 7:23 AM Matt Mahoney <mattmahone...@gmail.com> wrote:

> Most people don't think about AGI. Very few of those who do believe that
> we need to worry about an unfriendly singularity.
>
> A self-improving AGI needs to acquire both knowledge and computing power,
> the two components of intelligence. Right now, that's the internet. Your
> phone already has more storage than human long-term memory (10^9 bits).
> It's not clear that self-improvement has to start at human-level
> intelligence, however that is measured.
>
> A self-improving AGI in a box (if that were possible) cannot gain
> knowledge by rewriting its own software. The worst that can happen is
> self-replicating nanotechnology. Freitas worked out the physics. The
> replication rate is maximized for bacteria-sized agents. They would be
> limited to about the same speed and power requirements as real bacteria,
> or slightly better.
> https://foresight.org/nano/Ecophagy.php
>
> Your containment strategy has to account for millions of cheap nanoscale
> 3-D printers in the hands of hobbyists and hackers. At the current rate of
> Moore's law, the internet will surpass the computing power of the biosphere
> (10^37 bits of DNA memory, 10^31 transcription operations per second) in
> the 2080s. Solar cells are already more efficient than chlorophyll, and
> could displace DNA based life. But I don't think it's something we need to
> worry about now. Nothing we could build today would be more dangerous than
> a computer virus.
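A back-of-envelope version of the projection above. The biosphere figure (10^37 bits of DNA) is from the post; the internet's current storage (~10^23 bits, on the order of tens of zettabytes) and the doubling period are illustrative assumptions, so the exact crossover year shifts with them.

```python
import math

biosphere_bits = 1e37   # DNA storage of the biosphere (from the post)
internet_bits = 1e23    # assumed current internet storage
doubling_years = 1.5    # assumed Moore's-law doubling period

# Number of doublings needed to close the gap, then the implied year.
doublings = math.log2(biosphere_bits / internet_bits)
crossover = 2021 + doublings * doubling_years
print(f"{doublings:.1f} doublings -> crossover around {crossover:.0f}")
```

With these particular assumptions the crossover lands around 2091; a slightly faster doubling period (about 1.3 years) or a larger starting estimate reproduces the post's 2080s figure, so the conclusion is sensitive to the inputs but the order of magnitude is not.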
>
> On Wed, Jan 13, 2021, 5:42 PM Mohammadreza Alidoust <
> class.alido...@gmail.com> wrote:
>
>> Dear AGI friends,
>>
>>
>> Yesterday I was notified that one of my papers was cited in a 2020
>> technical report from the Global Catastrophic Risk Institute about active
>> AGI projects. Although that paper represented only my first steps in AGI,
>> I wondered why my other paper, about my main model "AGI Brain", was not
>> cited. Also, AGI Brain is my personal project, built on many years of
>> enthusiasm for the field; I am not working on it for AGT co. or for any
>> other person or company, contrary to what is stated in that report.
>>
>> Anyway, I was surprised by how people and the media might look at AGI.
>> According to the report, AGI Brain is the only active AGI project in my
>> country, Iran, and as far as I know most people here do not even know
>> about the field. When I talk with my friends about what AGI is and about
>> my excitement and eagerness for it, they mostly get excited and say: "Oh!
>> That's fantastic! Such a great field!" And then they ask: "But... would
>> AGI be an enemy to humanity? Would it wipe out life on Earth? How could
>> you control such a thing? What are the benefits of AGI?" Some of their
>> questions leave me speechless.
>>
>> And now I wonder: how do other people around the world think about this?
>> Do they really think AGI is a global catastrophic risk? And how do you
>> answer such questions? Your answers may help me too ;)
>>
>> I suggest that AGI scientists working on ethics explain the bright side
>> of AGI to the public in a way that everybody can easily understand.
>>
>>
>> Best regards and stay safe!
>>
>> Mohammadreza Alidoust
>>
>>
>> P.S. I attached that report.
>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T86b555c591599ac6-M1a7dcbd18589df84b65b0184
Delivery options: https://agi.topicbox.com/groups/agi/subscription