We need to understand what the risks of AI are. How far off is self
replicating nanotechnology? Robert Freitas worked out the physics of
nanotechnology. The fastest possible replicators would be about the
size and speed of bacteria, consistent with the Landauer limit of about
3 x 10^-21 J per bit operation at room temperature, the work needed to
move atoms. The 10^44 carbon atoms in the biosphere encode 10^37 bits of
DNA with a replication cycle time of about 5 years, which works out to
10^29 copy operations per second. RNA and protein transcription adds
another 10^31 operations per second.

Global storage capacity on all the world's computers is 10^24 bits,
doubling every 3 years. At that rate it will take about 130 years for
nanotechnology to catch up with DNA-based life. It will win because
plants currently capture only 500 TW (0.6%) of the 90,000 TW of solar
power available at ground level for photosynthesis. (By comparison,
global energy consumption is 18 TW and global human food consumption is
0.8 TW.) Solar panels are already 20-30% efficient.
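The 130-year figure is just the doubling math:

    # Doublings needed to go from 10^24 to 10^37 bits, at 3 years each.
    import math
    doublings = math.log2(1e37 / 1e24)        # ~43 doublings
    print(f"{doublings * 3:.0f} years")       # ~130 years

    # Plants' share of ground-level solar power.
    print(f"{500 / 90_000:.1%}")              # ~0.6%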

The risk here is uncontrolled self replicators made by hobbyists with
cheap nanoscale 3-D printers, and likewise lethal engineered pathogens,
which might be available earlier. These will be difficult to make, and
especially difficult to test in secret without killing lots of people.
A 100% fatal virus will be much harder to create and test than a 99.9%
fatal virus.

I think the more immediate risk is humans using AI to impersonate
people for personal gain, not uncontrolled AI. AI doesn't have feelings
or goals unless we program them in. Self preservation is the result of
billions of years of evolution. It doesn't just emerge from an LLM that
passes the Turing test; you have to code it. Our DNA has the same
information content as about 300 million lines of code.
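One rough way to get that figure (my assumptions here: about 2 bits per
base pair and a few tens of bits of information per line of code;
neither number is precise):

    # ~6e9 base pairs at 2 bits each, vs ~40 bits per line of code.
    genome_bits = 6e9 * 2
    bits_per_line = 40
    print(f"{genome_bits / bits_per_line:.0e} lines")  # ~3e8, i.e. 300 million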

Altman explained none of this to Congress, just that there is some
nebulous risk of doom. All that will come of it is regulation that
doesn't help anyone.


On Thu, May 18, 2023, 2:21 AM <[email protected]> wrote:

> https://www.youtube.com/watch?v=6r_OgPtIae8
>
> Interesting times! Somewhere in this or another video he asks for, or
> suggests, a license to work on AGI.
>
> It is true, I think, that scale is needed to make sure your algorithm
> scales... maybe? But I don't think regulations work for AGI. As Matt
> said, we only need a GB or so of text to make AGI, and furthermore,
> testing how well your AI predicts can be done on 10% or 1% of that,
> i.e. using just 40 MB of text. That can be trained in minutes or hours.
