On Saturday, June 03, 2023, at 7:17 AM, Matt Mahoney wrote:
> The alignment problem is not aligning AI to human values. We know how to do 
> that. The problem is aligning human values to a world where you have 
> everything except the will to live.

I still don't think having everything makes you sad. And while I do think the 
new "great" world we are about to enter will isolate us almost totally, I think 
the AIs will pull us out of that before we, well, die or get really lonely.


On Saturday, June 03, 2023, at 7:17 AM, Matt Mahoney wrote:
> Vision and robotics will require more work.

This is true: text and image AI right now "seems" perfected, while video AI is 
"clearly" not human level yet. Judging from how bad video AI was in 2020 and 
how fast it has improved since, it seems clear we will get to HD-movie quality 
and complex prompt handling by 2026 if progress stays exponential. If we ignore 
the exponential (which would throw our timelines off, since progress really has 
been compounding) and assume a linear pace instead, then around 2029.
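
Here's a rough sketch of the two readings in Python; the quality numbers are 
invented just to make the arithmetic visible, not measurements of anything:

import math

# Toy extrapolation, purely illustrative. Call "HD movie level" 1.0 and
# *guess* video AI sat at 0.25 in 2020 and 0.50 in 2023 (invented numbers).
q2020, q2023, target = 0.25, 0.50, 1.0

# Exponential read: quality doubled in 3 years, so assume it keeps doubling.
growth = (q2023 / q2020) ** (1 / 3)            # ~1.26x per year
t_exp = 2023 + math.log(target / q2023) / math.log(growth)

# Linear read: same two points, fixed increment per year.
slope = (q2023 - q2020) / 3                    # ~0.083 quality per year
t_lin = 2023 + (target - q2023) / slope

print(f"exponential: ~{t_exp:.0f}")            # ~2026
print(f"linear:      ~{t_lin:.0f}")            # ~2029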


On Saturday, June 03, 2023, at 7:17 AM, Matt Mahoney wrote:
> We also need to collect 10^17 bits of human knowledge at a cost of a few 
> cents per bit

Maybe for now it is that slow, yes. But in the near future AGIs will generate 
their own data much faster than we can, so data won't stay limited the way it 
is for humans, and neither will speed, nor the number of AIs in existence (at 
some theoretical future moment, say a billion come online). Same for energy. I 
don't believe Moore's law will continue at its current slow pace once AIs 
arrive.
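
For scale, Matt's own figure is worth multiplying out. Reading "a few cents" 
as 1 to 3 cents per bit (my reading, not his exact number):

bits = 10**17
for cents in (1, 3):
    dollars = bits * cents / 100               # cents -> dollars
    print(f"{cents} cent(s)/bit: ${dollars:.1e}")
# prints $1.0e+15 to $3.0e+15

That's on the order of a quadrillion dollars if humans do the collecting, 
which is exactly why AGIs making their own data changes the picture.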

Also, what about the software version of Moore's law? It's faster than 
hardware Moore's law, OpenAI said. I believe they called it algorithmic 
efficiency.
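
If I'm remembering the right result, it's OpenAI's "AI and Efficiency" report: 
the compute needed to reach AlexNet-level performance fell about 44x from 2012 
to 2019. Working out the implied doubling time (the 44x-over-7-years figure is 
theirs; the 24-month Moore's law doubling is the usual rule of thumb):

import math

# OpenAI's "AI and Efficiency": ~44x less compute needed for
# AlexNet-level performance over 7 years (2012-2019).
gain, years = 44, 7
months = 12 * years / math.log2(gain)          # efficiency doubling time
print(f"doubles every ~{months:.0f} months")   # ~15-16 months

# Compare growth per decade: software vs a 24-month hardware doubling.
print(f"per decade: ~{2 ** (120 / months):.0f}x vs ~{2 ** (120 / 24):.0f}x")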