I didn't think this would come so soon; it's still only 2022.

https://www.nature.com/articles/s41598-021-03938-w

TLDR (based on my read; I mostly paid attention, it wasn't just a skim):
They trained a GAN on the celeb face dataset and used it to generate face images,
which they showed to human subjects (~2000 images, test set first, then train set).
They then used this MRI headset data to train another GAN to learn the relation
brain waves <> latent space of the already-trained GAN. With this, they could
generate pictures of the face a person was thinking of (or was it seeing?), and
the reconstruction matches the original very closely; and since it is what the
subject had in mind, they basically made art with their brains. Yes, they triggered
a similar latent point by thinking of the face, but if you think of a nearby,
slightly different latent, you get a slightly different face, under your control!
Now scale this up 1000x and we maybe get DALL-E 2 for brains, crazy mind uploading,
and gods sharing their visions wirelessly to another person through their Google
Glass, then back out to the sender, and repeat.
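
Roughly, the pipeline as I understood it, as a toy sketch. Everything below (the latent size, the feature dimensionality, the synthetic data, the Ridge regression) is my own stand-in for illustration, not the paper's actual models or data:

```python
# Toy sketch: learn a mapping from brain-signal features to the latent space
# of a pretrained face GAN, then decode held-out trials back into latents.
# All data here is synthetic; the real paper's recordings and models differ.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

LATENT_DIM = 512    # latent size of a hypothetical pretrained face GAN
FEATURE_DIM = 128   # dimensionality of features extracted from brain signals
N_TRIALS = 2000     # ~2000 face images shown to subjects, per the post

# Stand-in for the latents that generated the faces shown to subjects.
latents = rng.standard_normal((N_TRIALS, LATENT_DIM))

# Stand-in for brain responses recorded while viewing each face, synthesized
# as a noisy linear readout of the latents so the regression can recover them.
true_readout = rng.standard_normal((LATENT_DIM, FEATURE_DIM))
brain_features = latents @ true_readout + 0.5 * rng.standard_normal((N_TRIALS, FEATURE_DIM))

# Learn the mapping brain signals -> GAN latent space on the training split.
X_train, X_test, z_train, z_test = train_test_split(
    brain_features, latents, test_size=0.2, random_state=0
)
decoder = Ridge(alpha=1.0).fit(X_train, z_train)

# Decode latents for held-out trials; with a real generator these would be
# rendered back into face images.
z_decoded = decoder.predict(X_test)
print("latent reconstruction corr:",
      np.corrcoef(z_decoded.ravel(), z_test.ravel())[0, 1].round(3))

# With a real pretrained generator G, the reconstructed face would be:
# image = G(z_decoded[0])   # hypothetical call; G is not defined here
```

The point of the sketch is just that once the brain-to-latent mapping exists, every downstream trick the GAN supports (interpolating latents, nudging toward a nearby face) becomes available to the person doing the thinking.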