In this thought experiment, I take a look at how the brain uses input and 
output from its body.


Let's imagine we have a brain in a body, stationed in a large laboratory 
that supports its research and exploration. The reason it is not stationed in a 
normal home is that technological advances come from experiments and data 
processing, not from sleeping, eating, and sitting on home furniture.


The lab and body won't intelligently move on their own, so let's direct our 
focus to the brain now.


The brain cannot make any non-random output decisions yet, as it has had no 
input experiences. To produce better output it needs input first. So the first 
thing we can do is either feed it lots of diverse image/text data, or let it 
try to walk forward and receive data about which motor actions produced motion. 
So far so good: we just had it store its first experiences.


Up to now the brain has only received random input data (note that the 
real-world data isn't random, but the choice of source is) from all sorts of 
sources. It didn't decide where to collect data from, as it couldn't output 
any non-random decisions.


Now our brain can decide to tweak its walking skills to further improve its 
speed, or it can decide to collect text/image data from certain sources, such 
as particular websites or particular real-life experiments. For example, it 
may be trying to invent better storage devices and wants to see if its 
predictions are correct, or it may simply want to collect data from those 
sources. Testing its predictions is also data collection, because it boosts 
the probabilities of its already-existing beliefs.
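The claim that testing a prediction "boosts a belief's probability" can be sketched with Bayes' rule. This is only an illustration; the specific numbers (a prior of 0.6, the two likelihoods) are made up for the example.

```python
# A minimal sketch of how a positive experiment updates an existing
# belief's probability via Bayes' rule. All numbers are hypothetical.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(belief | evidence) given a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# The brain believes its storage-device design works with probability 0.6.
# The experiment comes out positive; positive results are likely (0.9)
# if the design works and unlikely (0.2) if it doesn't.
posterior = bayes_update(prior=0.6, p_evidence_if_true=0.9,
                         p_evidence_if_false=0.2)
print(round(posterior, 3))  # -> 0.871
```

The posterior (about 0.87) is higher than the prior (0.6), which is exactly the "boost" described above.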


The trend here seems to be that as it collects data, it gets more precise 
about where to collect it from. Without output from the brain, the brain 
could never collect data from specific areas. The brain is learning where to 
collect data from.
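"Learning where to collect data from" resembles what machine learning calls active learning. Here is a toy sketch under invented assumptions: three hypothetical data sources with unknown usefulness rates, and a learner that always queries the source it is currently most uncertain about.

```python
# A toy sketch of learning where to collect data: query whichever
# hypothetical source has the widest (most uncertain) posterior.
# Source names and usefulness rates are invented for illustration.

import random

random.seed(0)

# True probability (unknown to the learner) that each source yields
# useful data.
true_rates = {"website_a": 0.8, "website_b": 0.5, "lab_experiment": 0.3}

# Beta(1, 1) prior per source, stored as [useful_count, useless_count].
counts = {name: [1, 1] for name in true_rates}

def most_uncertain(counts):
    # Variance of a Beta(a, b) posterior; pick the source we know
    # least about.
    def var(a, b):
        return a * b / ((a + b) ** 2 * (a + b + 1))
    return max(counts, key=lambda n: var(*counts[n]))

for _ in range(100):
    source = most_uncertain(counts)
    useful = random.random() < true_rates[source]
    counts[source][0 if useful else 1] += 1

# Posterior mean estimate of each source's usefulness.
estimates = {n: a / (a + b) for n, (a, b) in counts.items()}
print(estimates)
```

After enough queries the estimates approach the true rates, i.e. the learner has become "more precise about where to collect data from".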


The two uses the brain has for output are 1) specific data collection, and 2) 
implementing solutions, e.g. bringing a new product to market and seeing the 
mission completed (which is also data collection).


Our brain, if it had a solution to a big problem on Earth with all road-bumps 
covered, could of course just tell us. It wouldn't absolutely require a body 
to implement a "plan".


Coming up with a *new* plan is done in the brain, and it needs a lot of 
on-topic data too. The output of the brain to the body is just to collect 
more desired data.


What is most interesting is that when you have a lot of data, you can make a 
lot of connections in it and generate new data by using a method called 
induction.
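A minimal illustration of what "generating new data by induction" can mean: infer a general rule from a handful of observations, then use the rule to produce data points that were never observed. The pattern here is deliberately trivial (a linear rule), and the numbers are invented.

```python
# Induction in miniature: infer a rule from observed pairs, then
# generate "new" data beyond what was seen.

observations = [(1, 3), (2, 5), (3, 7), (4, 9)]  # hidden rule: y = 2x + 1

# Induce slope and intercept from two observations...
(x0, y0), (x1, y1) = observations[0], observations[1]
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0

# ...and check the induced rule against the remaining observations.
assert all(y == slope * x + intercept for x, y in observations)

# Generate data points the "brain" never directly observed.
predicted = {x: slope * x + intercept for x in (10, 100)}
print(predicted)  # -> {10: 21.0, 100: 201.0}
```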


So what do you think about the idea that we could make a powerful AGI without 
a body? I mean, it would still have output to talk to us and input from 
various websites and experiments it asks us to do, but it wouldn't need a LOT 
of real-life experiments if it has a lot of data, because at that point it 
can mostly generate its own data and fill in gaps.


So most of its output in that case would be either a solution to a problem or 
a few requests for where to collect new data from, if its solution isn't 
ready yet. Other than that, it would mostly be doing induction internally 
using huge amounts of data. After all, experiments are only for collecting 
data, and we can give it lots even if not from precise tests.


My point here is that AGI creates new data and is an induction engine, and it 
works better with huge amounts of diverse data as well as on-topic data. 
That's all its input does: provide data. The output is to collect certain 
data. But AGI *needs* to generate *new* data using all this data, and/or find 
part of its solutions in data. For example, finding a device in nature or 
from an advanced civilization would be a solution that eliminates many 
sub-goals. It could read about it in data too, if it trusts that data.


In that sense, AGI is about re-sorting existing features/data into new data, 
devices, or skills. To do that it needs a lot of data. AGI generates the 
future using a lot of context.


What do you think? Can we sort of get away without a body, and just make AGI 
in the computer and have it talk to us using its text/vision thoughts? And 
can we get away without doing lots of specific experiments from the right 
locations and times, and just use the slightly-more-random big data? To me it 
appears the answer is yes. The AGI could still extract/form new answers from 
the large data. You know how prediction works, right? You can answer unseen 
questions or fill in holes in images. So AGI can just 'know' answers to 
things using related data. And what if it could just watch microscopic data 
and do all sorts of random experiments to see what "happens" and build a 
better model!? It is true that brute force is not tractable in our case, but 
it's an idea.
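The "hole filling" idea above can be shown in one dimension: a missing value in a sequence is predicted from its neighbors, the way related data lets a model 'know' an answer it never saw directly. This is only a toy sketch with made-up numbers, not a real imputation method.

```python
# A toy version of hole filling by prediction: estimate a missing
# value from its neighbors. Numbers are invented for illustration.

sequence = [10, 20, None, 40, 50]

def fill_holes(seq):
    filled = list(seq)
    for i, v in enumerate(filled):
        # Only interior holes with two known neighbors are handled
        # in this minimal sketch.
        if v is None and 0 < i < len(filled) - 1:
            filled[i] = (filled[i - 1] + filled[i + 1]) / 2
    return filled

print(fill_holes(sequence))  # -> [10, 20, 30.0, 40, 50]
```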



------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf81a0714b774beb2-M69e9def5bc1db5a088058c2d
Delivery options: https://agi.topicbox.com/groups/agi/subscription