I've been trying to get to this all day.

"The greater the generalization of a network, the more entangled its neurons 
and as a consequence the network becomes less interpretable.". The article says 
our brain net is too complex to look at and should have it talk to us instead. 
And cast from the high dimensionality of it more intuitive shadows, like 4D to 
3D, like how a 3D block casts 2D shadow.
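
To make the shadow analogy concrete, here's a minimal sketch (my own 
illustration, not from the article): dropping a coordinate is the literal 
shadow of a 3D point cloud, and PCA via SVD is the data-driven version that 
picks the most informative shadow.

    import numpy as np

    # Literal "shadow": project 3D points onto the floor by dropping z.
    points_3d = np.random.rand(100, 3)
    shadow_2d = points_3d[:, :2]      # orthographic projection, z discarded

    # Data-driven shadow: PCA keeps the 2 directions the cloud varies
    # along most, giving the most "intuitive" 2D view of the 3D data.
    centered = points_3d - points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pca_shadow = centered @ vt[:2].T  # 100 points, now 2D
    print(shadow_2d.shape, pca_shadow.shape)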

Well, a more accurate "model" of a neural network, a car, or any system is the 
smallest one that still explains most of the data. It is general and 
multi-purpose: it solves all sorts of problems because they are really the 
same problem. To do this, you need to merge patterns, to e-merge. Merging 
creates hierarchies, clusters of clusters, because whatever you merge can be 
used for other merges. For example, you never store the same word feature more 
than once, and that single copy is used to build a phrase feature, a semantic 
feature, or a task-sequence feature. If you had a real-life machine (a motor, 
funnel, brush, arm, bag, electricity line, etc.), you'd make hierarchies and 
re-use parts: a vacuum, a hammer, or a man can do all sorts of things, and a 
man and a hammer could make up a starship, a house, a car, or a new machine 
(man + hammer = new machine). Sequences get re-used too: a man with a hammer 
builds a wall, then lifts it up a story, and that combines with "drive off and 
get more material" or "wait while they take it for an hour, so drive over here 
and do another job." The most powerful things are the longest-living 
technology, and that is the one that is small but deadly, like an advanced 
nanobot squad unit.
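
Here's a toy sketch of that merging idea in Python (all names like 
FeaturePool are mine, not from any real system): every sub-pattern is stored 
exactly once, and higher-level features are just references to lower-level 
ones, which is the clusters-of-clusters structure described above.

    class FeaturePool:
        """Stores each pattern once; merges reuse the stored copies."""
        def __init__(self):
            self._pool = {}  # canonical pattern -> its single stored copy

        def intern(self, pattern):
            # Store a pattern only if it has never been seen before.
            if pattern not in self._pool:
                self._pool[pattern] = pattern
            return self._pool[pattern]

        def merge(self, *parts):
            # Build a higher-level feature out of already-interned parts.
            return self.intern(tuple(self.intern(p) for p in parts))

    pool = FeaturePool()
    cat = pool.merge("c", "a", "t")  # word feature, stored once
    sat = pool.merge("s", "a", "t")  # shares the "a" and "t" entries
    cat_sat = pool.merge(cat, sat)   # phrase feature reuses both words
    print(len(pool._pool))           # 7 unique entries total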

This entangling is merging, but it doesn't make the network uninterpretable. 
It makes all the data make sense, if viewed correctly! That's what it is, 
after all: a model. The reason it seems messy to you is simple. Take a video, 
a 3D car, or a trie like https://ibb.co/p22LNrN, store it on a computer disk, 
and look at the raw contents: it won't make any sense unless you know how to 
view it! The same goes for DNA: we look at it and see strings, but we aren't 
looking at it correctly.
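
A trivial demonstration of the "right viewer" point (pickle is just a 
stand-in for any on-disk format, and the trie literal is my own toy example):

    import pickle

    # A tiny trie: "car" and "cat" share the "ca" prefix, stored once.
    trie = {"c": {"a": {"r": {}, "t": {}}}}

    raw = pickle.dumps(trie)  # what the disk actually holds
    print(raw)                # prints binary noise unless you know the format
    print(pickle.loads(raw))  # same bytes through the right viewer: the trie
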
My AGI is not only 100% understood in how it works and learns; after learning, 
the WHOLE network can be observed to see what it's thinking, what it wants to 
do, what it knows, how its task sequences are functioning, etc. Where you guys 
are at is this: you don't know how your AI works, you just use backprop! No 
wonder you can't read it or know how to make it. *You aren't making the REAL 
bread, you're making FAKE bread. I actually KNOW how to make a good NN without 
backprop.*