Is the following correct about Backprop? I have an alternative to Backprop and am trying to compare them. Let's do an example. Suppose our dataset is "1, 4, 9, 16, 25, ?" and, after training on the other data, it now has to predict the ?. By then it will already have backpropagated and hopefully learnt the function, right? That is, we get 1, 4, 9, 16, 25 because of the function x^2: 1^2=1, 2^2=4, 3^2=9, 4^2=16, 5^2=25. So when it sees the ?, it might now predict that the next value should be 36. Right? Backprop is learning the function from the data it has seen, then
applying it to the context?

One more question now: how, intuitively, does Backprop find this algorithm, x^2? I know it adjusts weights, but how exactly can it find it? Please explain intuitively. In a few months I will be building my alternative so that it does this naturally: it will brainstorm likely algorithms, then generate the sequence shown, and hence the ? too. So if Backprop doesn't naturally find the rule, I'd be surprised how it finds the answer at all, because its only way in is the examples it sees, which are "connected" by the rule that generates them. It's pattern matching/learning. My alternative (if it works) can not only explain how it discovered the function/algorithm behind the observed data, but also why it generates its answers, including unseen ones.
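To make the first part concrete, here is a toy sketch of my own (an assumption for illustration, not anyone's actual model): fit y = a*x^2 + b*x + c to the pairs (1,1)...(5,25) by gradient descent, the same weight-update rule backprop uses. Note I deliberately use a plain polynomial model rather than a neural net, because a generic net is only guaranteed to fit well inside the training range; with this model the fitted weights extrapolate to x=6 exactly.

```python
# Toy illustration: gradient descent fits y = a*x^2 + b*x + c to the
# observed sequence, then extrapolates to x = 6.
data = [(x, x * x) for x in range(1, 6)]   # (1,1) ... (5,25)

a = b = c = 0.0          # start knowing nothing
lr = 5e-4                # small learning rate for stability

for _ in range(100_000):
    ga = gb = gc = 0.0
    for x, y in data:
        err = (a * x * x + b * x + c) - y   # prediction error on this point
        ga += 2 * err * x * x               # dLoss/da
        gb += 2 * err * x                   # dLoss/db
        gc += 2 * err                       # dLoss/dc
    a -= lr * ga
    b -= lr * gb
    c -= lr * gc

print(a, b, c)               # converges toward 1, 0, 0
print(a * 36 + b * 6 + c)    # prediction for x = 6, close to 36
```

So yes: after training, the weights encode (an approximation of) the generating rule, and applying them to a new input is what produces the 36.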
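On the intuition question, the honest answer is that backprop never "looks for" x^2 symbolically. It measures how the total error would change if each weight moved slightly (the gradient), then nudges every weight downhill. If the model is capable of representing x^2, the bottom of that error surface *is* the x^2 rule, so repeated nudges roll the weights into it. A minimal sketch with a single hypothetical weight, y = w * x^2, where the error minimum sits exactly at w = 1:

```python
# One-weight model y = w * x^2 (hypothetical, for intuition only).
# The gradient says which direction reduces the total squared error;
# each update nudges w that way, and w rolls down to 1 (i.e. y = x^2).
data = [(x, x * x) for x in range(1, 6)]

w, lr = 0.0, 5e-4
for step in range(5):
    grad = sum(2 * (w * x * x - y) * x * x for x, y in data)  # dLoss/dw
    w -= lr * grad                                            # downhill nudge
    loss = sum((w * x * x - y) ** 2 for x, y in data)
    print(f"step {step}: w = {w:.6f}, loss = {loss:.3e}")     # loss shrinks each step
```

The point for comparing with your alternative: backprop discovers the rule only implicitly, as a by-product of error minimisation over the examples, which is why it cannot explain *why* the answer is 36 the way an explicit algorithm-search could.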
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfe956a9448c609fd-Ma3ae67fa765f6413ac426dc0