I know I've probably asked this before, but we still have a problem.

See the attached image. We have a tree on the left, or rather a hierarchy on
the right. Given a prompt/sentence, you can find multiple matches of varying
lengths in the tree or hierarchy, e.g. 'th', 'the', 'the ', 'the c',
'the ca', and 'the cat'. You combine their probabilities to get one
probability distribution over the next letter. It gives OK predictions. The
weights are just counts on the connections, updated on each access. This
algorithm is called Prediction by Partial Matching (PPM), which works like a
Markov chain with backoff. It just tells you the probability of each
possible next letter, i.e. what it probably is.
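For concreteness, here is a minimal sketch of that blending idea in Python.
The "tree" is just a dict mapping each context string to counts of the
letters that followed it, and the combination step is a simple
length-weighted linear interpolation. Note the weighting scheme is my own
assumption for illustration; real PPM combines the orders via escape
probabilities rather than a linear blend.

from collections import defaultdict

class PPMPredictor:
    """PPM-style next-character predictor (simplified blend)."""

    def __init__(self, max_order=5):
        self.max_order = max_order
        # counts[context][ch] = how often ch followed context in training
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Record every context of length 0..max_order ending before position i.
        for i in range(len(text)):
            for order in range(self.max_order + 1):
                if i - order < 0:
                    break
                self.counts[text[i - order:i]][text[i]] += 1

    def predict(self, prompt):
        """Blend the distributions from all matching context lengths."""
        blended = defaultdict(float)
        total_weight = 0.0
        for order in range(min(self.max_order, len(prompt)) + 1):
            context = prompt[len(prompt) - order:] if order else ""
            followers = self.counts.get(context)
            if not followers:
                continue
            n = sum(followers.values())
            weight = order + 1  # longer matches count more (assumption)
            total_weight += weight
            for ch, c in followers.items():
                blended[ch] += weight * c / n
        if total_weight == 0:
            return {}
        return {ch: p / total_weight for ch, p in blended.items()}

predictor = PPMPredictor(max_order=5)
predictor.train("the cat sat on the mat. the cat ran.")
dist = predictor.predict("the ca")
print(sorted(dist.items(), key=lambda kv: -kv[1])[:3])

Running this, the prompt "the ca" matches contexts "", "a", "ca", " ca",
"e ca", and "he ca", and the blended distribution comes out heavily in
favor of 't', as you'd expect from the training text.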

So, why would you want to use a *backprop* NN instead? Because it gives
better results? But *why* does it give better results? Is it doing
semantics!? Or something else? What exactly makes it predict better? You
can't just say "because it does"; if it gives better results, there must be
some actual mechanism at work.

I will be back later today.
