First of all, anyone making an AI can, should, and probably does know how 
their code works; they should know what their backprop, vectors, etc. are for 
and what results they produce. This is obvious, but it needs to be said.

Next, they all also should, and probably do, know why backprop, word vectors, 
and negative sampling work and are useful, though I'll agree that here they 
sometimes don't fully know why, even when they invented it.

Deeper: why and how does backprop, or word2vec, or positional encoding, etc. 
predict the next letter or sub-word? How does it deal with long-range 
dependencies, or does it deal with them at all? What I mean is: why does it 
help? BPE helps avoid matching on rare contexts in memory, but do they 
actually SAY this? Or do they just say "we used BPE, it helped"? Looks like 
the latter to me! This is where they all fail pretty badly, judging by how 
they talk and how they can't explain GPT etc. with ease, but me and some 
others are on top of this for real.
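To make the BPE point concrete, here is a minimal sketch of the merge loop (my own toy illustration, not any real tokenizer): repeatedly fuse the most frequent adjacent symbol pair into one token, so frequent spellings become single units while rare ones stay split into smaller pieces.

```python
from collections import Counter

def bpe_merges(word_counts, num_merges):
    # word_counts: {word as a tuple of symbols: corpus count}
    vocab = dict(word_counts)
    merges = []
    for _ in range(num_merges):
        # count every adjacent symbol pair, weighted by word frequency
        pairs = Counter()
        for word, count in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # rewrite every word with the winning pair fused into one symbol
        new_vocab = {}
        for word, count in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] = count
        vocab = new_vocab
    return merges, vocab

# tiny made-up corpus: "low" x5, "lower" x2
merges, vocab = bpe_merges({("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2}, 2)
```

After two merges the frequent word "low" is a single token while the rarer "lower" is still split, which is exactly the "frequent stuff gets its own unit" behavior I'm talking about.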

Next, to the bloody details: no one can really look at all the things that 
helped predict the next letter, word, or sentence. You can ask the AI to 
explain itself, but that isn't needed much of the time.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T73cb0deded02df8c-Md91cc7ac0f60be0cf9433afd
Delivery options: https://agi.topicbox.com/groups/agi/subscription
