On Saturday, August 14, 2021, at 1:20 PM, immortal.discoveries wrote:
> You must not know how GPT nor my AI works then.

How about keeping this discussion on a conceptual level? I explain my 
objections to all statistical / perceptron-based methods in the "Comparison to 
ANN and BNN" section.

On Saturday, August 14, 2021, at 1:20 PM, immortal.discoveries wrote:
> Recency priming is a very important part for most prediction problems

I call it proximity rather than recency, because it doesn't have to be 
temporal. Yes, it should determine the order and limits of search, but it 
should be implemented as a positional hyperparameter, adjusted by feedback. The 
parameter itself is not task-specific, so it's not RL.
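A minimal sketch of that idea: search candidates in order of positional proximity, with the search radius as a hyperparameter that feedback widens or narrows. All names (`proximity_search`, `adjust_radius`, `pos`) are illustrative, not from the original.

```python
# Hypothetical sketch: proximity-ordered search with a feedback-adjusted
# positional hyperparameter. Names and data layout are assumptions.

def proximity_search(inputs, anchor, radius):
    """Rank candidates by positional proximity to the anchor, then keep
    only those within the current search radius."""
    ranked = sorted(inputs, key=lambda x: abs(x["pos"] - anchor["pos"]))
    return [x for x in ranked if abs(x["pos"] - anchor["pos"]) <= radius]

def adjust_radius(radius, avg_match, target_match, step=1):
    """Feedback: widen the search when matches are strong, narrow it when
    they are weak. The parameter is positional, not task-specific."""
    return radius + step if avg_match > target_match else max(1, radius - step)
```

The proximity here is generic position (spatial, temporal, or otherwise), so the same loop covers recency as a special case.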

On Saturday, August 14, 2021, at 1:20 PM, immortal.discoveries wrote:
> The most important IS reward for prediction,

You / they / all statistical crap needs this "reward for prediction" because 
predictive value is not quantified bottom-up. In a comparison-first paradigm, I 
quantify it as match; see the "Atomic comparison" section. So I can select for 
it incrementally, instead of waiting for ridiculously coarse feedback. The 
whole "self-supervised" mindset is a crutch: they use RL because their core 
unsupervised method (the perceptron) is a cripple.
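A minimal sketch of the comparison-first alternative, assuming match is quantified as the shared magnitude of two comparands (their min), with the difference as a separate derivative. This is an assumption standing in for the "Atomic comparison" section, which the sketch does not reproduce; `select_incrementally` and the threshold are illustrative.

```python
# Hypothetical sketch: predictive value quantified bottom-up per comparison,
# rather than propagated top-down as a coarse reward.

def atomic_compare(a, b):
    match = min(a, b)   # shared quantity: assumed bottom-up predictive value
    diff = a - b        # miss: what the weaker comparand does not predict
    return match, diff

def select_incrementally(pairs, threshold):
    """Keep comparand pairs whose match clears a threshold, comparison by
    comparison, with no end-of-task feedback signal."""
    return [(a, b) for a, b in pairs if min(a, b) > threshold]
```

Because match is computed per comparison, selection can happen at every step, which is the point of the contrast with reward-driven feedback.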

On Saturday, August 14, 2021, at 1:20 PM, immortal.discoveries wrote:
> We are not even talking about text or vision cons & pros,

You are talking about and working on text, so I suggest that you work on your 
intuition instead.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-M84d7a8c3fcd2768551f2813e
Delivery options: https://agi.topicbox.com/groups/agi/subscription