On Saturday, August 14, 2021, at 6:00 PM, immortal.discoveries wrote:
> ...it would be better if you used more common words and examples to explain what you're seeing visually.
This happens to be the most abstract subject ever, which means you need to think in terms of definitions, not examples. For example, you keep talking about patterns, but can you formally define what a pattern is? No, you can't, because you always think in terms of text, which is a killer. The only way to make sense of it is to define everything bottom-up, which means pixels-up.

On Saturday, August 14, 2021, at 6:00 PM, immortal.discoveries wrote:
> But I'm not using supervision or labeling, just unsupervised learning. The reward for text prediction is not what you think of when you read RL, it is merely a way to make it AGI-like by making it talk about certain features more often than others.

Don't you see how fuzzy your explanation is? RL is never unsupervised. Maybe self-supervised, but not unsupervised.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-Md89c6af7f1c27c8fd7e459a5
Delivery options: https://agi.topicbox.com/groups/agi/subscription
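To make the self-supervised point concrete, here is a minimal toy sketch (my own hypothetical example, not anyone's actual system): next-token prediction manufactures its own targets from the raw sequence, so there are labels, just not human-provided ones, which is why "unsupervised" is the wrong word for it.

```python
# Toy illustration: self-supervised next-token prediction derives its
# (input, target) training pairs from the data itself -- no human labels,
# but each example still has a supervisory target.

def self_supervised_pairs(tokens):
    """Build (context, target) pairs where each target is the next token."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

tokens = ["the", "cat", "sat", "down"]
pairs = self_supervised_pairs(tokens)
# pairs[0] is (["the"], "cat"): the "label" came from the sequence itself.
```

Every pair's target exists before any model or reward signal is introduced; an RL setup would additionally need a reward assigned to the model's own outputs.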
