On Friday, May 22, 2020, at 4:44 AM, immortal.discoveries wrote:

> And my approach to AGI is not wrong. AGI needs not just more existing
> data/compute but a smarter discovery extractor/generator to create NEW
> desired data for the questions it holds (duplicating old data with
> mutations). The output of AGI is only to implement plans or to update
> where to collect data from; those silly RL walker robots do this, and
> GPT-2 should too if we improve it to do so. It is specializing in where
> to collect new data from: which question, which source. I therefore
> don't really need a body for my AGI. Output is just for implementation
> or for data-collection specialization updates.

For example, my algorithm, made from scratch, compresses the enwik8 dataset (100 MB) to 21.8 MB, which means it predicts fairly well, and my net predicts better the more data it sees; if I used the dataset enwik2 (100 bytes, lol), it would compress it to only, say, 70 bytes. Get it?
So, with the same 100 MB enwik8 dataset, how can I predict better if I don't have more data? Add more data. WHAT!? Yeah. Let me show you. When you find discoveries in the enwik8 dataset, e.g. cat=dog by shared contexts, you can recognize longer unseen sentences more robustly, and more! The world's best compressor can get enwik8 down to 14.8 MB. See?

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tb12d9c575ab02557-M79500de30a5479ab42ab7bd6
Delivery options: https://agi.topicbox.com/groups/agi/subscription
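P.S. The "cat=dog by shared contexts" idea above is standard distributional similarity, and can be sketched in a few lines. This is not the post's actual code; the toy corpus, window size, and function names are invented for illustration. Two words that occur in the same contexts end up with a high cosine similarity between their context-count vectors.

```python
import math
from collections import Counter

def context_vector(tokens, word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    vec = Counter()
    for i, t in enumerate(tokens):
        if t == word:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vec[tokens[j]] += 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = ("the cat ate food . the dog ate food . the cat ran home . "
          "the dog ran home . the sky is blue . the sky is wide .").split()

cat, dog, sky = (context_vector(corpus, w) for w in ("cat", "dog", "sky"))
print(cosine(cat, dog))  # high: cat and dog share contexts
print(cosine(cat, sky))  # low: sky occurs in different contexts
```

Once cat and dog are linked this way, an unseen sentence about dogs can be matched against seen sentences about cats, which is the "recognize longer unseen sentences more robustly" claim.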
