On Thursday, December 23, 2021, at 10:50 PM, James Bowery wrote:
> Where is Google's big investment in sparse matrix multiplication hardware,
> for example? Why the reliance on dense models when it is known that's not
> how the brain works?

I don't know why, but at least they are getting close.
On Thursday, December 23, 2021, at 10:50 PM, James Bowery wrote:
> And why the emphasis on "big" models when it is known that minimizing the
> size of the algorithm that outputs the data is the optimal model selection
> criterion? Might it have anything to do with trying to HIDE that ground
> truth of AGI by throwing vast amounts of money around that they shouldn't
> have in the first place?

Yes, a small model with better accuracy is the goal, and adding more data also makes the AI better. In Lossless Compression, one has a fixed dataset size of 1 gigabyte ( :p hehe), and the contestants must make their AI better, resulting in better prediction and thus a smaller COMPRESSED ENVELOPE (the algorithm code size, the ANN size if any, and the correction code that lets decompression recover from mistakes in the brain model; P.S. the model is a BRAIN, and a brain can't store ALL the details folks, only the key aspects).

So I think you overlooked this, James: in the LC contest it "looks" like you want to achieve a small envelope, yes. But if contestants were allowed to use 10 GB or 100 GB of text or other data types, then suddenly the goal is not just the smallest envelope, but the biggest envelope that has the best prediction accuracy for its size, or simply the best AI predictor. So bigger means better, but it is not the only way to improve AI. However, for the AI's evaluation, one does not need more than 1 GB, or even 100 MB, to measure Perplexity or Lossless Compression. This is why I say in my Guide to AGI that big data and a better recognizer make a smarter AI, but the evaluation only has to use 100 MB for now. So if someone asks "why the emphasis on BIG?", the answer is not really the evaluation score, but usefulness: helping humans, raking in cash, and better generated data when scaled up.
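To make the "envelope" arithmetic above concrete, here is a minimal sketch (the function names and numbers are illustrative, not from any contest spec): the envelope is the model's own size plus the ideal code length of the data under the model, i.e. -log2 p bits per predicted symbol, and perplexity is just 2 raised to the average bits per symbol, so a better predictor means a smaller envelope and a lower perplexity.

```python
import math

def envelope_bits(model_bits, symbol_probs):
    """Envelope = model size + ideal code length of the data under the
    model (Shannon bound: -log2 p bits per symbol). Illustrative only."""
    data_bits = sum(-math.log2(p) for p in symbol_probs)
    return model_bits + data_bits

def perplexity(symbol_probs):
    """2 ** (average bits per symbol); lower means a better predictor."""
    avg_bits = sum(-math.log2(p) for p in symbol_probs) / len(symbol_probs)
    return 2.0 ** avg_bits

# Toy run: a model of 1000 bits predicting four symbols.
probs = [0.5, 0.25, 0.125, 0.125]
print(envelope_bits(1000, probs))  # 1009.0 (data part is 1+2+3+3 = 9 bits)
print(perplexity(probs))
```

Note the trade-off this exposes: a bigger model (more `model_bits`) only wins if it cuts the data bits by more than its own size, which is the point about the smallest-envelope criterion.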
On Thursday, December 23, 2021, at 10:50 PM, James Bowery wrote:
> And, finally, there is your misconception that Transformers are, in some
> sense, recurrent -- a misconception advanced by Google's own paper titled
> "Attention Is All You Need", leaving you here to do their dirty work for them
> in your Google induced brain fog.

Transformers can do everything LSTMs could do, and more. Prove me wrong.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T358f938c1cfb5c51-M2405670d2014ab18edc90cac
Delivery options: https://agi.topicbox.com/groups/agi/subscription
