Just to be clear, I meant we won't die thanks to repair etc.; when I said transfer I did 
not mean upload. I certainly don't talk about uploading in that context: 
I/humans don't want to be uploaded to avoid death. Uploads do, however, still 
help back up data in case we need to fetch some missing memories or organs.


On Wednesday, July 05, 2023, at 1:08 PM, Rob Freeman wrote:
> I argued it was the wrong goal. That meaning would turn out to be an
> expansion of data.

And now, what do we find? We find that LLMs just seem to get bigger
all the time. More training, far from compressing more efficiently,
just keeps on generating new parameters. In fact, training for longer
generates a number of parameters roughly equivalent to just adding
more data.

Hold on. The lossless compression evaluation tests not just compression but 
expansion! You train the AI, it predicts, and you store the errors (which are 
small). Then you must check that it actually does decompress: the AI predicts 
again and picks the correct prediction, using the small corrections as steps 
through the file being remade. So this tests not just compression but 
expansion, and not just how quickly you can compress, but also how quickly you 
can expand. That tells you how fast it would expand if let go forever, and how 
much it can expand in burst mode (an AI can expand, but can it expand with very 
little error at predicting the true next surrounding matter/energy to exist?). 
This test measures that.
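The compress-then-verify loop above can be sketched in a few lines. This is a toy illustrative scheme I'm supplying (not cmix or a real arithmetic coder): a deterministic predictor guesses the next byte, only the mispredictions ("small corrections") are stored, and the decompressor replays the identical predictor, consuming a correction whenever the model would have missed:

```python
# Toy predictor-based lossless codec (illustrative sketch, NOT cmix):
# the model predicts the next byte; only mispredictions are stored.
# Decompression replays the same model and patches in the corrections,
# so the round trip is exactly lossless.

def predict(history: bytes) -> int:
    # Trivial model: predict a repeat of the previous byte (0 at the start).
    return history[-1] if history else 0

def compress(data: bytes):
    corrections = []  # (position, actual_byte) for every misprediction
    seen = bytearray()
    for i, b in enumerate(data):
        if predict(bytes(seen)) != b:
            corrections.append((i, b))
        seen.append(b)
    return len(data), corrections

def decompress(length: int, corrections):
    fixes = dict(corrections)
    out = bytearray()
    for i in range(length):
        out.append(fixes.get(i, predict(bytes(out))))
    return bytes(out)

data = b"aaaabbbbccccdddd"
n, corr = compress(data)
assert decompress(n, corr) == data  # verifies expansion, not just compression
assert len(corr) < len(data)        # the better the model, the fewer corrections
```

A better predictor (a trained model instead of "repeat the last byte") yields fewer corrections, which is exactly why the compressed size measures prediction quality.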

And it is true that more data and longer training help, but perplexity and the 
test above only require a (small) fixed dataset size of 1GB, and even just 1MB 
works! And a fixed training time (I forget the figure, but obviously a very 
short time works fine). This is meant to measure how much intelligence you 
added to your AI, not how big your dataset was; we already know how to do that 
part of AI, so why measure that!? These tests are well used to check how well 
GPT-X is doing; I believe even OpenAI still uses them, how can't you. Everyone 
has been using this test for AIs.
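The link between perplexity and compressed size is direct: a model with per-symbol perplexity P costs log2(P) bits per symbol under an ideal entropy coder, so the two metrics measure the same thing. A quick check of that relation (my own illustration, with made-up example numbers):

```python
import math

# An ideal entropy coder driven by a model with per-byte perplexity P
# needs log2(P) bits per byte, so the compressed size follows directly.
def ideal_compressed_bytes(n_symbols: int, perplexity: float) -> float:
    return n_symbols * math.log2(perplexity) / 8

# Example: a model with perplexity 4 per byte over a 1 MB file
# would compress it to about 250 KB (2 bits per byte).
size = ideal_compressed_bytes(1_000_000, 4.0)
```

This is why a fixed 1MB or 1GB dataset suffices: lower perplexity on the same fixed data shows up directly as a smaller compressed file.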

See my AI's test results below compared to one of the best scores in the field 
(exact scores may vary with more tuning of settings, but this is what his gave 
and what I got where I left off):



10,000 bytes in
3,295 BP, 3,351 bytes out. On PC1: 1.5 seconds, 202MB RAM
Byron Knoll's cmix: 2,146

100,000 bytes in
28,074 bytes out. On PC1: 5.7 seconds, 235MB RAM
Byron Knoll's cmix: 20,054

1,000,000 bytes in
240,871 bytes out. On PC1: 48.4 seconds, 442MB RAM
Byron Knoll's cmix: 176,388

10,000,000 bytes in
2,207,890 bytes out. On PC2: 614 seconds, 4.4GB RAM. PC1: 523 seconds, 3GB RAM
Byron Knoll's cmix: 1,651,421
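The sizes above are easier to compare as bits per input byte (lower is better). A quick conversion I ran over the table's numbers (using the bytes-out figures as listed):

```python
# Convert the compressed sizes from the table above into bits per input
# byte (bpb). Figures are copied directly from the results listed;
# lower bpb means better prediction.
results = {
    10_000:     (3_351, 2_146),       # (my AI, Byron Knoll's cmix)
    100_000:    (28_074, 20_054),
    1_000_000:  (240_871, 176_388),
    10_000_000: (2_207_890, 1_651_421),
}

for size_in, (mine, cmix) in results.items():
    bpb_mine = mine * 8 / size_in
    bpb_cmix = cmix * 8 / size_in
    print(f"{size_in:>10} bytes: {bpb_mine:.3f} bpb vs cmix {bpb_cmix:.3f} bpb")
```

For example, at 1,000,000 bytes in, 240,871 bytes out is about 1.93 bits per byte, versus about 1.41 for cmix.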
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T42db51de471cbcb9-M1df47d77bed658ee2ea336e5
Delivery options: https://agi.topicbox.com/groups/agi/subscription