"Sure there are more appropriate lists to talk
about compression on,"

"disassociate ourselves from their esteemed company BY KICKING THEM OFF THE DAMN
LIST!!!!!!!! ERROR: SARCASM"

How about, instead of writing cryptic stories, you just say what you think?
Matt, JB, and I are the compression guys on this list, and we are of high
intellect indeed... So you don't understand why text compression is so spot
on? I can explain, just ask. I'm the explainer around here anyway... Since
I'm probabilistically sure you are confused, here:

-----------------------------------------------------------------------------

Compression is better the higher the probability the predictor assigns to
the character that actually comes next. Compressing well means the model
understands the data, e.g. grouping cat with dog to relate words. The best
compressors are neural models that mix predictions. Such a model can make
analogies to losslessly reinvent all the data back from seemingly nothing;
all the AI on the internet do this "repairing rejuvenation", and so do
cells: fetuses predict/make the future based on past context.
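
To see why better prediction means a smaller file: an arithmetic coder
spends about -log2(p) bits on a character the model gave probability p, so
the total ideal size is just the sum of those costs. A quick Python sketch
(toy predictors made up for illustration, not my real model):

    import math
    from collections import Counter

    def ideal_code_length(text, predictor):
        # total bits an arithmetic coder would approach: each character
        # that the model gave probability p costs -log2(p) bits
        bits = 0.0
        for i, ch in enumerate(text):
            bits += -math.log2(predictor(text[:i], ch))
        return bits

    def uniform(past, ch):
        # knows nothing: every byte value equally likely
        return 1.0 / 256

    def order0(past, ch):
        # adaptive order-0 model: counts characters seen so far,
        # +1 smoothing over 256 symbols so nothing gets probability 0
        return (Counter(past)[ch] + 1) / (len(past) + 256)

    sample = "the cat sat on the mat"
    print(ideal_code_length(sample, uniform))  # 176.0 bits (8 bits/char)
    print(ideal_code_length(sample, order0))   # noticeably fewer bits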

My 120-line algorithm, which losslessly compressed 200,000 bytes down to
59,230 bytes, is a text generator: a letter predictor using probabilities,
learned online as it talks to itself, storing what it says to itself. It
mixes 13 context models and then does Arithmetic Coding. It uses the past
text to predict/generate the future until it reaches the end of the file.
It builds a tree from scratch, and during mixing it uses a global model
weight, weight-threshold squashing, and an adaptive threshold based on how
many channels are open to it.

If I switch off its losslessness, it generates mostly silly ramblings that
were not in the dataset but somewhat make sense. But I will get it as good
as the great GPT-2 yet. This works because my algorithm handles future
uncertainty, i.e. missing data it hasn't seen yet and therefore no single
very likely answer, by mixing models to get more virtual data. My tree uses
frequencies for everything from phrases down to single-letter models. And I
haven't even started: there's energy, translation, etc. etc. still to do
that others have paved the way on already, and I'm building on that soon.
I'm on the way to building REAL AGI here; I'm telling you this is the way
to get started. The models are in the tree: it stores text and frequency
and can store semantics in the future. The tree is basically the neural
network, and that's where pruning and keeping only good-enough nodes will
result in a robust distributed net that is small, fast, and can recognize
long unseen strings.

I coded it all from scratch, no reusing others' code.
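
For anyone who wants the flavor of the mix-then-code step, here is a tiny
Python sketch. It is not my 120-line program: a few fixed-order character
models stand in for the 13 models and the tree, a crude confidence weight
stands in for the weight squashing and adaptive thresholds, and it reports
the ideal arithmetic-coding cost instead of writing an actual coded file.

    import math
    from collections import defaultdict

    ALPHABET = [chr(c) for c in range(32, 127)] + ['\n']

    class ContextMixer:
        # several fixed-order character models (order 0..max_order),
        # learned online and blended into one distribution that an
        # arithmetic coder could use
        def __init__(self, max_order=3):
            self.max_order = max_order
            # counts[k][context of length k][next char] -> frequency
            self.counts = [defaultdict(lambda: defaultdict(int))
                           for _ in range(max_order + 1)]

        def predict(self, context):
            weights, dists = [], []
            for k in range(self.max_order + 1):
                ctx = context[-k:] if k else ''
                table = self.counts[k][ctx]
                seen = sum(table.values())
                # crude confidence weight: longer contexts count more,
                # but only once they have actually seen something
                weights.append((k + 1) * seen / (seen + 1))
                dists.append({c: (table.get(c, 0) + 1)
                                 / (seen + len(ALPHABET))
                              for c in ALPHABET})
            total_w = sum(weights)
            if total_w == 0:
                return {c: 1.0 / len(ALPHABET) for c in ALPHABET}
            mixed = {c: 0.0 for c in ALPHABET}
            for w, d in zip(weights, dists):
                for c in ALPHABET:
                    mixed[c] += (w / total_w) * d[c]
            return mixed

        def update(self, context, ch):
            for k in range(self.max_order + 1):
                ctx = context[-k:] if k else ''
                self.counts[k][ctx][ch] += 1

    def ideal_cost_bits(text, model):
        # size an arithmetic coder driven by the mixed predictions
        # would approach, learning online as it goes
        bits = 0.0
        for i, ch in enumerate(text):
            bits += -math.log2(model.predict(text[:i]).get(ch, 1e-9))
            model.update(text[:i], ch)
        return bits

    sample = "the fortress was rebuilt after the fortress was built " * 4
    mixer = ContextMixer()
    # repetitive text: the ideal size comes out well under len(sample)
    print(ideal_cost_bits(sample, mixer) / 8, "ideal bytes vs", len(sample))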

As for the copy-while-mutate thing in evolution: AGI must answer questions
it has never seen before, i.e. it uses the surrounding context atoms to
predict/make the future ground/sentence (babies in the womb, text
generators...). With lossless compression / Arithmetic Coding disabled, it
can generate unseen futures that are likely given the past context. Sounds
like physics. And that mutates the future correctly while cloning the
topic: the sentence stays on topic and generates new content.
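
In code terms, switching off losslessness just means sampling from the same
mixed distribution instead of feeding it to the arithmetic coder. Building
on the ContextMixer sketch above (again an illustration, not my actual
generator):

    import random

    def generate(model, seed, n_chars=200):
        # generation mode: same predictor, lossless coding switched off;
        # sample the next character from the mixed distribution and let
        # the model keep learning from what it says to itself
        out = seed
        for _ in range(n_chars):
            dist = model.predict(out)
            chars, probs = zip(*dist.items())
            ch = random.choices(chars, weights=probs, k=1)[0]
            model.update(out, ch)
            out += ch
        return out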

Imagine my text predictor/generator wants to predict what comes after "and
in the 1600s the captain said the fortress was _" with better probability,
to compress better. Well, it has to model the writer's intellect! It can
look 100,000 letters back and see that after the 1600s the fortress was
rebuilt, but before then it was still being built, so it can tell which to
use based on the date. It's more of a match thing (a rough sketch of that
match lookup is below). To predict new true answers not in the dataset, you
are given a question and simply predict the most likely answer based on the
large, diverse data seen previously, which gives you all the truth your
eyes could ever see to help you. So from inside the dataset you would
predict the cure for cancer wrongly, based on its current knowledge, but in
the end you can do better by using more data globally across the dataset,
from different domains.

The AI has to always want certain futures to be seen; this is what steers
the prediction in favor of what we want, and that may help compression,
since all text leads back to the need for survival, aka food and sex, aka
shelter, games, meetings, hockey, shops, TVs, books, walks, data
collection, etc. The past makes the future, using the surrounding context
in the embryo/fetus or in the sentence said last, to make on-topic,
specialized forecasts/predictions/generations!! All of evolution is about
info duplication and mutation while cloning the topic, like GPT-2:
DNA/cells, ideas, humans, cities, AIs. Patterns are found/created, and the
brain is like a friend network, and also like a magnet propagating
vibrations in fully feedforward, aligned networks built from combinational
domains to distribute all the energy for the most power; that's why the
most powerful systems are larger and fully aligned within.
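
Here is roughly what that long-range match looks like in code (a toy using
a plain suffix search instead of my tree): find the longest suffix of the
current text that appeared earlier in the history and tally what followed
it each time. On the fortress example it surfaces exactly the two
candidates, 'b' for "being built" and 'r' for "rebuilt"; choosing between
them by the date is what the other context channels would contribute.

    from collections import Counter

    def longest_match_prediction(history, min_len=3):
        # find the longest suffix of the current text that also appeared
        # earlier in the history, and tally what followed it each time
        best = Counter()
        for k in range(min_len, len(history)):
            suffix = history[-k:]
            followers = Counter()
            start = 0
            while True:
                # search everywhere except the trailing occurrence itself
                idx = history.find(suffix, start, len(history) - 1)
                if idx == -1:
                    break
                followers[history[idx + k]] += 1
                start = idx + 1
            if not followers:
                break
            best = followers
        total = sum(best.values())
        return {c: n / total for c, n in best.items()} if total else {}

    hist = ("before 1600 the fortress was being built. "
            "after 1600 the fortress was rebuilt. "
            "in the 1600s the captain said the fortress was ")
    print(longest_match_prediction(hist))   # -> {'b': 0.5, 'r': 0.5}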