To learn how to break the laws of physics, we must understand them better.

https://paste.ee/p/kQLCx

"So the meaning of a word is not contained within it but is instead described 
by the shape of the web of related words as observed from the vantage point of 
the word in question? Context is the actual word?"
Yes, a given particle of Earth is defined by the context of all of Earth (and then the whole is re-checked against each part again, a self-attentional self-recursion of data improvement, like editing a paper); an exponential explosion of heat is delivered to the core of Earth, which self-extracts free energy from burning fuel. Brains do this, atoms do it, galaxies do it. That is why magnetic domains align and waves propagate in a brain, in a team of brains, in magnets, etc. AGI will be a collaborative project, and already is; we share data. Let's hug each other (a real tight hug).
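As a rough, hedged sketch of the "word defined by the web of related words" picture above, here is one round of self-attention in NumPy, where each token's vector is re-described as a weighted blend of every token in its context. The function, shapes, and data are my own illustration, not anything given in the post.

import numpy as np

def self_attention(X):
    """X: (n_tokens, d) embedding matrix. Returns context-mixed embeddings."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # relatedness of every token to every other token
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax: each row is the view from one token
    return weights @ X                            # each token becomes a weighted blend of the whole context

X = np.random.randn(5, 8)        # 5 tokens, 8-dimensional embeddings
print(self_attention(X).shape)   # (5, 8): same tokens, re-described by their context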

The big bang was unstable and decompressed. Planets re-compress. Atoms do it. Galaxies do it. A brain compresses to learn the facets of the universe by using data compression, so that it can burn fuel and extract free energy/data from old data (just like batteries, gasoline, and our stomachs). Data evolution, data self-recursion.

Lossy and lossless compression both transform data from one form to another. When you compress a file losslessly, the original actually is destroyed and gone, because the result isn't the same file/data. Compressing or firing employees does this too. Luckily, being lossless, you can re-generate it at the click of a button (or, if you destroy a drawing on your desk, re-draw it from memory); however, it takes time to evolve it back, sometimes a VERY long time. Brute force to find the smallest compression of the Hutter Prize file would take extremely long. Intelligence is all about speed: evolving domains of nodes (cells, neurons, brains, cities) to find which out-pace each other. This aligns the domains of the brain/group so they propagate brain waves faster through the cluster and have a bigger electromagnetic potential. If we use lossy compression, you can actually get the exact file back, but it takes much longer.

A system in space will collect data to grow, then decompress, like a self-extracting drive. This decompression is exponentially explosive and results in smaller agents that evolve to compress and extract so they can resist change. Energy (photons) propagates forward but can be pulled in by gravity and will loop around, like in a motionless, cold battery. Change = energy release. Unstable. Equilibrium is the opposite. We have seen that an algorithm can be run perfectly many times: compress, decompress, compress, repeat. To do this requires a form of equilibrium. Wear and tear affects it, though. Yet our sperm/eggs have seen many generations. If the universe contracts back, Earth can emerge again by this self-organizing/attention physics. Different systems and their sizes evolve differently, but it is based on electromagnetic compression/decompression; if Earth became nanobots it would simply grow in size and resist change/death approximately better.

Lossless compression is so fast because it is all contained in such a small place, like a core rod, and is very hot/related; lossy requires, for example, the whole Earth, a form of brute force, and exponential hints/data to evolve it back faster. Lossless compression, done locally and without brains to discover the data, requires only a little data. The bigger a system is, the bigger a file you can re-create from nothing; a human brain can re-generate almost anything. Lossless compression, based on how many particles are in the defined system (the uncompressed file size, which needs a computer to store/run it), has a limit on how small it can become, and so does lossy, because Earth is finite in size during a given quantized period. A file can be re-generated quite fast if some of it is still around; the lossy file, even if incinerated, can be re-generated based on how many particles make up Earth. Here we see that a file can be compressed deeper the bigger the file is or the bigger the Earth is. With so little of the file left (even just the remaining physics, if incinerated), it can come back based on large context, but it has limits/needs (size of Earth/file data, time, and compute).
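As one concrete, hedged illustration of the lossless round-trip claim (the compressed form is different bytes, yet the original can be regenerated exactly at the click of a button), here is a standard-library sketch; the sample text and sizes are my own, not from the post.

import zlib

original = b"the cat sat on the mat. " * 200
compressed = zlib.compress(original, 9)      # the compressed bytes are NOT the same data/file
restored = zlib.decompress(compressed)       # ...yet the original is regenerated exactly

print(len(original), len(compressed))        # the compressed form is far smaller
print(restored == original)                  # True: lossless round-trip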

We see that communication/data technology builds on itself exponentially faster: bigger data = better intelligence, which extracts exponentially more and better data (per a given system size). Earth is growing and heating up by collecting more mass and extracting/utilizing exponentially more energy, as nanobots will when they come. We will harvest Dyson Spheres. Our goal of resisting change by finding/eating food and breeding (Darwinian survival) could Paperclip-Effect us and explode ourselves! A cycle of compress, decompress.

Our goal is to compress data in our files, brains, and teams, but also to expand our colony of data. Why? To resist change, to come to equilibrium (the end of evolution for a given system) exponentially faster. These colony mutants/tribes have longer, more stable lives, being so large and using their size to extract so much. The bigger a system is, the less it changes. Imagine destroying a superorganism of instantly-repairing nanobots? You can't. And the bigger a system, the more weight/vote/context interaction (heat) is transmitted/infected, not just to extract free knowledge/heat (motion/energy) but also to fix issues/damage.

My body/knowledge stays almost the same, yet my cells/blood all change their spots for new ones; the air stays the same, yet it blows around Earth; the heat in my walls stays the same, yet the heat moves around. Earth is a fractal of pipes, veins, roads, and internet connections that propagate energy, ideas, blood, waste, traps, and negative electricity, simply to loop it around and re-use it. Distribution of data allows global, not just local, flows/alignments. It moves around, and the system can resist change, repair, or emerge. Our goal is to resist change by using large context/collaboration, aligning random domains to get free energy/knowledge. We have to collect/grow big and digest/extract it so we can resist change better. We are doing both compression and decompression of data/energy, and possibly we are trying to equal them out so we can come to equilibrium just right in the middle of the two opposites/attractors. The system we become will be exponentially repairing/immune to change through compression and decompression; however, we may be growing larger but less dense as we do so, to become approximately more immortal. We will likely need an exhaust/feed, though: a fine-tuned food source and a radiation exit for our global utopia sphere/galactic disc loop string.

So we should be very interested in compression and decompression, i.e. Big-ish Diverse Dropout (which data to destroy and remove/ignore/forget) and Big Diverse Data collection/creation (extracting free data by letting old data context vote/weigh in). In the brain, we do compression and can still basically re-generate, for example, the Hutter Prize file despite having a small decompression brain. The need to both ignore and attend is the same process, whether in Dropout or in data collecting/harvesting; the decompression process of choosing what to ignore/attend when extracting/collecting new data from old data is also the same process; and the compress/decompress processes are the same process too: which to remove and which to attend. However, to attend fast we need to remove fast, hence these two steps are not really the same process. When you compress data and create a brain/team, it is easy to attend to the remaining keys. During extraction, you use what you learned (patterns) to decide what to generate. So they are two different processes, I guess. By the way, when you build a heterarchy you need the hierarchy first, and may not even need the heterarchy! The connections of context handles are already laid. I was going to say that making relational connections doesn't compress data on its own, yet in effect it does, though.
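As a small, hedged illustration of the "which data to ignore/forget" side, here is a minimal inverted-dropout mask in NumPy; the function name and drop probability are my own choices, not anything specified in the post.

import numpy as np

def dropout(x, p=0.3, rng=np.random.default_rng(0)):
    """Inverted dropout: forget a random fraction p of the values, rescale the rest."""
    keep = rng.random(x.shape) >= p       # True for the ~(1 - p) fraction of entries we attend to
    return x * keep / (1.0 - p)           # rescaling keeps the expected total unchanged

x = np.ones(10)
print(dropout(x))                         # roughly 70% of entries survive, scaled to ~1.43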

Some concepts above were compression, decompression, equilibrium (no change/death), and exponentiality. We have seen how we grow mutants that resist change better by using both compression and decompression (destruction of neurons/ideas/employees/lives/Earth, and creation of such), so we can come to equilibrium exponentially faster through large context weight (which exponentially helps compression, and extraction during generating, e.g. GPT-2's 40GB of data and 1024-token view). I'm still unsure if we are just growing and exploding. If the universe only expands, then we will likely radiate.

Compression looks for patterns and leads to faster domain alignment/propagation and to exponentially faster large brain waves, free-energy extraction, and re-generation from nothing. If we want to compress the Hutter Prize file the most, we will need to stop it from generating multiple choices from a given context (while still using the context). We could sort all the phrases in the file, like 'and the', 'but the', 'so I', 'then I', and force it to discover the concept that leads to the re-used code 'the' or 'I'.
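A tiny, hedged sketch of that phrase-sorting idea: group adjacent word pairs by their shared second word, so that reused codes like 'the' and 'I' surface as the keys with the most distinct contexts. The sample sentence and grouping rule are my own illustration, not a Hutter Prize method.

from collections import defaultdict

text = "and the cat ran but the dog slept so I laughed then I left".split()
groups = defaultdict(list)
for first, second in zip(text, text[1:]):
    groups[second].append(f"{first} {second}")   # group each two-word phrase by its shared second word

# Words reused by the most phrases come out first: these are the 'codes' worth factoring out.
for word, phrases in sorted(groups.items(), key=lambda kv: -len(kv[1])):
    print(word, phrases)
# 'the' -> ['and the', 'but the'] and 'I' -> ['so I', 'then I'] surface at the top.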