I just want to finish the idea I was expressing last night. Using
the example of Shannon entropy, you will find that Information Theory
is a tool that some people can use, but it is not a fundamental
proposition that will lead to completely effective algorithms. For
example, a hidden code might not be superficially apparent through
Information Theory. Once you discover the hidden relationships of the
code, you might then go back and say, well, Information Theory covers
that too. Even in unencrypted messages, the relationships of language
may make conceptual relations available to people with similar
backgrounds of interest that will not be readily available to an
automated application of Information Theory.
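A toy sketch of that limitation (my own illustration, not a proof): a simple substitution cipher hides the meaning of a message but preserves its symbol statistics exactly, so per-symbol Shannon entropy cannot tell the plaintext from the ciphertext.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Per-symbol Shannon entropy of a string, in bits."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = "the quick brown fox jumps over the lazy dog"

# A Caesar shift is a bijection on letters: it hides the hidden
# relationships of the code while leaving the symbol frequencies,
# and therefore the entropy, completely unchanged.
cipher = "".join(
    chr((ord(ch) - ord("a") + 3) % 26 + ord("a")) if ch.islower() else ch
    for ch in plain
)

print(shannon_entropy(plain))
print(shannon_entropy(cipher))  # identical: entropy cannot see the code
```

The two printed values are identical, which is the point: the structure a cryptanalyst (or a person with the right background of interest) exploits lives in relationships between symbols, not in the per-symbol statistics that this measure captures.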
So even though these concepts like entropy and randomness have
utility, they do not in themselves fundamentally illuminate AGI
problems and the kinds of related problems that we are interested in.
We need to be able to look at these problems in different ways. I
think this question of using data in compressed form, appending new
information to it and using the compressed data in algorithms without
completely decompressing it, may be a key to a better understanding of
what we have to do to get the kinds of results that we are interested
in. Then I came up with a variant of this: perhaps a program might
operate on different stages of data compression in order to achieve
these kinds of goals. Perhaps the algorithms that I am musing about
have to be constantly compressing the data (and partially
decompressing it) in order to operate on it effectively without
totally decompressing it. This is a new idea, but if it is useful we
should be able to find examples of it in existing algorithms once we
start looking for it.
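A toy illustration of the idea, using run-length encoding as a stand-in for a real compressor (the function names are mine, just for the sketch): new data can be appended directly to the compressed form, and simple queries can be answered from it, without ever fully decompressing.

```python
def rle_encode(s):
    """Run-length encode a string into [symbol, count] runs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_append(runs, s):
    """Append new data to the compressed form in place,
    never materializing the decompressed string."""
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_count(runs, ch):
    """Answer a query (how many of this symbol?) directly
    from the compressed representation."""
    return sum(count for sym, count in runs if sym == ch)

enc = rle_encode("aaabcc")      # [['a', 3], ['b', 1], ['c', 2]]
rle_append(enc, "ccd")          # extends the last 'c' run, adds a 'd' run
print(rle_count(enc, "c"))      # 4
```

This is the weakest possible example, of course, but the pattern it shows up in real algorithms too: succinct data structures, searching in LZ-compressed text, and grammar-based compression all operate on data that is never fully expanded, which is roughly what I am groping toward.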
John: I need some time to read your latest message, so it will be a
while before I respond.
Jim Bromer

On Thu, Oct 11, 2018 at 7:33 AM John Rose <[email protected]> wrote:
> > -----Original Message-----
> > From: Jim Bromer via AGI <[email protected]>
> >
> > And if the concept of randomness is called into question then
> > how do you think entropic extremas are going to hold up?
> >
> 
> "Entropic extrema" as in computational resource expense barrier, including 
> chaotic boundaries, too expensive to mine into for the compression agent 
> causing symbol explosion and unpredictable time complexity... so effectively 
> one-time symbolizing the whole region and working around it until a larger 
> pattern is discovered perhaps on successive passes and the symbol can be 
> fitted into some dynamical component from an emerging model. "Randomness" is 
> merely computational distance from agent perspective.
> 
> Your example? Infinity. The ultimate symbol explosion. So what do you do? You 
> symbolize it. And you're right, it is not a number unless intentionally 
> demarcated so in a virtualized boundary. Symbols can be pointers to regions of 
> relatively incomputable data or an expression of operation to generate data. 
> For infinity there are an infinite number of expressions so the expression 
> should be in relation to the agent engine. The more efficient and intelligent 
> the agent the better it is at creating computable expressions versus data 
> pointers of symbol alphabets and languages. And expressions can be 
> re-expressed into simpler form with optimization of the language on 
> successive passes.
> 
> What do you envision when seeing the symbol "infinity"? The time complexity 
> of various algorithms in your mind... it's open ended... unpredictable... 
> your mind symbolizes symbols. Infinity symbolizes symbol creation, it's an 
> operation not a number or data point until you reach a boundary of 
> thermodynamic expense being a compressor agent in a virtualized escapism 
> pulled back to finite entropic reality. Thermo-entropically bound in a 
> virtual-entropic projection searching for escape velocity... and not finding 
> it... your concept of infinity being transmitted to other compression agents 
> who are similarly entrapped and virtualizing out attempting more efficient 
> combustion and intelligence increase... thus the qualia of infinity is 
> protocolized for systemic intelligence maximalization.
> 
> John
> 

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-M9404c5a8ff5563b474369485
Delivery options: https://agi.topicbox.com/groups/agi/subscription