John said, "'Entropic extrema' as in computational resource expense
barrier, including chaotic boundaries, too expensive to mine into for
the compression agent causing symbol explosion and unpredictable time
complexity... so effectively one-time symbolizing the whole region and
working around it until a larger pattern is discovered perhaps on
successive passes and the symbol can be fitted into some dynamical
component from an emerging model. 'Randomness' is merely computational
distance from agent perspective."

That is really interesting but why the fixation on the particular
fictionalization?  Randomness is computational distance from the agent
perspective?  No it isn't. I don't know how to get this across and I
am guessing I will have to give up trying but you are not merely using
(specialized) linguistic reference markers. What you are saying makes
enough sense to me to want to think about it but the noise makes it
more difficult to understand. So yeah, I can see how randomness within
a relative constraint system might be related to computational
distance - especially from [your perspective] of the agent's
perspective. But even if I accept that as a reasonable view, you later
made this remarkable statement: "it's an operation not a number or
data point until you reach a boundary of thermodynamic expense being a
compressor agent in a virtualized escapism pulled back to finite
entropic reality."

What!?? There is a side of me that is wondering if you are just
goofing on me. If we both worked at Intel I would be worried that you
were saying something that might be relevant to whatever it was that I
was working on but not in this case. Thermodynamic expense and
entropic reality have nothing to do with anything that I am talking
about. If you recognized that then...
I am wasting my time right? OK, too bad. I will just try to find a way
to get you to explain what you are talking about when it gets a little
too out there and try to find some common ground and some grounded
reasoning.  I am honestly thinking about the parts of your comments
that do not go beyond the limits of the thermodynamic expanse and I
will try to reply to your comments more carefully in the future.
Jim Bromer

On Thu, Oct 11, 2018 at 9:39 AM Jim Bromer <[email protected]> wrote:
>
> I just wanted to finish the idea I was expressing last night. Using
> the example of Shannon Entropy you will find that Information Theory
> is a tool that some people can use but it is not a fundamental
> proposition that will lead to completely effective algorithms. For
> example, a hidden code might not be superficially apparent using
> Information Theory. Once you find the hidden relationships of the code
> you might then go back and say well Information Theory covers that too
> you know! Even in unencrypted messages the relationships of language
> may make conceptual relations available to people with similar
> backgrounds of interest that will not be readily available to an
> automated application of Information Theory.
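One way to make the "hidden code" point concrete (a sketch only; the cipher and text are illustrative, not from this thread): a monoalphabetic substitution cipher permutes the symbol frequencies but leaves them otherwise intact, so first-order Shannon entropy cannot distinguish the ciphertext from the plaintext. The hidden relationship is invisible at that level of description:

```python
import math
from collections import Counter

def shannon_entropy(text):
    """First-order Shannon entropy in bits per symbol."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A toy substitution cipher (ROT13): every letter maps to a distinct
# letter, so the frequency distribution is merely permuted.
plain = "the quick brown fox jumps over the lazy dog"
key = {c: chr((ord(c) - ord("a") + 13) % 26 + ord("a"))
       for c in "abcdefghijklmnopqrstuvwxyz"}
cipher = "".join(key.get(c, c) for c in plain)

# First-order entropy cannot tell them apart; the hidden code is not
# superficially apparent to this measure.
print(math.isclose(shannon_entropy(plain), shannon_entropy(cipher)))  # True
```

Only after you find the substitution relationship can you retroactively say Information Theory "covers" it, which is the point being made above.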
> So even though these concepts like entropy and randomness have utility,
> they do not in themselves fundamentally illuminate AGI problems and
> the kinds of related problems that we are interested in. We need to be
> able to use different ways to look at the problems that we are
> interested in. I think this question of using data in compressed form
> to append new information and to use the compressed data in algorithms
> without completely decompressing it may be a key to better understanding
> what we have to do to get the kinds of results that we are interested
> in. Then I came up with a variant of this: perhaps a program might
> operate on different stages of data compression in order to achieve
> these kinds of goals. Perhaps the algorithms that I am musing about
> have to be constantly compressing the data (and partially
> decompressing it) in order to operate on the data effectively without
> totally decompressing it. This is a new idea but if it is useful we
> should be able to find examples of this in existing algorithms once we
> start looking for it.
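There are in fact simple existing examples of the kind of thing mused about above. A minimal sketch (my choice of run-length encoding as the stand-in compression scheme, not anything proposed in the thread): both appending new data and answering a frequency query run directly on the compressed form, with no full decompression step:

```python
def rle_encode(s):
    """Run-length encode a string into mutable [symbol, count] pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_append(runs, s):
    """Append new raw data to an existing encoding without decompressing it."""
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_count(runs, ch):
    """Count occurrences of a symbol directly on the compressed form."""
    return sum(n for c, n in runs if c == ch)

runs = rle_encode("aaabbbbcc")
rle_append(runs, "ccddd")    # the encoding grows in place
print(rle_count(runs, "c"))  # 4
```

The query cost here scales with the number of runs, not the length of the decompressed data, which is exactly the advantage being hoped for.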
> John: I have to take some time to read your latest message so it will
> take me some time to respond to it.
> Jim Bromer
>
> On Thu, Oct 11, 2018 at 7:33 AM John Rose <[email protected]> wrote:
> > > -----Original Message-----
> > > From: Jim Bromer via AGI <[email protected]>
> > >
> > > And if the concept of randomness is called into question then
> > > how do you think entropic extrema are going to hold up?
> > >
> > 
> > "Entropic extrema" as in computational resource expense barrier, including 
> > chaotic boundaries, too expensive to mine into for the compression agent 
> > causing symbol explosion and unpredictable time complexity... so effectively 
> > one-time symbolizing the whole region and working around it until a larger 
> > pattern is discovered perhaps on successive passes and the symbol can be 
> > fitted into some dynamical component from an emerging model. "Randomness" 
> > is merely computational distance from agent perspective.
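One possible reading of "one-time symbolizing the whole region" above, sketched in code (the chunking scheme and the use of zlib as the stand-in compressor are my assumptions, not John's): wherever the compressor cannot shorten a chunk, replace that region with an opaque symbol and work around it, leaving it for later passes:

```python
import os
import zlib

def symbolize(data, chunk=64):
    """Split data into fixed chunks; keep the compressible ones packed and
    replace any chunk the compressor cannot shorten with an opaque symbol."""
    out, vault, next_id = [], {}, 0
    for i in range(0, len(data), chunk):
        block = data[i:i + chunk]
        packed = zlib.compress(block)
        if len(packed) < len(block):
            out.append(("packed", packed))
        else:
            # Too expensive to mine into: symbolize the whole region once
            # and work around it; successive passes may revisit the vault.
            out.append(("symbol", next_id))
            vault[next_id] = block
            next_id += 1
    return out, vault

# Highly regular data with one incompressible (random) region inside,
# aligned to the chunk boundary for the sake of the illustration.
data = b"ab" * 256 + os.urandom(64) + b"cd" * 256
out, vault = symbolize(data)
print(len(vault))  # 1: only the random region was symbolized
```

Under this reading, "randomness" is just whatever this particular agent's compressor fails to shorten within its budget.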
> > 
> > Your example? Infinity. The ultimate symbol explosion. So what do you do? 
> > You symbolize it. And you're right it is not a number unless intentionally 
> > demarked so in a virtualized boundary. Symbols can be pointers to regions 
> > of relatively incomputable data or an expression of operation to generate 
> > data. For infinity there are an infinite number of expressions so the 
> > expression should be in relation to the agent engine. The more efficient 
> > and intelligent the agent the better it is at creating computable 
> > expressions versus data pointers of symbol alphabets and languages. And 
> > expressions can be re-expressed into simpler form with optimization of the 
> > language on successive passes.
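The pointer-versus-expression distinction above can be grounded in a small sketch (mine, not from the thread): the same sequence can be carried either as literal stored data or as a short expression that regenerates it on demand, and the expression form is what a more capable compressor would find:

```python
import sys

# The same sequence held two ways: as literal stored data, and as a
# short expression (an operation) that regenerates it on demand.
data_pointer = list(range(1_000_000))   # megabytes of explicit data
expression = lambda: range(1_000_000)   # a few bytes of generating code

print(sys.getsizeof(data_pointer) > 1_000_000)  # True
print(list(expression())[:3])                   # [0, 1, 2]
```

The expression is strictly more economical here, and it can be re-expressed (simplified) on later passes without touching any stored data.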
> > 
> > What do you envision when seeing the symbol "infinity"? The time complexity 
> > of various algorithms in your mind... it's open ended... unpredictable... 
> > your mind symbolizes symbols. Infinity symbolizes symbol creation, it's an 
> > operation not a number or data point until you reach a boundary of 
> > thermodynamic expense being a compressor agent in a virtualized escapism 
> > pulled back to finite entropic reality. Thermo-entropically bound in a 
> > virtual-entropic projection searching for escape velocity... and not 
> > finding it... your concept of infinity being transmitted to other 
> > compression agents who are similarly entrapped and virtualizing out 
> > attempting more efficient combustion and intelligence increase... thus the 
> > qualia of infinity is protocolized for systemic intelligence maximization.
> > 
> > John
> > 

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T55454c75265cabe2-Mba58a3ed7ceaa1f47ebb8b6d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
