> "Isn't this just building alphabets of patterns and symbolizing "effective
> complexity" regions (Gell-Mann) on successive iterations while interacting
> with a more general library graph of symbols? Aligning to entropy extrema
> when forming crypticity topology."
Well, it definitely is not *just* that. Ironically, entropy may be an interesting concept in computational mathematics, but it is used in a way that is nearly nonsensical in physics - in my opinion. I think the idea first arose from thinking about combustion: a stable substance is burned and reduced to its (relatively simpler) constituent parts. The problem is that this parochial view was untenable. Is the sun an entropy-producing device? Life on earth is totally dependent on the sun, and from that view the sun would seem more like an anti-entropy device. A more modern view is that combustion and the other forces of physics are acts and effects that occur within a much more complex environment, and the attempt to define extremes is not necessarily a sound way to look at nature.

Entropy is often said to be a measure of randomness. But extreme randomness is unmeasurable. Randomness can only exist within a bounded system. Once you have a bounded system you may start talking about randomness, but even then there are problems with the definition. Any random sequence of numbers, for example, can be made to be the output of an infinite number of pattern generators. Oops - infinity is not a number! When I used the term "infinite number" I was just using a figure of speech, but you knew what I meant. The use of the term "random" is like that in some ways. I mean, if you can understand that any random sequence can also be seen as the product of an infinite number of pattern generators, then is your concept of randomness truly sound? And if the concept of randomness is called into question, how do you think entropic extrema are going to hold up?

Relative randomness - within a constrained system - and relative entropy, also within a constrained system, make a lot more sense, ironically enough. But to use them that way means that you would have to start treating them more like parts of an engineering problem than some ideological superlative.
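The "infinite number of pattern generators" point can be made concrete with a small sketch (my own illustration, not from the original post): two entirely different deterministic generators that emit the same finite sequence. Since any finite observation underdetermines its generator, calling the sequence "random" is only meaningful relative to a bounded class of models.

```python
# Illustrative sketch: a finite sequence underdetermines its generator.
# Two different deterministic "pattern generators" reproduce the same
# prefix, so "randomness" is only meaningful relative to a bounded
# model class. All names here are hypothetical.
from itertools import islice

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator, reduced mod 10 at each step."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x % 10

def table_replay(table):
    """A trivial 'pattern generator': replay a stored table, forever."""
    while True:
        yield from table

prefix = list(islice(lcg(42), 12))            # 12 digits from the LCG
replayed = list(islice(table_replay(prefix), 12))

# Same observable output, structurally different generators.
assert prefix == replayed
```

An observer shown only twelve digits has no way to prefer one generator over the other, or over infinitely many further generators that agree on this prefix; only a constraint on the allowed model class makes the question decidable.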
And that requires a lot more work. So when someone starts casually talking about entropy or even randomness as if these ideas could somehow be used to define what is impossible, I already know that without a lot more work they are not going to be able to build a useful substantive case - not even an intuitive one. Because in an interesting system there may be more than one way to define relative randomness.

Let me try one other idea that you mentioned: "...mining into 'dynamical depth' then inserting 'purer' symbols from the library into the compressed form at the appropriate depth. Symbol injection basically." I have thought a lot about substituting enumeration values for more complicated formulas. But I have also thought about the importance of using symbols that can then be used efficiently in specialized algorithms. The symbols would have to have some interrelated formatting (or some other quality that can be exploited) to allow them to be used efficiently in the algorithm. Yes, I was thinking of multiple layers of compression (or conversion from one compression to another). But these (probably) would not exist as tables of symbols or predefined rewrite rules.

One other thing. I don't care what Shannon said in his paper about entropy any more than I care about Turing's use of the term "nondeterministic polynomial time." I suspect they used those terms the way I used the term "infinite number." In spite of the inanity of using the term "infinite number," I do know that infinity is not a number, and I know something about the use of an approach to infinity in the theory of limits. I only use the term as an abbreviation that most people in a group like this can understand. It is a nominal abbreviation, like Shannon entropy. As I said, I think entropy is better defined within a constrained system. But Shannon entropy is not the only possible *relative* definition in a system.
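One way the "interrelated formatting" of enumerated symbols might look, sketched under my own assumptions (the post does not specify a scheme): pack a structural property of each symbol, here its arity, into the low bits of its integer ID, so an algorithm can act on the ID directly without consulting the library entry that holds the more complicated formula.

```python
# Hypothetical sketch: enumerated symbol IDs whose formatting carries
# structure. The low 4 bits of each ID encode the symbol's arity, so an
# evaluator can dispatch on the ID alone. IDs are offset above 0xFF;
# this sketch assumes literal operands stay below 0x100.
ARITY_BITS = 4

def make_id(index, arity):
    """Enumerate a symbol: library index in the high bits, arity low."""
    return 0x100 | (index << ARITY_BITS) | arity

def arity_of(symbol_id):
    """Read the arity straight out of the ID's formatting."""
    return symbol_id & ((1 << ARITY_BITS) - 1)

# A tiny symbol library: ID -> the "more complicated formula" it replaces.
LIBRARY = {
    make_id(0, 2): lambda a, b: a + b,   # binary add
    make_id(1, 2): lambda a, b: a * b,   # binary multiply
    make_id(2, 1): lambda a: -a,         # unary negate
}

def eval_postfix(code):
    """Evaluate a postfix stream mixing literals and symbol IDs."""
    stack = []
    for item in code:
        if item in LIBRARY:
            n = arity_of(item)           # arity read from the ID itself
            args = [stack.pop() for _ in range(n)][::-1]
            stack.append(LIBRARY[item](*args))
        else:
            stack.append(item)
    return stack.pop()
```

The design point is that the evaluator never needs a side table to learn each symbol's shape; the enumeration value itself is usable data, which is the kind of quality that could let compressed symbol streams feed specialized algorithms directly.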
And Shannon signal processing is not the only interesting system that might be referenced in a discussion like this.

Jim Bromer

On Tue, Oct 9, 2018 at 8:04 AM John Rose <[email protected]> wrote:
> > -----Original Message-----
> > From: Jim Bromer via AGI <[email protected]>
> >
> > Operating on compressed data without having to decompress it is the goal
> > that I am thinking of, so being able to access internal relations would
> > be important. There can be some compressed data that does not contain
> > explicit internal relations, but even then it would be nice to be able
> > to make modifications to the data without decompressing it. My
> > assumption is that the data would have some kind of internal relations
> > that were either implicit in the data or which might be a product of
> > the compression method.
> > The parts of the model that I am thinking about may contain functions to:
> > Compress data.
> > Transform compressed data into another compressed form without
> > decompressing it.
> > Append additional data onto the previous compression without
> > decompressing it.
> > Modify the previously compressed data without decompressing it.
> > Decompress the data.
>
> Isn't this just building alphabets of patterns and symbolizing "effective
> complexity" regions (Gell-Mann) on successive iterations while interacting
> with a more general library graph of symbols? Aligning to entropy extrema
> when forming crypticity topology... shifting lossy and lossless dynamically
> in referencing the general library. IOW, for example, mining into "dynamical
> depth" then inserting "purer" symbols from the library into the compressed
> form at the appropriate depth. Symbol injection basically... the cleaner
> symbols being effectively pre-compressed.
>
> Maybe?
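For readers who want the quoted list of compressed-domain operations made concrete, here is a minimal sketch using run-length encoding as a stand-in compressor (the thread does not commit to any particular method, and these function names are my own). Appending and point-modification operate directly on the (value, count) runs, never materializing the decompressed sequence.

```python
# Minimal sketch of compressed-domain operations, assuming run-length
# encoding (RLE) as the compressor. Runs are mutable [value, count]
# pairs; append and modify work on the runs without decompressing.

def compress(seq):
    """RLE-compress a sequence into [value, count] runs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def decompress(runs):
    """Expand runs back into the full sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

def append_compressed(runs, v):
    """Append one value without decompressing anything."""
    if runs and runs[-1][0] == v:
        runs[-1][1] += 1
    else:
        runs.append([v, 1])

def modify_compressed(runs, i, v):
    """Set position i to v by splitting the containing run in place.
    (Adjacent equal runs are not re-merged in this sketch.)"""
    pos = 0
    for k, (rv, n) in enumerate(runs):
        if pos + n > i:
            pieces = []
            off = i - pos
            if off:
                pieces.append([rv, off])
            pieces.append([v, 1])
            if n - off - 1:
                pieces.append([rv, n - off - 1])
            runs[k:k + 1] = pieces
            return
        pos += n
    raise IndexError(i)
```

A "transform into another compressed form" step would map runs to the target representation run-by-run; whether that is possible without decompressing depends entirely on how much internal structure the two compressed forms expose, which is exactly the point about implicit internal relations above.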
> John

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T55454c75265cabe2-Mc3b4bb2ff7f1cf9e32205da4
Delivery options: https://agi.topicbox.com/groups/agi/subscription
