I have been really busy, but I have thought about the subject a little. Most programs use concise representations of data, as the data relates to the program, so I would guess that many programs already operate on these program-related representations. These might be characterized as partial compressions, since the data is typically not totally compressed.

The other issue, whether these partial compressions have to be decompressed to produce a result of some kind, is a little more intangible. But it is easy to think of simple tasks in which the information produced by the program is operated on in compressed form and only decompressed at the end to output data (for example, to decide how to display it). Some characteristics of the data might be abstracted at the outset and represented in compressed form. The operations of the system could then work on these compressed abstractions, which are in turn represented as strings. The strings could subsequently be applied to the original data to produce some effect. And if the program were able to see that some of these strings of operations refer to other relations (consisting of operations and abstracted characteristics of the data), those relations might be translated in some way. So the goal would be fairly simple to state: the program would have to be able to detect relations in symbols (or symbol-like representations) that it produces itself.

I was wondering whether this could be used in dealing with logical satisfiability problems. For that I would need to create computational rules that are more powerful and flexible than the familiar rules of elementary logic. The problem in contemporary logic is that you typically need to decompress the data to some degree in order to use it in elementary operations.

I believe the first step toward the smushing problem in weighted reasoning is to track the sources of the weighted resultant that will subsequently be used in another computation. That way, the conflation that produces smushing might be resolved to some degree, as needed. But that sounds as if it would require decompression. So the source characteristics would have to be characterized as abstractions that can be used in complicated computational operations without decompression. If the characteristic abstraction methods, and the computational relations between them, were sensitive enough, this plan might work, at least to some degree. In innovative AI this would involve the specialization of abstract relations created in response to the 'experiences' the program encounters.
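To make the first idea concrete, here is a minimal sketch (Python, with made-up names) of what I mean by operating on a partial compression without decompressing it. The "compression" is nothing more than run-length encoding, and the counting and substitution operations act directly on the compressed pairs; it is only an illustration, not a proposal for the actual abstraction method.

    # Toy illustration: run-length encoding as the "partial compression",
    # with operations performed on the compressed form itself.
    def rle_encode(s):
        runs = []
        for ch in s:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)
            else:
                runs.append((ch, 1))
        return runs

    def count_symbol(runs, symbol):
        # Count occurrences of a symbol without expanding the runs.
        return sum(n for ch, n in runs if ch == symbol)

    def substitute(runs, old, new):
        # Rewrite one symbol as another, still in compressed form.
        return [(new if ch == old else ch, n) for ch, n in runs]

    runs = rle_encode("aaaabbbcccaaa")   # [('a', 4), ('b', 3), ('c', 3), ('a', 3)]
    print(count_symbol(runs, 'a'))       # 7, computed without decompression
    print(substitute(runs, 'b', 'x'))    # [('a', 4), ('x', 3), ('c', 3), ('a', 3)]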
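And here is an equally minimal, hypothetical sketch of tracking the sources of a weighted resultant. Each value carries the set of evidence sources that went into it, so a later combination can notice that two supposedly independent weights share a source and discount the conflation instead of decompressing the whole history. The discount formula is arbitrary and only stands in for whatever rule would actually be appropriate.

    # Toy illustration: every weighted value carries its source set, so shared
    # sources can be detected and discounted when values are combined.
    class Weighted:
        def __init__(self, value, sources):
            self.value = value
            self.sources = frozenset(sources)

        def combine(self, other):
            union = self.sources | other.sources
            shared = self.sources & other.sources
            # Arbitrary stand-in rule: shared sources should not count twice.
            discount = 0.5 * len(shared) / max(len(union), 1)
            return Weighted((self.value + other.value) * (1.0 - discount), union)

    a = Weighted(0.6, {"sensor1", "rule3"})
    b = Weighted(0.7, {"sensor1", "rule7"})   # shares "sensor1" with a
    c = a.combine(b)
    print(round(c.value, 3), sorted(c.sources))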
Jim Bromer

On Sat, Jun 10, 2017 at 1:25 PM, Jim Bromer <[email protected]> wrote:

> Ben, Thank you for your comments. I will look at Homomorphic Encryption
> when I get a chance.
> When I said that I did not want to use binary arithmetic as an example, I
> meant that I did not want to use it as an example of operators that are
> able to act on compressed data. I did not mean that I did not want to use
> binary arithmetic although my opinion is that there has to be some other
> fundamental computational operations that we do not know about.
>
> Jim Bromer
>
> On Sat, Jun 10, 2017 at 12:46 PM, Ben Kapp <[email protected]> wrote:
>
>> Reading this I can't help but think about Homomorphic encryption, which
>> provides the ability to perform computation on encrypted data. If you were
>> to use a compression algorithm as your encryption method then you would be
>> done. Unfortunately homomorphic encryption necessitates bitwise encryption
>> and compression algorithms would most certainly take into account more than
>> a single bit when they perform their compression. And so there would be
>> quite a bit of work to do to generalize this field of mathematics to work
>> on that class of encryption methods, but it seems to be where I would focus
>> my efforts if I wished to provide this capability.
>>
>> But you seem to have dismissed homomorphic encryption for some reason
>> when you said the following.
>>
>> "I did not want to use binary arithmetic as an example because computers
>> were designed around those principles."
>>
>> Homomorphic encryption most certainly uses binary arithmetic. Can you
>> elaborate on why you wish to preclude this?
>>
>> Memories in the brain are reconstructive, and confabulatory. Which is to
>> say when you ask someone to recall something, they will not recall
>> information as it was, but rather as their brain is. And one can alter the
>> brain of others to effectively perform CRUD operations on their memories,
>> allowing you to alter those memories to any extent you wish. Such seems to
>> be a rather big problem for memory systems which are brain inspired.
>> Computers have perfect recall and such is highly desirable and a great
>> improvement over humans. I'm not certain why you would wish to replace
>> such a perfect system with a lossy (and confabulatory) system.
>>
>> On Sat, Jun 10, 2017 at 4:02 AM, Jim Bromer <[email protected]> wrote:
>>
>>> Rob,
>>> I will look at the paper when I get a chance.
>>>
>>> Jim Bromer
>>>
>>> On Wed, Jun 7, 2017 at 7:16 PM, Rob Freeman <[email protected]> wrote:
>>>
>>>> Jim,
>>>>
>>>> Have a look at this paper and see if you find it relevant. I understand
>>>> it to be a sketch for logic using distributed representation. RNN's still
>>>> globally optimize, so I think they will still have lossy compression
>>>> (instead of partial compression?) But the idea of using distributed
>>>> representation is on the right track:
>>>>
>>>> Semantic Compositionality through Recursive Matrix-Vector Spaces
>>>> Richard Socher, Brody Huval, Christopher D. Manning, Andrew Y. Ng
>>>> https://nlp.stanford.edu/pubs/SocherHuvalManningNg_EMNLP2012.pdf
>>>>
>>>> -Rob
