Hi Telmo,

Thank you for these links; they are very helpful in articulating the problem. I think you are right that there is some connection between the communication of qualia and the symbol grounding problem.
I used to think there were two kinds of knowledge:

1. Third-person, sharable knowledge: information that can be shared and communicated through books, like the population of Paris or the height of Mount Everest.
2. First-person knowledge: information that must be felt or experienced first-hand: emotions, feelings, the pain of a bee sting, the smell of a rose.

But now I am wondering if the idea of third-person sharable knowledge is an illusion. The string encoding the height of Mount Everest is meaningless if you have no framework for understanding physical spaces, units of length, spatial extents, and the symbology of numbers. All of that information has to be unpacked, and eventually processed into some thought that relates to a basis of conscious experience and an understanding of heights and sizes. Even size is a meaningless term when attempting to compare relative sizes between two universes, so in that sense it must somehow be tied back to the subject.

There also seem to be counter-examples to a clear divide between first- and third-person knowledge. For example, is the redness of red really incommunicable between two synesthetes who both see the number *5* as red? If everyone in the world had such synesthesia, would we still think book knowledge could not communicate the redness of red? In this case, what makes redness communicable is the shared processing between the brains of the synesthetes: their brains process the symbol in the same way.

More comments below:

On Thu, Apr 8, 2021 at 11:11 AM Telmo Menezes <[email protected]> wrote:

> Hi Jason,
>
> I believe that you are alluding to what is known in Cognitive Science as
> the "Symbol Grounding Problem":
> https://en.wikipedia.org/wiki/Symbol_grounding_problem
>
> My intuition goes in the same direction as yours, that of "procedural
> semantics". Furthermore, I am inclined to believe that language is an
> emergent feature of computational processes with self-replication.
> From signaling between unicellular organisms all the way up to human
> language.

That is interesting. I do think there is something self-defining about the meaning of processes. Something that multiplies two inputs can always be said to be multiplying. The meaning of the operation is grounded in the behavior of the process itself, which makes it unambiguous: a multiplication process could not be confused with addition or subtraction. This is in contrast to an N-bit string on a piece of paper, which could be interpreted in at least 2^N ways (or perhaps even an infinite number of ways, if you consider which function is applied to that bit string).

> Luc Steels has some really interesting work exploring this sort of idea,
> with his evolutionary language games:
> https://csl.sony.fr/wp-content/themes/sony/uploads/pdf/steels-12c.pdf

Evolving systems that can communicate amongst themselves seem to be a fruitful way to explore these issues. Has anyone attempted to take simple examples, like computer-simulated evolved robots playing soccer, and add a layer that lets each player emit and receive arbitrary signals from other players? I would expect there would be strong evolutionary pressure to learn to communicate things like "I see the ball" or "I'm about to take a shot" to teammates.

> I have been working a lot with language these days. My co-author
> (Camille Roth) and I developed a formalism called Semantic Hypergraphs, which
> is an attempt to represent natural language in structures that are akin to
> typed lambda-calculus:
> https://arxiv.org/abs/1908.10784

One curiosity is that all human languages appear to be "Turing complete" in the sense that we can use natural language to describe and define any finite process. I don't know how significant this is, though, as it is generally easy for a language to achieve Turing completeness.
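As an aside, the point above about an N-bit string admitting many interpretations is easy to make concrete. Here is a minimal Python sketch (the byte values are arbitrary, chosen purely for illustration) showing the same 32 bits decoded four different ways, depending entirely on which process is applied to them:

```python
import struct

# The same 32 bits, with no agreed-upon decoding scheme:
bits = b"\x42\x28\x00\x00"

# A few of the many possible interpretations:
as_unsigned = int.from_bytes(bits, "big")             # unsigned integer
as_signed = int.from_bytes(bits, "big", signed=True)  # two's-complement integer
(as_float,) = struct.unpack(">f", bits)               # IEEE-754 float: 42.0
as_text = bits.decode("latin-1")                      # a character string

print(as_unsigned, as_signed, as_float, repr(as_text))
```

Nothing in the bits themselves selects among these readings; the "meaning" lives entirely in the decoding process, which is the procedural-semantics point.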
I think the central problem with "Mary the super-scientist <https://en.wikipedia.org/wiki/Knowledge_argument>" is that our brains, in general, don't have a way to take received code/instructions and process them accordingly. If our brains could do this, if we could take book knowledge and use it to build neural structures for processing information in specified ways, then Mary could learn what it is like to see red without red light ever hitting her retina. But such flexibility in the brain would make us vulnerable to "mind viruses" that spread by words or symbols. Modern computers clearly demarcate <https://en.wikipedia.org/wiki/NX_bit> executable and non-executable memory to limit similar dangers.

This might also explain the apparent third-person/first-person distinction. We can communicate any Turing machine through language, and understand the functioning and processing of that machine in a third-person way, but without re-wiring ourselves we have no way to perceive it in a direct, first-person way.

> Here's the Python library that implements these ideas:
> http://graphbrain.net/

Very cool!

> So far we use modern machine learning to parse natural language into this
> representation, and then take advantage of the regularity of the structures
> to automatically identify stuff in text corpora for the purpose of
> computational social science research.
>
> Something I dream of, and intend to explore at some point, is to attempt
> to go beyond the parser and actually "execute the code", and thus try to
> close the loop with the idea of procedural semantics.

I have often wondered: if an alien race discovered an English dictionary (containing no pictures), would there be enough internal consistency and information present in that dictionary to work out all the meaning?
I have the feeling that, because there is enough redundancy in it, together with a shared heritage of evolving in the same physical universe, there is some hope that they could, but it might involve a massive computational process to bootstrap. Once they make some headway towards a correct interpretation of the words, however, I think it will end up confirming itself as a correct understanding, much as the end stages of solving a Sudoku puzzle become easier and self-confirming of the correctness of the solution.

Is this the problem you are attempting to solve with the Semantic Hypergraphs/Graphbrain, or one that such graphs could one day solve?

Jason

> On Wed, 31 Mar 2021, at 17:58, Jason Resch wrote:
>
> I was thinking about what aspects of conscious experience are communicable
> and which are not, and I realized all communication relies on some
> pre-existing shared framework.
>
> It's not only things like "red" that are meaningless to someone who's
> never seen it; likewise, things like spatial extent and dimensionality
> would be incommunicable to someone who had no experience with moving in,
> or through, space.
>
> Even communicating quantities requires a pre-existing and common system of
> units and measures.
>
> So all communication (inputs/outputs) consists of meaningless bit strings.
> It is only when a bit string is combined with some processing that meaning
> can be shared. The reason we can't communicate "red" to someone who's
> never seen it is that we would need to transmit a description of the
> processing done by our brains in order to share what red means to oneself.
>
> So in summary, I wonder if anything is communicable, not just qualia, but
> anything at all, when there's not already a common processing system
> between the sender and receiver of the information.
>
> Jason
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhC%3Dq%3D1t6mQzo%2BLLZCOrpXFK9etNojhQ-hgb%2BZaE2wr0A%40mail.gmail.com

