> On 8 Apr 2021, at 21:38, Jason Resch <[email protected]> wrote:
> 
> Hi Telmo,
> 
> Thank you for these links, they are very helpful in articulating the problem. 
> I think you are right about there being some connection between communication 
> of qualia and the symbol grounding problem.
> 
> I used to think there were two kinds of knowledge:
> Third-person sharable knowledge: information that can be shared and 
> communicated through books, like the population of Paris, or the height of 
> Mount Everest
With mechanism, that is still first-person plural knowledge. Third-person 
knowledge is confined to elementary arithmetic.


> First-person knowledge: information that must be felt or experienced first 
> hand, emotions, feelings, the pain of a bee sting, the smell of a rose
… believing in some god, or primitive Reality, ...



> But now I am wondering if the idea of third-person sharable knowledge is an 
> illusion. The string encoding the height of Mount Everest is meaningless if 
> you have no framework for understanding physical spaces, units of length, 
> spatial extents, and the symbology of numbers.

Which is a good insight, as physical reality is a sort of partially 
sharable qualia, although not in any provable way. 
But there is no real problem with the simple combinatorial, or partially 
computable (assuming Church's thesis), part of the arithmetical reality.
We can “communicate” that 24 is an even number from finite, simple hypotheses 
we can agree on, like x + 0 = x, etc.



> All of that information has to be unpacked, and eventually processed into 
> some thought that relates to a basis of conscious experience and 
> understanding of heights and sizes. Even size is a meaningless term when 
> attempting to compare relative sizes between two universes, so in that sense 
> it must be tied somehow back to the subject.

… which explains why, except for the minimum amount of arithmetic needed to 
define what a digital machine is, everything is eventually defined through the 
“eyes” of the self-introspecting machine. Yet, eventually physics can be 
(re)defined by the set of laws of prediction on which all machines agree when 
introspecting themselves deeply enough. 

The physical universe is not “out there”. It is only a common, sharable illusion 
about prediction shared by all universal numbers. The rest is 
geography/history, with contingent aspects related to long computations, that we 
can locally share.


> 
> There also seem to be counter-examples to a clear divide between first- and 
> third-person knowledge. For example, is the redness of red really 
> incommunicable between two synesthetes who both see the number 5 as red?

How could they know they see the same red, or that they have the same 
experience, without defining red ostensively, like Brent mentions regularly 
(and correctly, IMO)?


> If everyone in the world had such synesthesia, would we still think book 
> knowledge could not communicate the redness of red?

I don’t see why synesthesia could help here. It seems you would need to add 
some telepathy, which is probably not what you are thinking?



> In this case, what makes redness communicable is the shared processing 
> between the brains of the synesthetes: their brains process the symbol in 
> the same way.

Yes, but not in a provable way. We might discover later that our substitution 
level is much lower than we thought, and that the qualia “red” needs us to 
emulate the glial cells. The neurons would keep people agreeing on many aspects 
of red, and we might agree on many overlapping experiences, but eventually 
realise, when getting a better artificial brain, that we were not really 
seeing red in the same way after all.

Honestly, I don’t see why synesthesia could help without postulating some 
substitution level. 




> 
> 
> More comments below:
> 
> 
> 
> On Thu, Apr 8, 2021 at 11:11 AM Telmo Menezes <[email protected]> wrote:
> Hi Jason,
> 
> I believe that you are alluding to what is known in Cognitive Science as the 
> "Symbol Grounding Problem":
> https://en.wikipedia.org/wiki/Symbol_grounding_problem
> 
> My intuition goes in the same direction as yours, that of "procedural 
> semantics". Furthermore, I am inclined to believe that language is an 
> emergent feature of computational processes with self-replication. From 
> signaling between unicellular organisms all the way up to human language.
> 
> That is interesting. I do think there is something self-defining about the 
> meaning of processes. Something that multiplies two inputs can always be said 
> to be multiplying. The meaning of the operation is then grounded in the 
> function and behavior of that process, which makes it unambiguous. The 
> multiplication process could not be confused with addition or subtraction. 
> This is in contrast to an N-bit string on a piece of paper, which could be 
> interpreted in at least 2^N ways (or perhaps even an infinite number of ways, 
> if you consider what function is applied to that bit string).

All right.
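Jason's 2^N point can be made concrete with a toy sketch (the decoding conventions below are my own illustrative choices): one 8-bit string, read under four different processing conventions, yields four different "meanings".

```python
# One bit string; with no agreed processing, it fixes no meaning.
# Each decoder below is just one of the many possible readings.
bits = "01000001"

readings = {
    "unsigned integer": int(bits, 2),            # 65
    "ASCII character": chr(int(bits, 2)),        # 'A'
    "bit-reversed integer": int(bits[::-1], 2),  # 130
    "ones' complement": int(bits, 2) ^ 0xFF,     # 190
}

for convention, meaning in readings.items():
    print(f"{convention}: {meaning!r}")
```

The string is the same in every case; only the processing applied to it differs.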


>  
> 
> Luc Steels has some really interesting work exploring this sort of idea, with 
> his evolutionary language games:
> https://csl.sony.fr/wp-content/themes/sony/uploads/pdf/steels-12c.pdf
> 
> Evolving systems that can communicate amongst themselves seems to be a 
> fruitful way to explore these issues. Has anyone attempted to take simple 
> examples, like computer simulated evolved versions of robots playing soccer, 
> and add in a layer that lets each player emit and receive arbitrary signals 
> from other players? I would expect there would be strong evolutionary 
> pressures for learning to communicate things like "I see the ball", "I'm 
> about to take a shot", etc. to other teammates.

Everyone agrees that AlphaGo won against the Go champion.
That requires deep learning, that is, many neural layers. We can do the same in 
natural language, but it is much more complicated to get an artificial person 
grounded in our reality. We need the equivalent of a hippocampus, a good 
handling of long-term and short-term memories. Here, I do think that some small 
programs can get this, except that they might take millions of years to evolve. 
Then such machines will act “intelligently”, fight for their rights, for social 
security ...



>  
> 
> 
> I have been working a lot with language these days. I and my co-author 
> (Camille Roth) developed a formalism called Semantic Hypergraphs, which is an 
> attempt to represent natural language in structures that are akin to typed 
> lambda-calculus:
> https://arxiv.org/abs/1908.10784
> 
> One curiosity is that all human languages appear to be "Turing complete" in 
> the sense that we can use natural language to describe and define any finite 
> process. I don't know how significant this is though, as in general it is 
> pretty easy to achieve Turing complete languages.

Typed lambda calculi (and typed combinators) are usually NOT Turing-complete, 
unless you add a universal type (but then the semantics is again inaccessible 
to the entity itself).

It is related to the eternal hesitation between security/totality and 
liberty/universality/partiality lived by *all* Turing-complete entities.
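For concreteness, here is the untyped side of that trade-off, sketched with the S and K combinators as curried Python functions (a hypothetical illustration, not something from the thread):

```python
# Untyped SK combinators: with no types, anything may be applied to anything.
# S x y z = x z (y z) ;  K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

# S K K behaves as the identity combinator I.
I = S(K)(K)
print(I(42))  # 42
```

A simply typed discipline would accept terms like these but reject the self-applications (x applied to x) from which untyped universality, and the possibility of non-termination, arise.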


> 
> I think the central problem with "Mary the super-scientist 
> <https://en.wikipedia.org/wiki/Knowledge_argument>" is our brains, in 
> general, don't have a way to take received code/instructions and process them 
> accordingly. I think if our brains could do this, if we could take book 
> knowledge and use it to build neural structures for processing information in 
> specified ways, then Mary could learn what it is like to see red without red 
> light ever hitting her retina.

Only by assuming some substitution level, which always requires some act of 
faith...



> But such flexibility in the brain would make us vulnerable to "mind viruses" 
> that spread by words or symbols. Modern computers clearly demarcate 
> <https://en.wikipedia.org/wiki/NX_bit> executable and non-executable memory 
> to limit similar dangers.

That’s what “types” are for. In untyped lambda calculus, as in LISP or the 
SK combinators, the option is complete freedom, with the risk of crashing the 
machine...
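That risk can be exhibited directly. A sketch of my own, using plain Python: the untyped self-application (λx. x x)(λx. x x), to which no simple type can be assigned, runs until it exhausts the interpreter's stack.

```python
# omega = λx. x x ; applying omega to itself reduces to itself forever.
omega = lambda x: x(x)

try:
    omega(omega)
except RecursionError:
    print("crashed: untyped freedom includes unbounded self-application")
```

Python, being untyped at this level, happily accepts the term; only the runtime stack limit saves the machine.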


> 
> This might also explain the apparent third-person / first-person distinction.

Hmm… I do think this comes from the fact that G* proves []p <-> ([]p & p), but 
the machine can never see this; indeed, the machine can define “[]p”, but 
cannot define “[]p & p”. It is like the difference between seeing torture and 
being tortured: it is very different.
The universal machine is aware of “[]p & p”, but cannot associate it with any 
machine, nor with anything third-person describable, in any rationally 
justifiable way. She can say “yes” to the doctor, but that requires an act of 
faith, in mechanism and in some doctor...
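In the standard notation of provability logic (where G is the machine's provable logic of self-reference and G* the true one), the point is:

```latex
% G* proves the equivalence; G does not, so the machine lives it
% without ever being able to justify it rationally.
\begin{align*}
  G^{*} &\vdash\; \Box p \leftrightarrow (\Box p \wedge p)\\
  G     &\nvdash\; \Box p \leftrightarrow (\Box p \wedge p)
\end{align*}
```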



> We can communicate through language any Turing machine, and understand the 
> functioning of that machine and processing in a third person way, but without 
> re-wiring ourselves, we have no way to perceive it in a direct first-person 
> way.

By rewiring yourself, you change yourself, including the possible 
interpretation of previous experiences, but without “you” being able to see the 
difference, because “you” have become another.


> 
>  
> 
> 
> Here's the Python library that implements these ideas:
> http://graphbrain.net/
> 
> 
> Very cool!
>  
> So far we use modern machine learning to parse natural language into this 
> representation, and then take advantage of the regularity of the structures 
> to automatically identify stuff in text corpora for the purpose of 
> computational social science research.
> 
> Something I dream of, and intend to explore at some point, is to attempt to 
> go beyond the parser and actually "execute the code", and thus try to close 
> the loop with the idea of procedural semantics.
> 
> 
> I have often wondered, if an alien race discovered an english dictionary 
> (containing no pictures) would there be enough internal consistency and 
> information present in that dictionary to work out all the meaning?

Certainly not *all* the meaning. We cannot even do that among humans.



> I have the feeling that because there is enough redundancy in it, together 
> with a shared heritage of evolving in the same physical universe, there is 
> some hope that they could, but it might involve a massive computational 
> process to bootstrap. Once they make some headway towards a correct 
> interpretation of the words, however, I think it will end up confirming 
> itself as a correct understanding, much like the end stages of solving a 
> Sudoku puzzle become easier and self-confirming of the correctness of the 
> solution.

They will be able to progress, but will never get all the meaning, even for 
simple arithmetical notions. That is a consequence of incompleteness. Yet, 
better and better approximations can be accessible.

Bruno



> 
> Is this the problem you are attempting to solve with the semantic 
> hypergraphs/graphbrain, or that such graphs could one day solve?
> 
> Jason
>  
> 
> 
> Am Mi, 31. Mär 2021, um 17:58, schrieb Jason Resch:
>> I was thinking about what aspects of conscious experience are communicable 
>> and which are not, and I realized all communication relies on some 
>> pre-existing shared framework.
>> 
>> It's not only things like "red" that are meaningless to someone who's never 
>> seen it; things like spatial extent and dimensionality would likewise be 
>> incommunicable to someone who had no experience with moving in, 
>> or through, space.
>> 
>> Even communicating quantities requires a pre-existing and common system of 
>> units and measures.
>> 
>> So all communication (inputs/outputs) consists of meaningless bit strings. It 
>> is only when a bit string is combined with some processing that meaning can 
>> be shared. The reason we can't communicate "red" to someone who's never seen 
>> it is that we would need to transmit a description of the processing done by 
>> our brains in order to share what red means to oneself.
>> 
>> So in summary, I wonder if anything is communicable, not just qualia, but 
>> anything at all, when there are no common processing systems already shared 
>> by the sender and receiver of the information.
>> 
>> Jason
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/everything-list/CA%2BBCJUhC%3Dq%3D1t6mQzo%2BLLZCOrpXFK9etNojhQ-hgb%2BZaE2wr0A%40mail.gmail.com.
> 
> 