Daniel/Ivan,

It is quite obvious we are not really in OpenCog territory here, but what your discussion is hinting at is that you will need your own theory of meaning, or a theory of the meaning of meaning. At the conceptual level my approach begins where Linas left off, i.e. there is no meaning independent of agents. And yes, signals between agents, even if they are "notes to oneself", attempt to "compress" information about the universe(s): something like "dad's back" means "I don't know if a supernova exploded, but hopefully it won't matter, while dad's return matters in so many ways". A "universe" with explicitly defined agents and their internal states will come as close as possible to intelligently interacting with and understanding humans.
AT

On Thursday, April 20, 2017 at 10:52:27 PM UTC+2, Daniel Gross wrote:
>
> Hi Ivan,
>
> thank you for your response.
>
> Pattern matching is a very general-purpose mechanism -- in my mind the key questions are:
>
> what governs the language for pattern description and the semantics of how patterns match with inputs;
> what governs the language of transformational rules triggered by patterns;
> and finally, what mechanism creates patterns and the associated transformational rules, so that the inputs and outputs are correlated meaningfully, relevantly (semantically, temporally), and accurately enough in relation to the cognitive support they intend to provide (i.e. teleologically).
>
> Daniel
>
> On Thursday, 20 April 2017 23:40:05 UTC+3, Ivan Vodišek wrote:
>>
>> Hey Daniel, great to see someone interested in AGI :)
>>
>> How about us humans -- I mean, how do we think? I'm not trying to resemble our neural networks; I took another, top-down approach, somewhere in between, but let's observe ourselves as a thinking example. Do we see how our thoughts are formed? I think that we don't see the math behind it (correct me if I'm wrong). All we see in our mind is input sensory data, or memories of it. From what we see in the input, we try to adjust our output to reach the input we care about. If we fail, we remember that we failed. If we succeed, we remember the output actions so we can repeat them in places we find appropriate. In this process we can see only through our sensory input, yet we don't see the math behind it. From an AGI programming perspective, this math would be the invisible part, the notions that programmers would type into the machine. The machine (at run time) doesn't need to see how it really functions behind the curtain, just to perform actions based on its input. The analogy is that an application user doesn't need to know how the application is programmed to actually use the application.
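[Editor's note: Daniel's first two questions -- what governs a pattern-description language, its matching semantics, and the rules it triggers -- can be made concrete with a small sketch. This is a hypothetical illustration, not OpenCog code: patterns are nested tuples, strings starting with "?" act as variables, and a rule is just a (pattern, output) pair.]

```python
# Minimal sketch of a pattern-description language and its matching
# semantics. All names here are illustrative, not from any real API.

def match(pattern, data, bindings=None):
    """Structurally match `pattern` against `data`, binding ?-variables.

    Returns a dict of bindings on success, or None on failure.
    """
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:  # variable already bound: must agree
            return bindings if bindings[pattern] == data else None
        return {**bindings, pattern: data}  # bind it now
    if isinstance(pattern, tuple) and isinstance(data, tuple):
        if len(pattern) != len(data):
            return None
        for p, d in zip(pattern, data):
            bindings = match(p, d, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == data else None

def substitute(template, bindings):
    """Fill a rule's output template with the matched bindings."""
    if isinstance(template, str) and template.startswith("?"):
        return bindings.get(template, template)
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def apply_rules(rules, data):
    """Return the output of the first rule whose pattern matches `data`."""
    for pattern, output in rules:
        b = match(pattern, data)
        if b is not None:
            return substitute(output, b)
    return None  # no matching past experience

rules = [
    (("how", "do", "I", "look"), ("you", "look", "great")),
    (("likes", "?x", "mice"), ("?x", "is", "a", "cat")),
]
print(apply_rules(rules, ("likes", "tom", "mice")))
# → ('tom', 'is', 'a', 'cat')
```

Here the "language" is just nested tuples with ?-variables, and the "semantics" is structural recursion plus an equality test at the leaves -- which connects directly to Ivan's point below that equality between inputs may be the only primitive needed.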
>> She enters some data, observes the output, and she can do wonderful stuff without ever seeing a line of the code behind the application. In that sense, it is possible for us to change the world without knowing how we really do it. So, I assume, the machine could do it in a similar fashion.
>>
>> Let's extrapolate this to our imaginary programming language: how would code in this language work? The code reads some input, does some math invisible to users, and outputs something back to users -- but what is this output really? If we say that the output is really just replicated input from the past, then even the programmer doesn't have to know the exact shape of the output. All the programmer needs to know is that the user entered something back there, and that we want to replicate it in our output at a given moment, based again on similarities between input data, without knowing what the data actually is. And here we come to the essence of the problem: similarity. We need a method to compare inputs without knowing the actual value of the input: we need to test whether input I1 equals input I2. And I believe (with some testing behind it) that's all we need to do tasks as complex as solving mathematical equations or deriving new knowledge. My belief comes from the existence of a mechanism called pattern matching. We pattern match a set of rules against some input and produce the relevant rule output. Remember that all these rule inputs (causes) and outputs (consequences) came about simply by remembering and replicating other inputs from past runs of the same process. From what I've seen in my work, with this pattern matching we can do pretty mean stuff, even comparing numbers with regard to their positive or negative distance from zero, or branching through different decisions, and all we need is a test of whether two inputs are equal.
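[Editor's note: Ivan's claim that even numeric comparison can be reduced to equality tests can be sketched with a Peano-style encoding, where a number is a nested structure and ordering emerges from peeling layers. This is an illustration of the idea, not code from the thread.]

```python
# Sketch: recover ordering from equality tests alone by encoding a
# non-negative integer as nested ("succ", ...) terms around "zero".
# (Peano-style encoding; names are illustrative.)

def encode(n):
    """Encode a non-negative integer structurally."""
    return "zero" if n == 0 else ("succ", encode(n - 1))

def less_than(a, b):
    """Compare two encoded numbers using only equality tests.

    Peel one "succ" layer from each side; whichever reaches "zero"
    first was closer to zero.
    """
    if a == "zero":
        return b != "zero"
    if b == "zero":
        return False
    return less_than(a[1], b[1])

print(less_than(encode(2), encode(5)))  # → True
print(less_than(encode(5), encode(2)))  # → False
```

No arithmetic is ever performed: the "distance from zero" is implicit in the nesting depth, and the only primitive is the I1 == I2 test the message describes.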
>> We don't even have to know what these inputs represent -- numbers, letters, colors, cats or mice -- to do something nice with them, making the world a better place to live in.
>>
>> I hope I didn't scare you with this philosophy message; things are a lot simpler when it comes to burning in the rules by which the machine does this or that, be it changing the lights on a traffic signal, or deciding the moment at which it has to stop the lip motors and the speaker, so as not to offend a person who asked "how do I look?" in the morning :) It could all be about input, equality match, and output. I am pretty sure about it by now.
>>
>> Tx for asking interesting questions :)
>>
>> ivan
>>
>> 2017-04-20 21:37 GMT+02:00 Daniel Gross <[email protected]>:
>>
>>> Hi Ivan,
>>>
>>> Your work sounds very exciting ... it would be great to hear more about it.
>>>
>>> I think one issue with the approach you are describing is that you have to assume knowledge of a second language and a mapping, in principle, from the first to the second.
>>>
>>> I think systems that aim to self-learn (unsupervised) try to omit such an a-priori mapping because it would (presumably) make the knowledge capture process non-scalable.
>>>
>>> So, you end up with a system that tries to self-learn the meaning of system A on its own terms (and via "meta-cognitive" strategies derived from the machine learning approach at hand -- which are by definition meaning-agnostic) ... so I wonder where the meaning is in this kind of machine -- if the semantic graph is actually constructed out of the machine-learned parse of natural language text without a predefined mapping to a semantic graph (which is what one wants to build in the first place).
>>>
>>> I think this is essentially what confuses me -- if I managed to explain it correctly ...
>>>
>>> Daniel
>>>
>>> On Friday, 14 April 2017 14:07:08 UTC+3, Alex wrote:
>>>>
>>>> Hi!
>>>> What is the best textbook (most relevant to OpenCog Node and Link types) on knowledge representation? I am aware of the books about PLN and engineering AGI (I am reading them, and they are relevant to the probabilistic reasoning side of knowledge representation), but I feel that, e.g., the concepts of inheritance (extensional and intensional) as adopted by the OpenCog AtomSpace come from earlier work -- so from what work? I would like to see this work, to place it in a broader context. I am used to UML, ER, and OO design, and I am still struggling to model knowledge using OpenCog nodes and links. That is why I am seeking more books to dive into this line of thinking.
>>>>
>>>> I am reading now:
>>>> Knowledge Representation and Reasoning (The Morgan Kaufmann Series in Artificial Intelligence)
>>>> <https://www.amazon.co.uk/Knowledge-Representation-Reasoning-Artificial-Intelligence/dp/1558609326/ref=sr_1_1?s=books&ie=UTF8&qid=1492167755&sr=1-1&keywords=knowledge+representation>
>>>> 17 Jun 2004, by Ronald Brachman and Hector Levesque
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups "opencog" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>>> To post to this group, send email to [email protected].
>>> Visit this group at https://groups.google.com/group/opencog.
>>> To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/54b5f383-e6a3-41bb-b2ee-64f7f7cc3c8f%40googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
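[Editor's note: the extensional/intensional distinction Alex asks about can be illustrated with a toy fuzzy-subset measure. This is a rough sketch, not OpenCog code, and the formula below is a common simplification rather than the exact PLN definition: extensional inheritance compares the *members* of two concepts, while intensional inheritance compares their *properties*.]

```python
# Toy illustration (not OpenCog code) of extensional vs intensional
# inheritance. All concept names and sets below are made up.

def subset_strength(a, b):
    """Fraction of a's elements that also lie in b (0.0 when a is empty)."""
    return len(a & b) / len(a) if a else 0.0

# Extensional view: concepts as sets of members.
cat_members = {"tom", "felix", "garfield"}
pet_members = {"tom", "felix", "rex"}

# Intensional view: concepts as sets of properties.
cat_props = {"furry", "four-legged", "meows"}
pet_props = {"furry", "four-legged", "domestic"}

# Extensional inheritance: how many cats are pets?
print(round(subset_strength(cat_members, pet_members), 2))  # → 0.67
# Intensional inheritance: how many cat-properties are pet-properties?
print(round(subset_strength(cat_props, pet_props), 2))      # → 0.67
```

In the AtomSpace, the extensional side corresponds roughly to SubsetLink-style member overlap and the combined notion to InheritanceLink truth values; the PLN book gives the exact formulas, which mix both views.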
