Hi Ivan, 

I think it would be best if you could spend a bit of time working on a few 
representative examples that show what you can do with your embedded 
language. AI discussions tend to get very abstract, very quickly :-), so to 
"engineer"-ground ourselves it's best to talk by way of examples. This helps 
highlight what one really *means* :-) by what one does. 

thank you,

Daniel




On Friday, 21 April 2017 00:25:19 UTC+3, Ivan Vodišek wrote:
>
> Mr. Daniel Gross,
>
> I'm afraid I'm going to leave the juicy AGI details to AGI developers (not 
> to say that it is the easy part, far from it). I decided to be just a 
> technical guy, in case anyone is interested in my low-level solution: a 
> programming language that handles application development and knowledge 
> inference with equal ease (or difficulty).
>
> If you are interested in some of my unfinished work, I assembled a paper 
> to show off to some of my academic friends 
> <http://lambda-the-ultimate.org/node/5413>, but it is not yet ready for a 
> broader public. The public version is missing some examples 
> <https://docs.google.com/document/d/1uPGrUomiffB16osLkIWLROYLl4RYC6z-cx4fFkUDnZA/edit?usp=sharing> 
> and their thorough explanations (as a proof of concept), but I decided 
> first to program the language, build a user community, and then show my 
> face to the real AGI researchers, should any of them want to consider 
> another solution.
>
> The project is conceptually defined; I'm in the process of implementing 
> the language in JavaScript, and things look good to me. Wish me luck :)
>
> ivan
>
>
>
> 2017-04-20 22:52 GMT+02:00 Daniel Gross <[email protected]>:
>
>> Hi Ivan, 
>>
>> thank you for your response. 
>>
>> Pattern matching is a very general-purpose mechanism -- in my mind the 
>> key questions are:
>>
>> what governs the language for pattern description, and the semantics of 
>> how patterns match with inputs;
>> what governs the language of transformational rules triggered by patterns;
>>
>> and finally, what mechanism creates patterns and the associated 
>> transformational rules, so that inputs and outputs are correlated 
>> meaningfully, relevantly (semantically, temporally), and accurately 
>> enough in relation to the cognitive support they are intended (i.e., 
>> teleologically) to provide.
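
One minimal way to make the first two questions concrete is to separate the 
pattern language from the transformational rules it triggers. The sketch 
below is a toy construction of mine (the array-based pattern encoding and 
all names are assumptions, not anyone's actual design): patterns are nested 
arrays, strings beginning with "?" are variables, and a rule pairs a pattern 
with a template that reuses the variables bound during matching.

```javascript
// Hypothetical sketch: patterns are nested arrays; strings starting
// with "?" are variables. A rule pairs a pattern with a template that
// reuses the variables bound during matching.

// Deep structural equality, used when a variable is matched twice.
function deepEqual(a, b) {
  return JSON.stringify(a) === JSON.stringify(b);
}

// Try to match `pattern` against `input`, extending `bindings`.
// Returns the bindings on success, or null on failure.
function match(pattern, input, bindings = {}) {
  if (typeof pattern === "string" && pattern.startsWith("?")) {
    if (pattern in bindings) {
      return deepEqual(bindings[pattern], input) ? bindings : null;
    }
    return { ...bindings, [pattern]: input };
  }
  if (Array.isArray(pattern) && Array.isArray(input)) {
    if (pattern.length !== input.length) return null;
    for (let i = 0; i < pattern.length; i++) {
      bindings = match(pattern[i], input[i], bindings);
      if (bindings === null) return null;
    }
    return bindings;
  }
  return pattern === input ? bindings : null;
}

// Instantiate a template with the bindings produced by `match`.
function substitute(template, bindings) {
  if (typeof template === "string" && template in bindings) {
    return bindings[template];
  }
  if (Array.isArray(template)) {
    return template.map(t => substitute(t, bindings));
  }
  return template;
}

// A transformational rule: double negation elimination.
const rule = { pattern: ["not", ["not", "?x"]], template: "?x" };

const bindings = match(rule.pattern, ["not", ["not", "rain"]]);
console.log(substitute(rule.template, bindings)); // → "rain"
```

Here the pattern language is "nested arrays with ?-variables", the matching 
semantics are structural equality plus variable binding, and a 
transformational rule is a pattern/template pair. The third question -- 
what mechanism creates such rules in the first place -- is left open, as in 
the discussion.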
>>
>>
>> Daniel
>>
>> On Thursday, 20 April 2017 23:40:05 UTC+3, Ivan Vodišek wrote:
>>>
>>> Hey Daniel, great to see someone interested in AGI :)
>>>
>>> How about us humans -- I mean, how do we think? I'm not trying to mimic 
>>> our neural networks; I took another, top-down approach, somewhere in 
>>> between, but let's take ourselves as a thinking example. Do we see how 
>>> our thoughts are formed? I think we don't see the math behind them 
>>> (correct me if I'm wrong). All we see in our mind is input sensory data, 
>>> or memories of it. From what we see in the input, we try to adjust our 
>>> output to reach the input we care about. If we fail, we remember that we 
>>> failed. If we succeed, we remember the output actions so we can repeat 
>>> them wherever we find it appropriate. Throughout this process we see 
>>> only our sensory input; we never see the math behind it. From an AGI 
>>> programming standpoint, this math would be the invisible part, the set 
>>> of notions that programmers type into the machine. The machine (at run 
>>> time) doesn't need to see how it really functions behind the curtain, 
>>> just to perform actions based on its input. The analogy is that an 
>>> application user doesn't need to know how the application is programmed 
>>> in order to use it. She enters some data, observes the output, and can 
>>> do wonderful things without ever seeing a line of the code behind the 
>>> application. In that sense, it is possible for us to change the world 
>>> without knowing how we really do it. So, I assume, a machine could do it 
>>> in a similar fashion.
>>>
>>> Let's extrapolate this to our imaginary programming language: how would 
>>> code in this language work? The code reads some input, does some math 
>>> invisible to users, and outputs something back to them -- but what is 
>>> this output, really? If we say that the output is just a replicated 
>>> input from the past, then even the programmer doesn't have to know the 
>>> exact shape of the output. All the programmer needs to know is that a 
>>> user entered something back then and that we want to replicate it in our 
>>> output at a given moment, based again on similarities between input 
>>> data, without knowing what the data actually is. And here we come to the 
>>> essence of the problem: similarity. We need a method to compare inputs 
>>> without knowing their actual values: we need to test whether input I1 
>>> equals input I2. I believe (with some testing behind me) that this is 
>>> all we need to do tasks as complex as solving mathematical equations or 
>>> deriving new knowledge. My belief comes from the existence of a 
>>> mechanism called pattern matching. We pattern match a set of rules 
>>> against some input and produce the relevant rule's output. Remember that 
>>> all of these rule inputs (causes) and outputs (consequences) came about 
>>> simply by remembering and replicating other inputs from past runs of the 
>>> same process. From what I've seen in my work, with this pattern matching 
>>> we can do pretty mean stuff -- even comparing numbers by their positive 
>>> or negative distance from zero, or branching through different decisions 
>>> -- and all we need is to test whether two inputs are equal. We don't 
>>> even have to know what those inputs represent (numbers, letters, colors, 
>>> cats or mice) to do something nice with them, making the world a better 
>>> place to live in.
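
The description above -- remember (cause, consequence) pairs, then replay a 
remembered consequence whenever a new input tests equal to a remembered 
cause -- can be sketched in a few lines. This is a toy illustration under my 
own assumptions (the class and method names are invented, and deep equality 
via JSON.stringify stands in for whatever similarity test the real language 
would use):

```javascript
// Toy sketch: the system never inspects what an input "means"; it only
// remembers (cause, consequence) pairs and tests new inputs for
// equality against remembered causes.
class RuleMemory {
  constructor() {
    this.rules = []; // remembered { cause, consequence } pairs
  }

  // Remember that a past input led to a past output.
  remember(cause, consequence) {
    this.rules.push({ cause, consequence });
  }

  // Replay the consequence of the first remembered cause that equals
  // the new input; equality is the only operation used.
  recall(input) {
    const equal = (a, b) => JSON.stringify(a) === JSON.stringify(b);
    const found = this.rules.find(r => equal(r.cause, input));
    return found ? found.consequence : undefined;
  }
}

const memory = new RuleMemory();
memory.remember("how do I look?", "great, as always");
memory.remember(["light", "red"], ["action", "stop"]);

console.log(memory.recall("how do I look?")); // → "great, as always"
console.log(memory.recall(["light", "red"])); // → ["action", "stop"]
console.log(memory.recall("unseen input"));   // → undefined
```

Note that the branching Ivan mentions falls out for free: different 
remembered causes select different consequences, with no operation other 
than the equality test.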
>>>
>>> I hope I didn't scare you with this philosophy massage. Things are a 
>>> lot simpler when it comes to burning in the rules by which the machine 
>>> does this or that -- be it changing the lights on a semaphore, or 
>>> deciding the moment at which it has to stop its lip motors and speaker 
>>> so as not to offend a person who asked "how do I look?" in the morning 
>>> :) It could all be about input, equality match and output. I am pretty 
>>> sure about it by now.
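
The semaphore example reads naturally as exactly this input/equality/output 
loop: a transition table where "deciding" is nothing more than finding the 
rule whose input equals the current state. A speculative sketch, with all 
names my own:

```javascript
// Speculative sketch: a semaphore controller as a list of rules, where
// choosing the next light is purely an equality match on the current one.
const transitions = [
  { input: "red",    output: "green"  },
  { input: "green",  output: "yellow" },
  { input: "yellow", output: "red"    },
];

function step(state) {
  // Equality match is the only operation: find the rule for this state.
  const rule = transitions.find(r => r.input === state);
  return rule ? rule.output : state; // unknown states are left unchanged
}

let light = "red";
const sequence = [light];
for (let i = 0; i < 3; i++) {
  light = step(light);
  sequence.push(light);
}
console.log(sequence); // → ["red", "green", "yellow", "red"]
```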
>>>
>>> Tx for asking interesting questions :)
>>>
>>> ivan
>>>
>>>
>>> 2017-04-20 21:37 GMT+02:00 Daniel Gross <[email protected]>:
>>>
>>>> Hi Ivan, 
>>>>
>>>> Your work sounds very exciting ... would be great to hear more about 
>>>> it. 
>>>>
>>>> I think one issue with the approach you are describing is that you 
>>>> have to assume knowledge of a second language and, in principle, a 
>>>> mapping from the first to the second. 
>>>>
>>>> I think systems that aim to self-learn (unsupervised) try to omit such 
>>>> an a priori mapping because it would (presumably) make the knowledge 
>>>> capture process non-scalable. 
>>>>
>>>> So you end up with a system that tries to self-learn the meaning of 
>>>> system A on its own terms (via "meta-cognitive" strategies derived from 
>>>> the machine learning approach at hand, which are by definition 
>>>> meaning-agnostic) ... so I wonder where the meaning is in this kind of 
>>>> machine -- if the semantic graph is actually constructed from a 
>>>> machine-learned parse of natural-language text without a predefined 
>>>> mapping to a semantic graph (which is what one wants to build in the 
>>>> first place).
>>>>
>>>> I think this is essentially what confuses me -- if I managed to 
>>>> explain it correctly.
>>>>
>>>> Daniel
>>>>
>>>>
>>>> On Friday, 14 April 2017 14:07:08 UTC+3, Alex wrote:
>>>>>
>>>>> Hi!
>>>>>
>>>>> What is the best textbook (most relevant to OpenCog Node and Link 
>>>>> types) on knowledge representation? I am aware of the books about PLN 
>>>>> and engineering AGI (I am reading them, and they are relevant to the 
>>>>> probabilistic reasoning side of knowledge representation), but I feel 
>>>>> that, e.g., the concepts of inheritance (extensional and intensional) 
>>>>> as adopted by the OpenCog AtomSpace come from earlier work -- so from 
>>>>> what work? I would like to see this work, to put it into a broader 
>>>>> context. I am used to UML, ER and OO design, and I am still struggling 
>>>>> to model knowledge using OpenCog nodes and links. That is why I am 
>>>>> seeking more books to dive into this line of thinking.
>>>>>
>>>>> I am reading now:
>>>>> Knowledge Representation and Reasoning (The Morgan Kaufmann Series in 
>>>>> Artificial Intelligence), 17 Jun 2004, by Ronald Brachman and Hector 
>>>>> Levesque 
>>>>> <https://www.amazon.co.uk/Knowledge-Representation-Reasoning-Artificial-Intelligence/dp/1558609326/ref=sr_1_1?s=books&ie=UTF8&qid=1492167755&sr=1-1&keywords=knowledge+representation>
>>>>>
>>>> -- 
>>>> You received this message because you are subscribed to the Google 
>>>> Groups "opencog" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>> an email to [email protected].
>>>> To post to this group, send email to [email protected].
>>>> Visit this group at https://groups.google.com/group/opencog.
>>>> To view this discussion on the web visit 
>>>> https://groups.google.com/d/msgid/opencog/54b5f383-e6a3-41bb-b2ee-64f7f7cc3c8f%40googlegroups.com
>>>>  
>>>> <https://groups.google.com/d/msgid/opencog/54b5f383-e6a3-41bb-b2ee-64f7f7cc3c8f%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>> .
>>>>
>>>> For more options, visit https://groups.google.com/d/optout.
>>>>
>>>
>
>
