I've certainly considered it, but I wasn't sure where I would fit, and I'm 
only minimally familiar with OpenCog. Also, I have some pretty strong 
intuitions as to how the internal representation should look, to the point 
that I passed over the link parser in favor of implementing my own, to avoid 
a lot of conversion work between the two formats.

I'm a bit concerned about losing control of the direction that's taken -- I 
don't want to be forced to take a back seat to someone who doesn't share my 
vision. I might come on board anyway, though, since it would be nice to be part 
of a team instead of going it alone, and there's no reason other than time 
constraints that I couldn't continue with my own project.

Should I just email Ruiting, or what?



On Aug 27, 2012 6:24 PM, Ben Goertzel <[email protected]> wrote: 


Hey Aaron,
Why don't you join us in OpenCog ... Ruiting Lian is currently attempting to do 
exactly the same thing in OpenCog, and could use some help ;)
See the PDF at

http://wiki.opencog.org/w/Link2Atom
for a general description of the approach...
ben


On Mon, Aug 27, 2012 at 7:10 PM, Aaron Hosford <[email protected]> 
wrote:

That's the base assumption for my current project. I'm starting from human 
language and attempting to derive an internal representation that 
corresponds roughly to that used by humans. It is my hope that once an 
appropriate model of how humans internally represent knowledge is available, 
the actual mental computations we perform to handle higher-level rational 
thought should become much more amenable to understanding and analysis. This 
seems self-evident to me: human beings have clearly solved the GI problem, 
which suggests we have some sort of internal representation that makes the 
representational gymnastics necessary for GI much simpler to perform. This 
approach has the added advantage that its accuracy as a model of human 
internal knowledge representation should be directly verifiable in 
experiments with human subjects.


 
The idea is that I can run a parser on a piece of natural language, extract 
the relationships between the words as a semantic net, convert that format into 
another semantic net that represents the meaning of the language sample, and 
then reverse the flow back to natural language that is identical in meaning 
but may be stated differently. When the meaning has been extracted and 
represented using the internal format, it can be linked up with other semantic 
nets that represent the meanings of other statements/questions. This combined 
net in turn can be analyzed directly as a collection of logical predicates and 
queries, in which the binding of two symbols (word/phrase occurrences) to a 
common referent is directly represented as links from those 
symbols to that referent's node. New statements/queries can be generated 
via inference rules and other daemons, and then converted to natural 
language using the parser, etc. in reverse.
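To make the shared-referent idea concrete, here's a minimal toy sketch in Python. Everything here is illustrative -- the class names, the `(relation, source, target)` triple format standing in for parser output, and the example sentence are all made up for this sketch, not code from my actual system:

```python
# Toy semantic net in which repeated mentions of the same referent
# bind to a single shared node, as described above.

class Node:
    """A single referent/concept in the net."""
    def __init__(self, label):
        self.label = label

class Link:
    """A directed, labeled relationship between two nodes."""
    def __init__(self, relation, source, target):
        self.relation = relation
        self.source = source
        self.target = target

class SemanticNet:
    def __init__(self):
        self.nodes = {}   # label -> Node
        self.links = []

    def node(self, label):
        # One node per referent: a second mention of "John" returns the
        # same Node object, which is how coreference is represented.
        return self.nodes.setdefault(label, Node(label))

    def add(self, relation, src_label, dst_label):
        self.links.append(
            Link(relation, self.node(src_label), self.node(dst_label)))

# Stand-in for parser output for "John saw Mary; he waved",
# with the anaphor "he" already resolved to John:
triples = [
    ("agent",   "saw",   "John"),
    ("patient", "saw",   "Mary"),
    ("agent",   "waved", "John"),
]

net = SemanticNet()
for rel, src, dst in triples:
    net.add(rel, src, dst)
```

The point of the shared-node representation is that "he" and "John" end up as literally the same node, so downstream inference never has to re-resolve the binding; generation would then walk the net back out through the parser's relations in reverse.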


 
I have already built a small proof-of-concept system with Boolean 
links -- either a semantic link exists or it doesn't, rather than 
links having real-valued strengths -- and it was able to resolve 
anaphora moderately well, given its toy nature. Since this initial 
implementation left me unsatisfied with how uncertainty was handled, I'm 
now rebuilding the system using real-valued links that represent 
probability/uncertainty, similar to the truth values used by 
the term-logic-based inference system NARS 
(https://sites.google.com/site/narswang/). Adding in the ability to represent 
uncertainty will allow the system to compare alternatives and choose the 
anaphoric referent that makes the most sense, drawing on knowledge the 
system has already acquired, rather than simply taking the most 
salient/obvious choice in terms of raw language structure, independent of 
conceptual context.
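For a rough idea of what I mean by real-valued links, here is a simplified sketch of (frequency, confidence) truth values with deduction and revision rules in the style of NARS. This is my own reduction of the rules as I understand them from Wang's writings -- the `K` evidential-horizon constant and the exact formulas may differ in detail from any actual NARS release:

```python
# Simplified NARS-style truth values: frequency f in [0, 1] is the
# proportion of positive evidence, confidence c in [0, 1) reflects the
# total amount of evidence behind the judgment.

K = 1  # evidential horizon constant (NARS "personality parameter")

class Truth:
    def __init__(self, frequency, confidence):
        self.f = frequency
        self.c = confidence

def deduction(t1, t2):
    # From A->B and B->C, derive A->C: frequency multiplies, and
    # confidence is discounted by both frequencies and both confidences,
    # so long inference chains decay toward ignorance.
    f = t1.f * t2.f
    c = t1.f * t2.f * t1.c * t2.c
    return Truth(f, c)

def revision(t1, t2):
    # Merge two independent bodies of evidence about the same statement.
    # Convert each confidence back to an evidence weight w = K*c/(1-c),
    # pool the evidence, then convert back.
    w1 = K * t1.c / (1 - t1.c)
    w2 = K * t2.c / (1 - t2.c)
    f = (w1 * t1.f + w2 * t2.f) / (w1 + w2)
    c = (w1 + w2) / (w1 + w2 + K)
    return Truth(f, c)
```

This is exactly what the Boolean version couldn't do: revision lets two conflicting anaphoric hypotheses be weighed against each other instead of one simply overwriting the other, and the confidence decay in deduction keeps weakly supported chains from dominating.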


 
 
 
On Mon, Aug 27, 2012 at 4:55 PM, Anastasios Tsiolakidis 
<[email protected]> wrote:

As I started reading I thought to myself, "I told you 1000 times, it
depends on the criteria." Reading on, I saw that it is precisely the
criteria you use as a parameter. Well, I'd like to find out the
programming language that makes the most money while giving
immortality :) A little more seriously, if the criteria are cognitive,
as they often are in the real world, you'd be digging yourself a hole
too deep to get out of. On the other hand, if the criteria are
domain-specific, relating to well-behaved domains, I am afraid we are
heading towards tautologies and trivialities. Something like
Mathematica would be optimal for algebra, analysis, gravity, mechanics,
etc. (though what about, instead of calculating a parachute drop,
actually measuring a real parachute drop?); for economics, psychology,
and necromancy, most things would do equally badly; and for AGI all
options have so far been worse than bad. Mind you, I am in the process
of defining an AGI architecture not as a compression problem but as a
distributed computation problem, and I would challenge you to answer
the question:

Which programming language/mechanism would be ideal for calculating X
as quickly as possible?

where X, for the sake of argument, is just a/any "heavy calculation"
without necessarily any of the anomalies of chaotic behavior, pi's
infinite series, etc. It is not that I expect intelligence to arise out
of PDEs and integrals; rather, I am asking which is the "perfect"
distributed system for calculus, as I am expecting your answer to take
the form of multipliers and other exotic units all converging in an
addition pipeline. I still can't help thinking that the fastest way
to do parallel computations is the actual experiment; after all, we have
the 3/n-body problem and a ton of mathematics OR just an experiment
with n bodies in a field.



With regards to a possible language for AGI, I don't see how you can
do much better than a human language. Never mind Turing completeness,
we have GI completeness here (except for that part of human language,
perhaps 100% of it, that gets its meaning from its grounding, its
grounding from its embodiment, and its embodiment from - god?)

AT


On Mon, Aug 27, 2012 at 10:44 PM, Russell Wallace
<[email protected]> wrote:
> On Mon, Aug 27, 2012 at 9:12 PM, Ben Goertzel <[email protected]> wrote:
>> For domains in which one is concerned with recognizing large ensembles
>> of weak patterns, the language one uses to represent patterns can make
>> a big difference...
>>
>> Image analysis, genetic data analysis and financial prediction are
>> contexts in which I've found this to be the case
>>
>> In these settings, if one does pattern recognition via automated
>> program learning with an Occam bias,
>> the underlying language relative to which the Occam bias is expressed
>> makes a big difference...
>
> Absolutely, but these overheads are not constants - the computational
> cost of a poor choice of representation language is typically
> exponential.
>
>> From a different direction, consider Hutter's proof that AIXI-tl is as
>> good as any other reinforcement learning system ... up to an arbitrary
>> constant.
>
> Well, much violence is being done to the word 'constant' in this case.
> Sure, f(N) is constant for a given N, but... :)
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/14050631-7d925eb1
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com







  
    
      




-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche







  
    
      






