Re: [agi] Philosophy of General Intelligence

2008-09-09 Thread Mike Tintner

Narrow AI : Stereotypical/ Patterned/ Rational

Matt: "Suppose you write a program that inputs jokes or cartoons and outputs
whether or not they are funny."


AGI : Stereotype-/Pattern-breaking/Creative

"What're you rebellin' against?"
"Whaddya got?"

-- Marlon Brando, The Wild One (1953). On screen, he rebelled against the
man; off screen, he rebelled against the rebel stereotype imposed on him.





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] Does prior knowledge/learning cause GAs to converge too fast on sub-optimal solutions?

2008-09-09 Thread William Pearson
2008/9/8 Benjamin Johnston [EMAIL PROTECTED]:

 Does this issue actually crop up in GA-based AGI work? If so, how did you
 get around it? If not, would you have any comments about what makes AGI
 special so that this doesn't happen?


Does it also happen in humans? I'd say yes, so it may be a problem we
can't avoid, only mitigate, by having communities of intelligences share
ideas so that they can shake each other out of their local maxima,
assuming they settle in different ones (different search landscapes and
priors help with this). The community might reach a maximum as well, but
the world isn't constant, so good ideas might not always stay good; that
changes the search landscapes, meaning a maximum may not remain a
maximum any longer.
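One way to read this in GA terms is an island model: separate populations with different priors evolve independently and then exchange individuals, which can pull a stuck population out of its local maximum. A toy sketch (the fitness landscape, parameters, and names are invented for illustration):

```python
import random

def fitness(x):
    # Toy multimodal landscape: local maximum near x=2, global near x=8.
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def evolve(pop, steps=200):
    # Simple (mu+1) hill climb: mutate a random member, keep the best.
    for _ in range(steps):
        parent = random.choice(pop)
        pop.append(parent + random.gauss(0, 0.3))  # small mutations get stuck easily
        pop.sort(key=fitness, reverse=True)
        pop.pop()  # drop the worst, keeping population size constant
    return pop

random.seed(0)
# Two "communities" with different priors (different starting regions).
island_a = evolve([random.uniform(0, 4) for _ in range(10)])   # near the local peak
island_b = evolve([random.uniform(6, 10) for _ in range(10)])  # near the global peak

# Migration: sharing individuals lets the merged community settle on the better peak.
merged = evolve(island_a + island_b)
best = max(merged, key=fitness)
print(round(best, 1), round(fitness(best), 1))
```

The merged community inherits island_b's basin, so the local maximum island_a converged to no longer traps the search.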

  Will




[agi] any advice

2008-09-09 Thread Valentina Poletti
I am applying for a research program and I have to choose between these two
schools:

Dalle Molle Institute of Artificial Intelligence
University of Verona (Artificial Intelligence dept)





Re: [agi] any advice

2008-09-09 Thread Jan Klauck
 Dalle Molle Institute of Artificial Intelligence
 University of Verona (Artificial Intelligence dept)

If they were corporations, from which one would you buy shares?

I would go for IDSIA. I mean, hey, you have Schmidhuber around. :)

Jan




[agi] Artificial humor

2008-09-09 Thread Matt Mahoney
A model of artificial humor: a machine that tells jokes, or at least inputs
jokes and outputs whether or not they are funny. Identify associations of the
form (A ~ B) and (B ~ C) in the audience's language model where (A ~ C) is
believed to be false or unlikely via other associations. Then test whether the
joke activates A, B, and C by association, inducing the association (A ~ C).

This approach differs from pattern recognition and machine learning techniques 
used in other text classification tasks such as spam detection or information 
retrieval: a joke is only funny the first time you hear it. That's because once 
you form the association (A ~ C), it is added to the language model and you no 
longer have the prerequisites for the joke.

Example 1:
Q. Why did the chicken cross the road?
A. To get to the other side.

(I know, not funny, but pretend you haven't heard it).  We have:
A ~ B: Chickens have legs and can walk.
B ~ C: People walk across the road for a reason.
A ~ C: Chickens have human-like motivations.

Example 2 requires a longer associative chain:
(A comment about Sarah Palin) A vice president who likes hunting. What could go 
wrong?

It invokes the false conclusion: (Sarah Palin ~ hunting accident) by inductive 
reasoning: (Sarah Palin ~ vice president ~ Dick Cheney ~ hunting accident) and 
(Sarah Palin ~ hunting ~ hunting accident).  Note that all of the preconditions 
must be present for the joke to work. For example, the joke would not be funny 
if told about Joe Biden (doesn't hunt), George W. Bush (not vice president), or 
if you were unaware of Dick Cheney's hunting accident or that he was vice 
president. In order for a language model to detect the joke as funny, it would 
have to know that you know all four of these facts and also know that you 
haven't heard the joke before.
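The association test above can be sketched in code. This is a minimal, hypothetical implementation: associations are stored as an unordered set of pairs, and the "heard it before" effect falls out of adding (A ~ C) to the model when the joke lands. All names here are invented for illustration:

```python
class HumorModel:
    """Toy audience language model: a set of believed associations."""

    def __init__(self):
        self.assoc = set()  # each association is an unordered pair

    def learn(self, a, b):
        self.assoc.add(frozenset((a, b)))

    def associated(self, a, b):
        return frozenset((a, b)) in self.assoc

    def is_funny(self, a, b, c):
        # Funny iff A~B and B~C are believed but A~C is not yet.
        funny = (self.associated(a, b) and self.associated(b, c)
                 and not self.associated(a, c))
        if funny:
            self.learn(a, c)  # the punchline teaches A~C ...
        return funny         # ... so the same joke is not funny twice

m = HumorModel()
m.learn("chicken", "walking")                 # A ~ B
m.learn("walking", "crossing for a reason")   # B ~ C
print(m.is_funny("chicken", "walking", "crossing for a reason"))  # True
print(m.is_funny("chicken", "walking", "crossing for a reason"))  # False: heard it
```

A real model would of course need the audience-specific knowledge discussed above (the Cheney facts, what jokes have circulated), not a hand-filled set of pairs.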

Humor detection obviously requires a sophisticated language model and knowledge 
of popular culture, current events, and what jokes have been told before. Since 
entertainment is a big sector of the economy, an AGI needs all human knowledge, 
not just knowledge that is work related.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Artificial humor

2008-09-09 Thread Mike Tintner

Matt,

Humor depends not on inductive reasoning by association, reversed or
otherwise, but on the crossing of whole matrices/spaces/scripts... and that
good old AGI standby, domains. See especially Koestler on how this is one
version of all creativity:

http://www.casbs.org/~turner/art/deacon_images/index.html

Solve humor and you solve AGI.







[agi] OpenCogPrime tutorial tomorrow night!

2008-09-09 Thread Ben Goertzel
The introductory OpenCogPrime tutorial will be
tomorrow night...

http://opencog.org/wiki/OpenCogPrime:TutorialSessions

I'm flying home from California on the red-eye tonight so
don't expect me to be fully lucid, but hey ;-)

ben

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome." -- Dr Samuel Johnson


