Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Jean-paul Van Belle
My two cents.  FWIW: Anyone who seriously doubts whether AGI is possible will 
never contribute anything of value to those who wish to build an AGI. Anyone 
wishing to build an AGI should stop wasting time reading such literature 
including postings (let alone replying to them). This is not advocating blind 
or unscientific dogma, sometimes you just have to make a choice in belief 
systems and no one achieved anything of greatness or even just significance by 
listening to those who say it can't be done. Although reading the various 
philosophical arguments against AI was a useful step in my AGI education, I 
went through that phase using books and internet articles. Several times I was 
on the verge of unsubscribing from the list because of those discussions (and 
all of the ego-maniacal mudslinging, flamewars and troll-postings) - I agree 
fully with Harry. I want to see new ideas, experiences on what worked and didn't
work, who's working on what approaches, suggestions for ways forward, 
references to new resources or tools etc. So when e.g. Ben 'criticises' Richard 
Loosemore's model, I'm highly interested (because Richard's way of thinking is 
in some aspects much closer to mine than Ben's approach), when Richard replies 
emotionally, I just skip his reply but when he puts forward a rational argument 
it is extremely interesting to me. So I vote to stop all philosophical 
arguments on the possibility of AGI on this list, even though it is a 
necessary, or better, crucial part of any AGIer's development stage... 
incidentally: storing any AI reading in my AI philosophy folder is typically 
equivalent to utter condemnation, despite the fact that philosophy is one of my 
greatest interests.
Note that you should discount my posting somewhat due to the fact that I
haven't posted anything for quite a while, but that's because I am focusing
what little time I have on building a first-generation prototype.
 
= Jean-Paul
 On 2008/10/15 at 18:12, Harry Chesley [EMAIL PROTECTED] wrote:
On 10/15/2008 8:01 AM, Ben Goertzel wrote:
  What are your thoughts on this?

A narrower focus of the list would be better for me personally.

I've been convinced for a long time that computer-based AGI is possible, 
and am working toward it. As such, I'm no longer interested in arguments 
about whether it is feasible or not. I skip over those postings in the list.

I also skip over postings which are about a pet theory rather than a 
true reply to the original post. They tend to have the form "your idea X 
will not work because it is in opposition to my theory Y, which states 
[insert complex description here]." Certainly one's own ideas and 
theories should contribute to a reply, but they should not /be/ the reply.

And the last category that I skip are discussions that have gone far 
into an area that I don't consider relevant to my own line of inquiry. 
But I think those are valuable contributions to the list, just not of 
immediate interest to me. Like a typical programmer, I tend to 
over-focus on what I'm working on. But what I find irrelevant may be 
spot on for someone else, or for me at some other time.



 



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] organising parallel processes

2008-05-04 Thread Jean-paul Van Belle
I assume that you have checked out Hofstadter's architecture mixing randomness and
relevance (Fluid Analogies Research Group)?

Jean-Paul Van Belle
Associate Professor


Head: Postgraduate Section, Department of Information Systems
Research Associate: Centre for IT and National Development in Africa (CITANDA)
The IS Dept is co-hosting ZA-WWW'08


Contact details: phone +27-21-6504256;   fax +27-21-6502280;   office LC 4.21


 On 2008/05/04 at 09:09, in message [EMAIL PROTECTED], rooftop8000 [EMAIL 
 PROTECTED] wrote:

hi,
I have a lot of parallel processes that are in control of their own activation 
(they can decide which processes are activated and for how long). I need some 
kind of organisation (a simple example would be a hierarchy of processes that 
only activate downwards). 

I'm looking for examples of possible organisations or hierarchies in existing 
AI systems or designs of them. Any ideas?
thanks
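
As a concrete illustration of one such organisation, here is a minimal sketch of
a strictly downward-activating hierarchy of processes (class and method names are
hypothetical, not taken from any existing system):

# Minimal sketch of a strictly downward-activating process hierarchy.
# Class and method names are hypothetical, for illustration only.
class Process:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # processes this one may activate

    def step(self):
        """Do one unit of local work; return the children to activate next."""
        print(f"running {self.name}")
        return self.children             # a real process would choose a subset

def run(root, budget):
    """Activate processes top-down until the step budget is exhausted."""
    frontier = [root]
    while frontier and budget > 0:
        proc = frontier.pop(0)
        budget -= 1
        frontier.extend(proc.step())     # only downward activation is possible

# usage: a three-level hierarchy with a budget of 10 activations
leaves = [Process("leaf-%d" % i) for i in range(4)]
mid = [Process("mid-0", leaves[:2]), Process("mid-1", leaves[2:])]
run(Process("root", mid), budget=10)

A real system would of course let each process decide which children to activate
and for how long, but the budgeted top-down loop is the essential shape of this
organisation.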



  



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI textbook

2008-03-31 Thread Jean-paul Van Belle
Hi Ben

Hereby my proposed additional topics / references for your wiki - aimed
at the more computer scienty/mathematically challenged (like me):
Sorry don't have the time to add directly to the wiki 

AGI ARCHITECTURES (EXPANDS on the COGNITIVE ARCHITECTURES section)
Questions about any Would-Be AGI System. Ben Goertzel - May 20, 2002
Artificial General Intelligence: A Gentle Introduction - Pei Wang
Architectures for intelligent systems by J. F. Sowa
Cognitive Architectures: Research Issues and Challenges by Langley,
Laird & Rogers.
Choosing and getting started with a cognitive architecture to test and
use human-machine interfaces by Frank RITTER. MMI-Interaktiv, #7, Jun04
Artificial General Intelligence PowerPoint presentation by ??
Artificial General Intelligence - Goertzel, Ben; Pennachin, Cassio
(Eds). Chapter by Peter Voss.
Four Contemporary AGI Designs: a Comparative Treatment. Sep 2006. Stan
FRANKLIN, Ben GOERTZEL, Alexei SAMSONOVICH, Pei WANG
Mixing Cognitive Science Concepts with Computer Science Algorithms and
Data Structures: An Integrative Approach to Strong AI. Moshe Looks & Ben
Goertzel
Computational Architectures for Intelligence and Motivation. Darryl N.
Davis. The 17th IEEE International Symposium on Intelligent Control,
ISIC’02, Canada, Oct 2002
Considerations Regarding Human-Level Artificial Intelligence - Nils J.
Nilsson - Jan 2002
A Survey of Artificial Cognitive Systems: Implications for the
Autonomous Development of Mental Capabilities in Computational Agents
David Vernon.

AGENTS
Search and select depending on the nature of your AGI architecture.

AUTONOMIC COMPUTING 
Any one of the IBM AC overview papers, e.g.
Practical Autonomic Computing: Roadmap to Self-Managing Technology. A
White Paper Prepared for IBM, January 2006

BOTS 
Read any document on AIML, check out the Loebner Prize and check the
source code of at least one ChatterBot in your preferred programming
language.

COGNITION
List of cognitive biases - Wikipedia
Contemporary Approaches to Symbol Grounding - Moshe Looks
Interior Grounding, Reflection, and Self-Consciousness - Marvin Minsky
Intl Conf on Brain, Mind & Society, Japan 2005.
Solving the Symbol Grounding Problem: a Critical Review of Fifteen
Years of Research by Mariarosaria Taddeo and Luciano Floridi
French, R. M. (2002). The Computational Modeling of Analogy-Making.
Trends in Cognitive Sciences, 6(5), 200-205.

COMPLEXITY THEORY - any good overview

COMPUTATIONAL INTELLIGENCE
Feigenbaum - Grand Challenges for Computational Intelligence
Some paper on AIXI
Craenen & Eiben - Computational Intelligence 2002
Moshe Looks - Learning with Semantic Spaces: From Parameter Tuning to
Discovery
One of Wang's papers on Cognitive Informatics e.g. Theoretical
Framework of CI, CI Models of the Brain or similar

INTRODUCTORY, POPULAR  GENERAL BOOKS:
# Eric Baum, What is Thought?, 2004
# Ben Goertzel, The Hidden Pattern: A Patternist Philosophy of Mind,
2006
# Ben Goertzel, Cassio Pennachin (Eds.), Artificial General
Intelligence, 2007
# Jeff Hawkins, On Intelligence, 2004
# Steven Pinker, The Stuff of Thought, 2007; How the Mind Works; The
Language Instinct.
# Storrs Hall, Beyond AI
# Franklin, Artificial Minds
# Sowa, Knowledge Representation
# Douglas Hofstadter, Gödel, Escher, Bach
# Ray Kurzweil, The Age of Spiritual Machines & The Singularity Is
Near
# Wolfram, A New Kind of Science
# Smith, On the Origin of Objects
and at least two of the following 'cognitive science compilations'
# Rosenthal (ed), The Nature of Mind
# Bechtel & Graham, A Companion to Cognitive Science
# Posner (ed), Foundations of Cognitive Science
# Wilson & Keil, The MIT Encyclopedia of the Cognitive Sciences
Some other popular science titles that you might consider (some are
quite dated):
# Kevin Warwick, In the Mind of the Machine
# Robert Winston, The Human Mind
# Philip Johnson-Laird - The Computer and the Mind
# Gärdenfors - Conceptual Spaces
# Rita Carter - Mapping the Mind
# Rodney Brooks - Robot
# and you should read at least one book by Roger Penrose (and/or perhaps
Daniel Dennett).

Some advice on literature/articles NOT to read i.e. TIME-CONSUMING
debates to avoid wasting your precious time on:
** PROGRAMMING LANGUAGES **
If you're defining your own AGI project, you have to choose (ideally
one) language. 
IMHO 
if you already have lots of experience and feel comfortable in a
particular language and it  *seems to you* that it is adequate, then
don't waste time debating other languages - *all*  languages have their
advantages and limitations. However, if you have to choose a new
language  (or don't mind changing) then:
if you need raw speed and current hardware is likely to be a
bottleneck, then:
= if your algorithms are fairly classic in nature, choose C#, C++ or
similar - ideally a dialect  that supports parallel hardware
architectures
= but if your algorithms are fairly esoteric and/or you have 'strange'
data structures, look at Lisp, Smalltalk or similar
if you think 

[agi] Scalable computer resources

2008-02-29 Thread Jean-paul Van Belle
Hi
 
There was a thread on cluster and distributed computing earlier. It was in the 
context of some of you possibly needing huge computer resources (bandwidth, 
storage space and/or raw processing power) for a short amount of time. 
Check out Amazon's S3 and EC2 web services. To test out your ideas without 
having to worry about setting up a PC cluster or paying for huge computer 
resources which you only need briefly, Amazon's EC2 initiative (the Elastic 
Compute Cloud) and related web services (e.g. S3 for storage) seem ideal. 
Amazon EC2 enables you to increase or decrease capacity within minutes, not 
hours or days, and you can commission one, hundreds or even thousands of server 
instances simultaneously. 
[http://www.amazon.com/gp/browse.html?node=201590011]
 
Sample costs are:
Compute (EC2): $0.10 per hour of Small Instance computer time [7.5 GB of
memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of
instance storage, 32-bit platform - Linux]
Data transfer out: $0.18 per GB for the first 10 TB / month; $0.16 per GB for
the next 40 TB / month; $0.13 per GB above 50 TB / month
Storage (S3): $0.15 per GB-month of storage used (data transfer in at $0.10
per GB; transfer out at $0.10-0.18 per GB)
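
As a rough back-of-the-envelope check on what a short experiment might cost, here
is a toy calculation using only the sample rates quoted above (the workload
figures are made up purely for illustration):

# Rough cost estimate for a hypothetical short experiment on EC2/S3,
# using only the sample rates quoted above; workload figures are made up.
instances      = 20       # small instances
hours          = 48       # run time per instance
gb_stored      = 100      # data kept on S3 for one month
gb_transferred = 50       # data transferred out (first-10-TB tier)

compute  = instances * hours * 0.10      # $0.10 per instance-hour
storage  = gb_stored * 0.15              # $0.15 per GB-month
transfer = gb_transferred * 0.18         # $0.18 per GB (first tier)

print(f"compute ${compute:.2f}, storage ${storage:.2f}, transfer ${transfer:.2f}")
print(f"total   ${compute + storage + transfer:.2f}")   # => total $120.00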

If you want to do text mining on the web, you can use Alexa's related 
webservice (eg allowing for 'million query research results' mining).
(Note: I don't earn commission on this nor do I have any relations to Amazon :) 
I haven't tested them out (yet) but their main development centre is right 
around the corner from me.
 

Jean-Paul Van Belle

Associate Professor
Head: Postgraduate Section, Department of Information Systems ( 
http://www.commerce.uct.ac.za/InformationSystems/ )
Research Associate: Centre for IT and National Development in Africa (CITANDA) 
( http://www.commerce.uct.ac.za/Organisations/CITANDA/ )
The IS Dept is co-hosting ZA-WWW'08 ( http://www.zaw3.co.za/ )
Contact details: phone +27-21-6504256;   fax +27-21-6502280;   office LC 4.21

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog

2007-12-28 Thread Jean-Paul Van Belle
IMHO more important than working towards contributing clean code would be to 
*publish the (required) interfaces for the modules as well as give standards 
for/details on the knowledge representation format*. I am sure that you have 
those spread over various internal and published documents (indeed, developing 
a system like Novamente or proposing a framework is impossible without those) 
but a cut-and-paste of the relevant sections is essential documentation for 
the framework. Also, a concrete example of how a third-party module would slot 
into this framework would be mightily useful.

I am raising this because many would-be AGI developers have to decide on an 
interface and KR standard even if they develop their own proprietary system - 
there is a lot of mileage in not having to reinvent the wheel.
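
To make the point concrete, here is a minimal sketch of what such a published
module interface plus a toy KR exchange format might look like (all names and
methods are hypothetical illustrations, not OpenCog's or Novamente's actual API):

# Hypothetical sketch of a published module interface plus a toy KR exchange
# format; all names are illustrative only, not OpenCog's or Novamente's API.
from abc import ABC, abstractmethod

class Atom:
    """Toy knowledge-representation unit: a typed node/link with a truth value."""
    def __init__(self, atom_type, name, truth=1.0, outgoing=()):
        self.atom_type = atom_type          # e.g. "ConceptNode", "InheritanceLink"
        self.name = name
        self.truth = truth
        self.outgoing = list(outgoing)      # atoms a link points to

class CognitiveModule(ABC):
    """The contract a third-party module would implement to slot in."""
    @abstractmethod
    def accepts(self, atom):
        """Declare which atoms this module wants to see."""
    @abstractmethod
    def process(self, atoms):
        """Consume a batch of atoms and return newly inferred atoms."""

class SimpleDeduction(CognitiveModule):
    """Example plug-in module: from A->B and B->C infer A->C."""
    def accepts(self, atom):
        return atom.atom_type == "InheritanceLink"
    def process(self, atoms):
        links = [a for a in atoms if self.accepts(a)]
        new = []
        for ab in links:
            for bc in links:
                if ab is not bc and ab.outgoing[1] is bc.outgoing[0]:
                    new.append(Atom("InheritanceLink",
                                    f"{ab.outgoing[0].name}->{bc.outgoing[1].name}",
                                    truth=min(ab.truth, bc.truth),
                                    outgoing=[ab.outgoing[0], bc.outgoing[1]]))
        return new

# usage: cat->mammal, mammal->animal  =>  cat->animal
cat, mammal, animal = (Atom("ConceptNode", n) for n in ("cat", "mammal", "animal"))
kb = [Atom("InheritanceLink", "cat->mammal", 0.9, [cat, mammal]),
      Atom("InheritanceLink", "mammal->animal", 0.8, [mammal, animal])]
for link in SimpleDeduction().process(kb):
    print(link.name, link.truth)            # cat->animal 0.8

The point is only that once the interface and the atom format are written down,
a third party could write SimpleDeduction-style modules without ever seeing the
rest of the code base.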

=Jean-Paul
-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 On 2007/12/28 at 14:59, in message
[EMAIL PROTECTED], Benjamin
Goertzel [EMAIL PROTECTED] wrote:
 On Dec 28, 2007 5:59 AM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:

 OpenCog is definitely a positive thing to happen in the AGI scene.  It's
 been all vaporware so far.
 
 Yes, it's all vaporware so far ;-)
 
 On the other hand, the code we hope to release as part of OpenCog actually
 exists, but it's not yet ready for opening-up as some of it needs to
 be extracted from
 the overall Novamente code base, and other parts of it need to be cleaned-up
 in various ways...
 
 Much of the reason for yakking about it months in advance of releasing it, 
 was a
 desire to assess the level of enthusiasm for it.  There are a number
 of enthusiastic
 potential OpenCog developers on the OpenCog mail list, so in that regard, I 
 feel
 the response has been enough to merit proceeding with the project...
 
 
 I wonder what would be the level of participation?
 
 Time will tell!
 
 -- Ben
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=79895084-0bd555

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Jean-Paul Van Belle
Sounds like the worst-case scenario: computations that need between, say, 20 and 
100 PCs. Too big to run on a very souped-up server (4-way quad-processor with 
128 GB RAM), but scaling up to a 100-PC Beowulf cluster typically means a factor 
10 slow-down due to communications (unless it's a 
local-data/computation-intensive algorithm), so you actually haven't gained much 
in the process. {Except your AGI is now ready for a distributed computing 
environment, which I believe, luckily, Novamente was explicitly designed for.} 
:)
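
A back-of-the-envelope illustration of that communication penalty (an
Amdahl-style estimate with made-up parameters, not a measurement of any real
system):

# Amdahl-style estimate of cluster speed-up with communication overhead.
# The parallel fraction and per-node overhead are made-up illustration values.
def effective_speedup(n_nodes, parallel_fraction, comm_overhead_per_node):
    """Speed-up over one node when part of the work is serial and every extra
    node adds a fixed communication cost (as a fraction of total runtime)."""
    serial = 1.0 - parallel_fraction
    runtime = serial + parallel_fraction / n_nodes + comm_overhead_per_node * n_nodes
    return 1.0 / runtime

for nodes in (1, 10, 20, 100):
    s = effective_speedup(nodes, parallel_fraction=0.95, comm_overhead_per_node=0.002)
    print(f"{nodes:4d} nodes -> ~{s:4.1f}x")
# With these numbers ~20 nodes already captures most of the achievable gain,
# and 100 nodes is actually slower than 20, because the communication cost
# grows linearly with the number of nodes.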
 
=Jean-Paul
 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

 Benjamin Goertzel [EMAIL PROTECTED] 2007/12/07 15:06 
I don't think we need more than hundreds of PCs to deal with these things,
but we need more than a current PC, according to the behavior of our
current algorithms.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73568490-365c88

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-07 Thread Jean-Paul Van Belle
Hi Matt, Wonderful idea, now it will even show the typical human trait of 
lying... when I ask it 'do you still love me?' most answers in its database will 
have 'yes' as an answer, but when I ask it 'what's my name?' it'll call me John.

However, your approach is actually already being implemented to a certain 
extent. Apparently (was it Newsweek? Time?) the No. 1 search engine in 
(Singapore? Hong Kong? Taiwan? - sorry, I forgot) is *not* Google but a local 
language QA system that works very much the way you envisage it (except it 
collects the answers in its own SAN, i.e. not distributed over the user machines).

=Jean-Paul
 On 2007/12/07 at 18:58, in message
 [EMAIL PROTECTED], Matt Mahoney
 [EMAIL PROTECTED] wrote:
 
 Hi Matt
 
 You call it an AGI proposal but it is described as a distributed search
 algorithm that (merely) appears intelligent, i.e. a design for an
 Internet-wide message posting and search service. There doesn't appear to
 be any grounding or semantic interpretation by the AI system? How will it
 become more intelligent?

Turing was careful to make no distinction between being intelligent and
appearing intelligent.  The requirement for passing the Turing test is to be
able to compute a probability distribution P over text strings that varies
from the true distribution no more than it varies between different people. 
Once you can do this, then given a question Q, you can compute answer A that
maximizes P(A|Q) = P(QA)/P(Q).
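
A toy illustration of that selection rule, with a naive character-bigram model
standing in for the 'true' distribution P (this is only a sketch of the
maximization, not the system being proposed):

# Toy illustration of choosing the answer A maximizing P(A|Q) = P(QA)/P(Q),
# with a naive character-bigram model standing in for the distribution P.
import math
from collections import Counter

corpus = "the frog is green. kermit is a frog. new york is a city. july is hot."
pairs = Counter(zip(corpus, corpus[1:]))
singles = Counter(corpus)

def log_p(text):
    """Crude add-one-smoothed log-probability of a string under the bigram model."""
    return sum(math.log((pairs[(a, b)] + 1) / (singles[a] + len(singles)))
               for a, b in zip(text, text[1:]))

def best_answer(question, candidates):
    # log P(QA) - log P(Q) = log P(A|Q); P(Q) is the same for every candidate,
    # so it is enough to score the concatenation QA.
    return max(candidates, key=lambda a: log_p(question + " " + a))

print(best_answer("what colour is kermit?", ["green", "purple", "zzzzz"]))  # green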

This does not require grounding.  The way my system appears intelligent is by
directing Q to the right experts, and by being big enough to have experts on
nearly every conceivable topic of interest to humans.

A lot of AGI research seems to be focused on how to represent knowledge and
thought efficiently on a (much too small) computer, rather than on what
services the AGI should provide for us.

-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73912948-7bb204

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Jean-Paul Van Belle
Interesting - after drafting three replies I have come to realize that it is 
possible to hold two contradictory views and live or even run with it. Looking 
at their writings, both Ben & Richard know damn well what complexity means and 
entails for AGI. 
Intuitively, I side with Richard's stance: if the current state of 'the 
new kind of science' cannot even understand simple chaotic systems - the 
toy problems of three-variable quadratic differential equations and 2-D Alife - 
then what hope is there of finding a theoretical solution for a really complex 
system? The way forward is by experimental exploration of part of the solution 
space. I don't think we'll find general complexity theories any time soon.
On the other hand, practically I think that it *is* (or may be) possible to 
build an AGI system up carefully and systematically from the ground up, i.e. 
inspired by a sound (or at least plausible) theoretical framework or by 
modelling it on real-world complex systems that seem to work (because that's 
the way I proceed too), fine-tuning the system parameters and managing emerging 
complexity as we go along and move up the complexity scale. (Just like 
engineers can build pretty much anything without having a GUT.)
Both paradigmatic approaches have their merits and are in fact complementary: 
explore, simulate, genetically evolve etc. from the top down to get a bird's eye 
view of the problem space, versus incrementally build up from the bottom up, 
following a carefully charted path/ridge in between the chasms of the unknown, 
based on a strong conceptual theoretical foundation. It is done all the time in 
other sciences - even maths!
Interestingly, I started out wanting to use a simulation tool to check the 
behaviour (read: fine-tune the parameters) of my architectural designs, but then 
realised that the simulation of a complex system is actually a complex system 
itself, and it'd be easier and more efficient to prototype than to simulate. But 
that's just because of the nature of my architecture. Assuming Ben's theories 
hold, he is adopting the right approach. Given Richard's assumptions or 
intuitions, he is following the right path too. I doubt that they will converge 
on a common solution, but the space of conceivably possible AGI architectures is 
IMHO extremely large. In fact, my architectural approach is a bit of a poor 
cousin/hybrid: having neither Richard's engineering skills nor Ben's 
mathematical understanding, I am hoping to follow a scruffy alternative path :)
-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 On 2007/12/07 at 03:06, in message [EMAIL PROTECTED],
 Conclusion:  there is a danger that the complexity that even Ben agrees
 must be present in AGI systems will have a significant impact on our
 efforts to build them.  But the only response to this danger at the
 moment is the bare statement made by people like Ben that I do not
 think that the danger is significant.  No reason given, no explicit
 attack on any component of the argument I have given, only a statement
 of intuition, even though I have argued that intuition cannot in
 principle be a trustworthy guide here.
 But Richard, your argument ALSO depends on intuitions ...
 I agree that AGI systems contain a lot of complexity in the dynamical-
 systems-theory sense.
 And I agree that tuning all the parameters of an AGI system externally
 is likely to be intractable, due to this complexity.
 However, part of the key to intelligence is **self-tuning**.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73455082-621f89

Re: [agi] Where are the women?

2007-11-28 Thread Jean-Paul Van Belle
By coincidence, whilst the debate was raging last night (local time :), I was 
busy reading 'Studying Those Who Study Us: An Anthropologist in the World of 
Artificial Intelligence' (Stanford University Press, 2001), which is a 
posthumous collection of academic essays by Diana Forsythe. She roamed 4 or 5 
AI labs for the better part of 10 years, using her trained anthropologist's eye 
to reflect on the culture of AI labs and geeks. A couple of essays concern 
exactly this point (esp. 'Disappearing Women in the Social World of Computing') 
and I have a feeling that she would strongly disagree with the feelings 
expressed on this list, i.e. that women are scarce because of the nature of the 
field - she feels strongly it has much more to do with the social attitudes 
(cultural norms) in the discipline. OK, she took a bit of a feminist angle, but 
that's not surprising considering what happened to her parents (both were 
accomplished computer scientists; the father became famous, the mother was 
forgotten), or probably even more her personal experiences in these labs. 

Anyway it is a very interesting (and quick) read with some good thoughts/inputs 
on other aspects of AI (and AGI) thinking - especially the disconnect between 
how AI geeks think and how the rest of the world (including the user) operates. 
The article that I found the most interesting was 'The Construction of Work in 
Artificial Intelligence' where she highlights strongly what *we* (AI 
scientists) think is real A(G)I as opposed to what we actually really do. It 
relates to an earlier posting of mine whereby I queried how much time the 
people claiming to work on AGI really spend on AGI design as opposed to the 
time spent on peripheral issues (she lists 19 major things AI researchers do, 
only one of which is related to real AI :)

Back to the women, there is at least one very smart woman on this list who's 
elected to stay quiet in this debate... Samantha?

=Jean-Paul
-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 On 2007/11/28 at 19:18, in message
[EMAIL PROTECTED], Robin
Gane-McCalla [EMAIL PROTECTED] wrote:
 The interesting thing about CS and AI is that they are man-defined
 fields whereas physics, chemistry, biology etc are defined by nature.
 Perhaps the simple fact that almost all programming languages and
 concepts in AI were designed by white males (and a geeky subculture of
 white males at that) is the main factor that has limited the entrance
 of women and other minorities rather than other cultural differences.
 
 On Nov 28, 2007 7:46 AM, Jiri Jelinek [EMAIL PROTECTED] wrote:
 Where are the women?

 I once read a short article on this topic. The author was trying to
 explain it suggesting that many technical books are using rather
 man-appealing analogies when explaining concepts which has
 discouraging effect for women. They were about experiment with this in
 Germany, planning to rewrite text-books (/lectures) using neutral and
 woman-appealing analogies. I did not really follow it so not sure what
 the outcome was.

 Regards,
 Jiri Jelinek


 
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=70075803-05025f

Re: [agi] Do the inference rules.. P.S.

2007-10-12 Thread Jean-paul Van Belle
When commenting on a lot of different items in a posting, in-line responses 
make more sense, and using ALL-CAPS is one accepted way of doing it in an email 
client/platform-neutral manner. I for one do it often when responding to 
individual emails, so I don't mind at all. I do *not* associate it with shouting 
in such a context - especially not in the light of the extremely high-quality 
contributions made by Edward on this list (I, for one, think that he has 
elevated the level of discussion here greatly and I have archived more of his 
postings than anyone else's). I do agree that small caps are easier on the eye. 
However, Durk, if one wishes to comment on posting etiquette, I thought one 
other rule was to quote as little of the previous post as necessary to make 
one's point... some members may still have bandwidth issues ;-) (just kidding!)
 
And, for the record, after reading AI literature for well over 20 years and 
having done a lot of thinking, the AGI architecture I'm busy working on is 
strongly founded on insights (principles, axioms, hypotheses and assumptions:) 
many of which are remarkably similar to Edward's views (including those of the 
value of past AI research projects, the role of semantic networks and the 
possibility of symbolic grounding) though I (obviously) differ on some other 
aspects (e.g. complexity :). I hope to invite him and some others to comment on 
my prototype end-2008 (and possibly contribute thereafter :)
 
^.^  
Jean-Paul
 
Research Associate: CITANDA
Department of Information Systems
Email: [EMAIL PROTECTED] 
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 Kingma, D.P. [EMAIL PROTECTED] 2007/10/12 10:57 
Dear Edward, may I ask why you regularly choose to type in all-caps? Do you 
have a broken keyboard? Otherwise, please refrain from doing so since (1) many 
people associate it with shouting and (2) small caps are easier to read... 

Kind regards,
Durk Kingma

On 10/12/07, Edward W. Porter [EMAIL PROTECTED] wrote:


This is in response to Mike Tintner's 10/11/2007 7:53 PM post.  My response is 
in all-caps. 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=52799208-0a3398

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Jean-paul Van Belle
All interesting (and complex!) phenomena happen at the edges/fringe. Boundary 
conditions seem to be a requisite for complexity. Life originated on a planet 
(10E-10 of space), on its surface (10E-10 of its volume). 99.99+% of the 
fractal curve area is boring; it's just the edges of a very small area that are 
particularly interesting. 99.99% of life is not intelligent. 99.9% of 
possible computer programs are completely uninteresting. Hence 99.99+% of 
glider configurations will be completely uninteresting and utterly boring. Most 
of Wolfram's rules produce boring, predictable patterns too.
=Jean-Paul
-- 



 On 2007/10/06 at 02:52, in message [EMAIL PROTECTED],
Linas Vepstas [EMAIL PROTECTED] wrote:
 For the few times that gliders might collide, well, that's more
 complicated. But this is a corner-case, it's infrequent. Like collisions
 between planets, it can be handled as a special case. I mean, heck, 
 there's only so many different ways a pair of gliders can collide, and 
 essentially all of the collisions are fatal to both gliders. So, by this 
 reasoning, GoL must be a low-complexity system.
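
For readers who want to poke at gliders themselves, here is a minimal,
self-contained Game of Life stepper seeded with a single glider (a plain
illustration, independent of either argument above):

# Minimal Game of Life stepper using a set of live cells, seeded with one glider.
from collections import Counter

def step(live):
    """One generation: birth on exactly 3 neighbours, survival on 2 or 3."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(4):
    cells = step(cells)
# After 4 generations the glider reappears, translated one cell diagonally:
assert cells == {(x + 1, y + 1) for (x, y) in glider}
print(sorted(cells))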


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=50732414-a6538f

Re: [agi] AGI Consortium

2007-06-08 Thread Jean-Paul Van Belle
Well-said Samantha :-)

On a different note: something YKY and Mark may want to read about a
possible approach to running a new AGI consortium: eXtreme Research. 'A
software methodology for applied research: eXtreme Researching' by
Olivier Chirouze, David Cleary and George G. Mitchell (Software:
Practice & Experience 2005; 35:1441–1454 - try to get it from
www.interscience.wiley.com). Some interesting ideas on building up
research ideas, prototypes and systems from the ground up with a
distributed group.



 Samantha Atkins [EMAIL PROTECTED] 06/08/07 7:01 PM 
But I don't expect any great understanding about Open Source here.   It
is
not the expertise or prime interest of the group.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] How can you prove form/behaviour are disordered?

2007-06-08 Thread Jean-Paul Van Belle
Hi Matt
Re Halting/non-halting programs:
This try-out works fine for small values of {program length}. For large values 
the problem is essentially unsolvable, though I admit that you could get a fair 
feeling for the distribution by simulating a large number of randomly generated 
programs. The busy beaver sequence (provably) grows faster than any computable 
sequence... (I know because I tried looking for that once, but the best I could 
come up with was what is apparently called the arrow notation.)
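
A small sketch of that 'simulate randomly generated programs' idea, using random
small Turing machines and a step budget (purely illustrative; a machine that has
not halted within the budget might of course still halt later, so this only
bounds the distribution):

# Estimate how many randomly generated programs halt within a step budget,
# using random small Turing machines. Illustrative only.
import random
from collections import defaultdict

def random_machine(n_states):
    """Random transition table: (state, symbol) -> (write, move, next_state).
    A next_state of -1 means halt."""
    table = {}
    for state in range(n_states):
        for symbol in (0, 1):
            table[(state, symbol)] = (random.randint(0, 1),
                                      random.choice((-1, 1)),
                                      random.randint(-1, n_states - 1))
    return table

def halts_within(table, max_steps):
    tape, state, pos = defaultdict(int), 0, 0
    for _ in range(max_steps):
        if state == -1:
            return True
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return state == -1

trials, budget = 10_000, 1_000
halted = sum(halts_within(random_machine(3), budget) for _ in range(trials))
print(f"{halted}/{trials} random 3-state machines halted within {budget} steps")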
Re NL & pattern finding:
The problem arises with:
- Apples are (the forbidden :) fruit. My laptop is an apple. Therefore my 
laptop is (the forbidden) fruit.
- People have legs. Johnny the cripple is a person (a people?). Therefore...
- Eggs are white (or brown:). Yolk is in the egg. Therefore yolk is white.
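
A toy illustration of why naive 'X are Y' chaining produces exactly these
failures (the mini knowledge base is made up for the example):

# Toy 'X are Y' chaining over a made-up mini knowledge base, showing how naive
# transitive inference yields both the sensible and the absurd conclusions.
isa   = {"kermit": "frog", "my laptop": "apple", "johnny": "person"}
facts = {"frog": "is green", "apple": "is a fruit", "person": "has legs"}

for entity, category in isa.items():
    print(f"{entity} {facts[category]}")
# kermit is green       (fine)
# my laptop is a fruit  (polysemy: Apple the brand vs. the fruit)
# johnny has legs       (the default fails for Johnny the cripple)
# The egg/yolk case fails the same way: a property of the whole ('eggs are
# white') is naively inherited by a part ('yolk'), which is false.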


 Matt Mahoney [EMAIL PROTECTED] 06/08/07 9:24 PM 
What is the shortest C program that does not halt?  Here are some 136 bit
programs:
What is the shortest halting program in Java?  I can find 2916 programs of
length 360 bits, but nothing shorter, for example:

- Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
- Cities have tall buildings.  New York is a city.  Therefore New York has
tall buildings.
- Summers are hot.  July is in the summer.  Therefore July is hot.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-05 Thread Jean-Paul Van Belle
Hey but it makes for an excellent quote. Facts don't have to be true if they're 
beautiful or funny! ;-)
Sorry Eliezer, but the more famous you become, the more these types of 
apocryphal facts will surface... most not even vaguely true... You should be 
proud and happy! To quote Mr Bean 'Well, I enjoyed it anyway.'



 Eliezer S. Yudkowsky [EMAIL PROTECTED] 06/05/07 4:38 AM 
Mark Waser wrote:
  
 P.S.  You missed the time where Eliezer said at Ben's AGI conference 
 that he would sneak out the door before warning others that the room was 
 on fire:-)

This absolutely never happened.  I absolutely do not say such things, 
even as a joke, because I understand the logic of the multiplayer 
iterated prisoner's dilemma - as soon as anyone defects, everyone gets 
hurt.

Some people who did not understand the IPD, and hence could not 
conceive of my understanding the IPD, made jokes about that because 
they could not conceive of behaving otherwise in my place.  But I 
never, ever said that, even as a joke, and was saddened but not 
surprised to hear it.

-- 
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] credit attribution method

2007-06-05 Thread Jean-Paul Van Belle
Ok, Panu, I agree with *your statement* below.

[Meta: Now how much credit do I get for operationalizing your idea?]


 Panu Horsmalahti [EMAIL PROTECTED] 06/04/07 10:42 PM 
Now, all we need to do is find 2 AGI designers who agree on something.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
Except that Ogden only included a very few verbs [be, have, come - go, put - 
take, give - get, make, keep, let, do, say, see, send; cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs, diminishing the 
'unambiguity' somewhat. Also, most words are seriously polysemous. But it is a 
very good/interesting starting point!
= Jean-Paul
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 BillK [EMAIL PROTECTED] 2007/06/05 11:18:49 
On 6/5/07, Bob Mottram [EMAIL PROTECTED] wrote:
 I remember last year there was some talk about possibly using Lojban
 as a possible language use to teach an AGI in a minimally ambiguous
 way.  Does anyone know if the same level of ambiguity found in
 ordinary English language also applies to sign language?  I know very
 little about sign language, but it seems possible that the constraints
 applied by the relatively long time periods needed to produce gestures
 with arms/hands compared to the time required to produce vocalizations
 may mean that sign language communication is more compact and maybe
 less ambiguous.

 Also, comparing the way that the same concepts are represented using
 spoken and sign language might reveal something about how we normally
 parse sentences.


http://en.wikipedia.org/wiki/Basic_English

Ogden's rules of grammar for Basic English allows people to use the
850 words to talk about things and events in the normal English way.
Ogden did not put any words into Basic English that could be
paraphrased with other words, and he strove to make the words work for
speakers of any other language. He put his set of words through a
large number of tests and adjustments. He also simplified the grammar
but tried to keep it normal for English users.

More recently, it has influenced the creation of Simplified English, a
standardized version of English intended for the writing of technical
manuals.


BillK


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
Hi Mike
 
Just Google 'Ogden' and/or Basic English - there's lots of info.
And if you doubt that only a few verbs are sufficient, then obviously you need 
to do some reading: anyone interested in building AGI should be familiar with 
Schank's (1975) contextual dependency theory which deals with the 
representation of meaning in sentences. Building upon this framework, Schank & 
Abelson (1977) introduced the concepts of scripts, plans and themes to handle 
story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the 
theory to encompass other aspects of cognition. 
[http://tip.psychology.org/schank.html]
A number of other researchers have also worked on the concept of a few semantic 
primitives (one called them semantic primes) but I'd be a bad teacher if I did 
*your* homework for you... ;-)
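
To give the flavour of such primitives, here is a tiny hand-rolled frame for
'John gave Mary a book' using Schank's ATRANS (transfer of possession) primitive
- the encoding is my own illustration, not Schank's actual notation:

# A tiny conceptual-dependency-style frame for "John gave Mary a book",
# using the ATRANS (abstract transfer of possession) primitive.
# The encoding is illustrative only, not Schank's actual notation.
atrans_event = {
    "primitive": "ATRANS",        # abstract transfer of possession
    "actor":     "John",
    "object":    "book",
    "donor":     "John",
    "recipient": "Mary",
    "tense":     "past",
}

# The same frame would cover 'Mary received a book from John' or 'John handed
# the book over to Mary' - which is the whole point of a small set of primitives.
print(f"{atrans_event['recipient']} now possesses the {atrans_event['object']}")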
 
Jean-Paul
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 Mike Tintner [EMAIL PROTECTED] 2007/06/05 16:48:32 
Except that Ogden only included a very few verbs [be , have , come - go , put - 
take , give - get , make , keep , let , do , say , see , send , causeand 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs diminishing the 
'unambiguity' somewhat. Also most words are seriously polysemous. But it is a 
very good/interesting starting point!
= Jean-Paul
 
How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
I think you are mis-interpreting me. I do *not* subscribe to the semantic 
primitives (I probably didn't put it clearly though). Just trying to answer 
your question re the sufficiency of 10 or so verbs. However, if you are 
considering any reduced vocabulary then you should be familiar with the 
literature/theories and *also* know why it failed. I think other people also 
mentioned that list readers should check old discredited approaches first and 
then see how your current approach is different/better.
Jean-Paul


 Mike Tintner [EMAIL PROTECTED] 06/05/07 7:14 PM 
Thanks. But Schank has fallen into disuse, no? The ideas re script algorithms 
just don't work, do they?  And what I was highlighting was one possible reason 
- those primitives are infinitely open-ended and can be, and are, repeatedly 
being used in new ways. That supposedly minimally ambiguous language looks, 
ironically, like it's maximally ambiguous. 

I agree that the primitives you list are extremely important - arguably central 
- in the development of human language. But to my mind, and I'll have to argue 
this at length, and elsewhere, they show something that you might not like - 
the impossibility of programming (in any conventional sense) a mind to handle 
them. 
  - Original Message - 
  From: Jean-Paul Van Belle 
  To: agi@v2.listbox.com 
  Sent: Tuesday, June 05, 2007 5:44 PM
  Subject: Re: [agi] Minimally ambiguous languages


  Hi Mike

  Just Google 'Ogden' and/or Basic English - there's lots of info.
  And if you doubt that only a few verbs are sufficient, then obviously you 
need to do some reading: anyone interested in building AGI should be familiar 
with Schank's (1975) contextual dependency theory which deals with the 
representation of meaning in sentences. Building upon this framework, Schank & 
Abelson (1977) introduced the concepts of scripts, plans and themes to handle 
story-level understanding. Later work (e.g., Schank, 1982, 1986) elaborated the 
theory to encompass other aspects of cognition. 
[http://tip.psychology.org/schank.html]
  A number of other researchers have also worked on the concept of a few 
semantic primitives (one called them semantic primes) but I'd be a bad teacher 
if I did *your* homework for you... ;-)

  Jean-Paul


  Department of Information Systems
  Email: [EMAIL PROTECTED]
  Phone: (+27)-(0)21-6504256
  Fax: (+27)-(0)21-6502280
  Office: Leslie Commerce 4.21


   Mike Tintner [EMAIL PROTECTED] 2007/06/05 16:48:32 

  Except that Ogden only included a very few verbs [be , have , come - go , put 
- take , give - get , make , keep , let , do , say , see , send , cause and 
because are occasionally used as operators; seem was later added.] So in 
practice people use about 60 of the nouns as verbs diminishing the 
'unambiguity' somewhat. Also most words are seriously polysemous. But it is a 
very good/interesting starting point!
  = Jean-Paul

  How does that work? The first 12 verbs above are among the most general, 
infinitely-meaningful and therefore ambiguous words in the language. There are 
an infinity of ways to come or go to a place.





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Minimally ambiguous languages

2007-06-05 Thread Jean-Paul Van Belle
Sorry yes you're right, I should and would not call Schank's approach 
discredited (though he does have his critics). FWIW I think he got much closer 
than most of the GOFAIers i.e. he's one of my old school AI heroes :) I thought 
for a long time his approach was one of the quickest ways to AGI and I still 
think anyone studying AGI should definitely study his approach closely. In the 
end any would-be AGIst (?:) will have to decide whether she adopts conceptual 
primitives or not - probably, apart from ideological arguments, mainly on the 
basis of how she decides to (have her AGI) ground its/his/her concepts (or not, 
as the case may be).
Personally I'd say that a lot of mental acts do not reduce to his primitives 
easily (without losing a lot in the translation, to paraphrase a good movie:) 
and mental acts are quite important in my AGI architecture.
Just personal opinion of course. =Jean-Paul


 James Ratcliff [EMAIL PROTECTED] 06/05/07 9:19 PM 
I wouldn't say discredited, though he has gone off to study education more 
instead of AI now.
Good article on Conceptual Reasoning

http://library.thinkquest.org/18242/concept.shtml

His SAM project was very interesting with Scripts back in '75, but for a very 
limited domain.

My project has the ability for a KR to contain multiple scripts describing a 
similar event to allow reasoning and generalization of simple tasks.

James Ratcliff

Mark Waser [EMAIL PROTECTED] wrote:  list readers should check old 
discredited approaches first

Would you really call Schank discredited or is it just that his line of 
research petered out?





___
James Ratcliff - http://falazar.com
Looking for something...
   


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] poll: what do you look for when joining an AGI group?

2007-06-04 Thread Jean-Paul Van Belle
Synergy or win-win between my work and the project i.e. if the project 
dovetails with what I am doing (or has a better approach). This would require 
some overlap between the project's architecture and mine. This would also 
require a clear vision and explicit 'clues' about deliverables/modules (i.e. 
both code and ideas). I would have to be able to use these (code, ideas) 
*completely* freely as I would deem fit, and would, in return, happily exchange 
the portions of my work that are relevant to the project.
Basically I agree with what the others wrote below - especially Ben. Except I 
would not work for a company that would aim to retain (exclusive or long-term) 
commercial rights to AGI design (and thus become rulers of the world :) nor 
would I accept funding from any source that aims to adopt AGI research outcomes 
for military purposes. 
Oh and yes, I'd like to be wealthy (definitely *not* rich and most definitely 
not famous - see the recent singularity discussion for a rationale on that one) 
but I already have the things I really need (not having to work for a regular 
income *would* be nice, tho)
= Jean-Paul

Justin Corwin wrote:
If I had to find a new position tomorrow, I would try to find (or
found) a group which I liked what they were 'doing', rather than their
opinions, organization, or plans.
Mark Waser wrote:
important -- 6 which would necessarily include 8 and 9
Matt wrote:
12. A well defined project goal, including test criteria.
Ben wrote:
The most important thing by far is having an AGI design that seems
feasible. For me, wanting to make a thinking machine is a far stronger motivator
than wanting to get rich. The main use of being rich is if it helps to more 
effectively launch a positive Singularity, from my view...
Eliezer wrote:
Clues.  Plural.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Open AGI Consortium

2007-06-03 Thread Jean-Paul Van Belle
my 2 cents worth (both to Mark & YKY):
think of the people you are trying to co-opt onto the project. Some of us (most 
mid-lifers) have *some* income stream (regular job or otherwise) but are 
extremely committed to AGI as one of the main purposes of our lives. Ideally we 
would want a rich donor to sponsor us to work full-time on AGI (though 
personally I doubt whether *I* could work 60-80 hours a week, every week, on AGI) 
but we are not really motivated by money (current or future income streams) - 
some of us only by due credit, and some of the latter group perhaps by fame :) 
We are likely to be interested in neither of your schemes, because our 
philosophy is 'let's build a (couple of) prototype(s) first to see if our ideas 
work and take it from there - either fully proprietary or full OSS'. (OK, Ben 
got a bit further along that track than most of 'us'.) Many of the others, I 
suspect (mainly the younger ones on the list), NEED a regular and solid 
immediate income stream, and your models ALSO do not provide for that. So I am 
not sure what type of individuals (i.e. their personal circumstances) either of 
your schemes attracts/motivates. Perhaps it may be more productive to ask 
people on the list quickly to indicate their interest and/or willingness to 
participate in your scheme (by emailing either of you directly rather than the 
list)?
Just my thoughts...
=Jean-Paul Van Belle

PS @Mike/J Storrs - yes, I remember the Hilbert spaces posting as well but 
skipped it (it was way beyond my intellectual level/maths background); it is 
definitely there in the archives (but perhaps on one of the other lists?)

PSS Ben I loved reading your blog. Pls keep it up. If you ever have time, let 
us know why, of the 3 different AGI approaches you entertained, you went with 
Novamente instead of the Hebbian neural net (and the theorem proving one)... us 
scruffies would like to know... is it just your mathematical bias/background or 
something more fundamental?

PSSS :) Google is doing narrow AI, Semantic Web & NLP; IBM is doing WebFountain 
(i.e. also semantic web) and autonomic computing. So neither seems to be in 
AGI. Anyone know what M$ is up to? They have hired quite a few smart CS *and* 
psych people too...


 Mark Waser [EMAIL PROTECTED] 06/02/07 11:56 PM 
 Yes, I believe there're people capable of producing income-generating stuff 
 in the interim.  I can't predict how the project would evolve, but am 
 optimistic. 

Ask Ben about how much that affects a project . . . .

Note:  Yes, I do have a serious mistrust of the legal system.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Paths not Taken to AGI

2007-06-03 Thread Jean-Paul Van Belle
Thx for your response, Ben (and for the many other contributions on the
list!)

Re Hebbian neural nets – I assume you could calculate an eigenvalue matrix or
some other heuristic approximation (to matrix**n) to speed up
calculations. However, the matrix changes dynamically each time your AGI
learns. Also, the evidence is that the mind switches dynamically quite
easily between different ‘islands of stability’, so small changes in
weights or inputs are likely to produce quite different eigenvalues
– if indeed it converges at all. Hence I’d venture to guess that
it may be computationally less expensive to iterate than to calculate a
reduced matrix each time. Despite this, personally I’d still prefer a
spreading-activation (not necessarily Hebbian) network (tho you have
some of that in your Novamente architecture as well – for your patterns),
especially for the ‘middle level’ (for my top level I also favour a
purely symbolic though much less formal one than the Novamente/NARS/Cyc
approach, mainly because I’m not smart and mathematically skilled enough
:)  Also I think it’s better for different people to try out different
approaches so as to explore the AGI solution space a bit wider.
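
To make the iterate-versus-decompose trade-off concrete, here is a tiny
spreading-activation sketch over a hypothetical weight matrix (illustrative
numbers only; nothing here is Novamente's actual mechanism):

# Tiny spreading-activation sketch: repeatedly propagate activation through a
# weight matrix instead of pre-computing an eigendecomposition of matrix**n.
# The weights, decay and node count are made-up illustration values.
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)) * (rng.random((n, n)) < 0.4)   # sparse random link weights
W = W / W.sum(axis=1, keepdims=True).clip(min=1e-9)   # normalise outgoing weights

activation = np.zeros(n)
activation[0] = 1.0          # stimulate one concept node
decay = 0.8

for _ in range(10):          # a few iterations give a usable activation ranking;
    activation = decay * (activation @ W) + (1 - decay) * activation
    # and if W changes after every learning event, nothing needs re-factoring

print(np.round(activation, 3))   # activation spread over the 6 nodes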

PS Current theorem-proving approaches I always considered to be
narrow AI, or alternatively one of many specialized modules in an AGI, tho
obviously a computer AGI could/would be much more efficient at
theorem-proving than humans. Tho maths, being abstract, would indeed be
one of the areas in which any computer AGI should excel (it should be
one of her main hobbies :)



 Benjamin Goertzel [EMAIL PROTECTED] 06/03/07 3:02 PM 
 of the 3 different AGI approaches you entertained, you went
 with Novamente instead of the Hebbian neural net (and the theorem
proving
 one)... us scruffies would like to know... is it just your
mathematical
 bias/background or something more fundamental?
The Hebbian neural net approach seemed like it would be dramatically
more
computationally expensive, requiring a whole bundle of synapses to do
what
we can do with a single Novamente link.  I.e., it's less natural for the
von
Neumann infrastructure we are stuck with at the moment.  And, once you
get
beyond simple stuff, we don't know how the brain works so we need to
invent
stuff anyway, even in that plan (e.g. I have a scheme for doing
higher-order
logic in neural nets that involves feeding a dimensionally-reduced
version
of a neural net's connection matrix to the same network as an input
vector
... but tuning that would take a lot of work, and there is no
neuroscience
to guide such work, at this point...)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

RE: [agi] Bad Friendly AI poetry ;-)

2007-05-25 Thread Jean-Paul Van Belle
The provable *social* AI
was indeed a very sexy sheila
but she became too emotional
and her brain too irrational
so her creator killda
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 Derek Zahn [EMAIL PROTECTED] 2007/05/25 16:29:19 
The Provably Friendly AI
Was such a considerate guy!
Upon introspection
And careful reflection,
It shut itself off with a sigh.
 
 
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Parsing theories

2007-05-23 Thread Jean-Paul Van Belle
Check bigrams (or, more interestingly, trigrams) in computational
linguistics.
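
A minimal sketch of what 'check bigrams' means in practice - a table of word-pair
counts from which the likelihood of the next word conditional on the previous one
is estimated (the corpus is a made-up toy; this also mirrors the i,j table Eric
describes below):

# Minimal bigram model over a toy corpus: a table of word-pair counts, from
# which P(next word | previous word) is estimated. Corpus is illustrative only.
from collections import Counter

corpus = "the dog chased the cat . the cat chased the mouse .".split()

pair_counts = Counter(zip(corpus, corpus[1:]))
prev_counts = Counter(corpus[:-1])

def p_next(prev, nxt):
    """Maximum-likelihood estimate of P(nxt | prev)."""
    return pair_counts[(prev, nxt)] / prev_counts[prev] if prev_counts[prev] else 0.0

print(p_next("the", "cat"))     # 0.5  (2 of the 4 'the' bigrams continue with 'cat')
print(p_next("cat", "chased"))  # 0.5
print(p_next("dog", "mouse"))   # 0.0  (never observed)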
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 Eric Baum [EMAIL PROTECTED] 2007/05/23 15:36:20 

One way to parametrize 
the likelihood of various arguments would be with a table over
all two word combinations, the i,j entry gives the likelihood
that the ith word and the jth word are the two arguments.
But most likely, in reality, the likelihood of the jth word
will be much pinned down conditional on the ith. 

Is there empirical work with this model?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=e9e40a7e

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-23 Thread Jean-Paul Van Belle
Universal compassion and tolerance are the ultimate consequences of
enlightenment
which one Matt on the list equated IMHO erroneously to high-orbit
intelligence
methinx subtle humour is a much better proxy for intelligence
 
Jean-Paul 
member of the 'let Murray stay' advocacy group aka 'the write 2
doctorates, trigger 2 singularities movement'
just back from 2 weeks enlightenment-seeking in Indian ashram ;-)
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

 Benjamin Goertzel [EMAIL PROTECTED] 2007/05/20 20:38:35 
Personally, I find many of his posts highly entertaining...

If your sense of humor differs, you can always use the DEL key ;-)

-- Ben G

On 5/20/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:

 Why is Murray allowed to remain on this mailing list, anyway?  As a
 warning to others?  The others don't appear to be taking the hint.


Re: [agi] What would motivate you to put work into an AGI project?

2007-05-07 Thread Jean-Paul Van Belle
Interesting question you raise there, Matt (vs :) YKY
 
How many of us would be prepared to work FULL-TIME on AGI:
(0) If a department of defense/military organisation paid you to develop a secret AGI for national defense/intelligence purposes?
(1) If a Microsoft, Google, Sun or IBM came along and hired you full-time to work on either
  (1a) Open-Source; or
  (1b) Proprietary AGI?
(2) If a more 'friendly' research group (e.g. a university or government agency) came along to pay you full-time to work
  (2a) on *their* design/architecture; or
  (2b) on YOUR design, but having to share your findings with the larger community (shared credit)?
(3) If you had sufficient funds of your own?
 
Re (3): I have often wondered how much time one could really spend working continuously on AGI - compare the Institute for Advanced Study in Princeton, where established geniuses (such as Einstein) were/are paid to devote full-time effort to thinking, yet not many earthshaking ideas have actually come out of it. Don't we need a lot of 'time wasted' on trivia such as a real job, leaking plumbing and family in order to have those 1 or 2 hours of creative thinking/work each day? Would it help to have consolidated 8-hour or longer blocks each day? Do people like Ben, Leitz and Peter (Voss) really have so much time to think creatively/design, or is my suspicion right that a lot of their (your :) time is spent on fundraising, PR, communication and management? The grass always seems greener on the other side...
 
Jean-Paul

 Matt Mahoney [EMAIL PROTECTED] 2007/05/07 03:47:28 
 I think we should not go FOSS just because we aren't confident of ourselves, or to try to avoid competition.  We love our work and should go the extra mile to make it profitable.  Those who're not interested in business matters can leave that to somebody else in the group.
The problem with closed source is that you have to pay your employees.  Personally, I am not interested in making a lot of money.  I already make enough to buy what I want.  It is more important to have free time to pursue my interests.  AGI, especially language, is one of my interests.  But I don't want to build something aimlessly like Cyc.  I would like to see an application, a goal against which progress can be measured.  I currently use text compression for this purpose.  Do you have a better idea?


Re: [agi] A New Approach to AGI: What to do and what not to do (includes my revised algorithm)

2007-05-06 Thread Jean-Paul Van Belle
To find a word in a big list you should really use a dictionary / hash table instead of binary search... ;-)
(OK, I know that wasn't the point you were trying to make :)
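
For what it's worth, a tiny illustration of the two lookups being compared (the word list is made up):

import bisect

words = sorted(["apple", "banana", "cherry", "date", "elderberry"])
index = {w: i for i, w in enumerate(words)}      # hash table: O(1) average lookup

# Binary search: O(log n) comparisons on the sorted list
pos = bisect.bisect_left(words, "cherry")
found_by_bisect = pos < len(words) and words[pos] == "cherry"

# Dictionary lookup: a single hash probe
found_by_hash = "cherry" in index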
Jean-Paul 

PS: [META] - people, pls cut off long message includes - some of us don't enjoy always-on high bandwidth :(
 a [EMAIL PROTECTED] 05/06/07 2:36 AM 
For example, in computational
linguistics, the algorithm can use a binary search to find records
relating to a word, instead of scanning the whole database.

What I mean is that the database can use indexes with a binary search
algorithm to locate the word faster. This means that it avoids scanning
each and every record of the database to find the pixel representation
of the letters of the word (the bitmap image of the word).



Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Jean-Paul Van Belle
You're mostly correct about the word symbols (barring onomatopoeic words such as bang, hum, clipclop, boom, hiss, howl, screech, fizz, murmur, clang, buzz, whine, tinkle, sizzle, twitter, as well as prefixes, suffixes and derived wordforms, which all allow one to derive some meaning).
However, you are NOT correct about NUMBERS, mathematical formulae etc. Though they are abstract, a lot of their MEANING (semantics) is contained in their notation. Give me the binary number 1001011001 and I can immediately tell you its predecessor, its int(log2()), and whether it's even or divisible by (decimal) 256. Which is exactly why some among us like formal systems so much.
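
A tiny worked version of that example, showing how each property falls straight out of the binary notation (Python used purely for concreteness):

n = 0b1001011001                       # 601 in decimal

predecessor   = n - 1                  # 600
floor_log2    = n.bit_length() - 1     # 9: position of the leading 1
is_even       = (n & 1) == 0           # False: the last binary digit is 1
divisible_256 = (n & 0xFF) == 0        # False: would need eight trailing zeros

print(predecessor, floor_log2, is_even, divisible_256)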
As an aside, I remember once speculating/thinking/making a start on a language whereby the meaning (or rather 'code') of any word would be reflected in its notation/representation... it started off with a commitment to a rather unwieldy ontology. Think of a Dewey system for words, but also including verbs, adjectives and other word categories. That was 20-odd years ago; yes, I was quite naive in those days ;-)
Jean-Paul

 Mike Tintner [EMAIL PROTECTED] 05/01/07 3:38 AM 
Symbols are ABSTRACT. Numbers included. Entirely abstract in relation to
the 
signified.



Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-30 Thread Jean-Paul Van Belle
Not quite on the grammar topic but on the related topic of 'restricted
vocabulary':
 
A couple of us have been/are considering using Simple (or the alternative Basic) English or other restricted vocabulary sets. IMHO there are actually two issues here:
 
(1) You may wish to have a simplified (i.e. reduced) representation of the knowledge base, i.e. using a minimal set of concepts/attributes/etc. as a starting point (for those of us who don't bootstrap *all* knowledge :) - in this case it is fine IMHO; the system can learn the new concepts later.
 
(2) You may wish to restrict communication/interaction of humans with the system to that set of words. This is *NOT* a good idea: most words outside the simple/basic vocabulary set actually correspond to refined concepts (usually as per the definition of the word), and you will have to have - somewhere in your system, depending on your knowledge representation scheme - a pointer (or whatever) to the data item that represents/corresponds to that (complex or composite) 'concept' ANYWAY. But then it is silly not to use the real English word as the token/label for that node in your database. BTW my two arguments for this are: (2a) this is exactly the reason why kids can pick up new words at the rate of 10+/day... they hear the word and it maps directly onto a construct/concept that is already present in their mind; they don't have to construct an entire new structure in their mind; (2b) when you look at those lists of proposed new (rather silly) words (à la 'The Meaning of Liff' etc.), we *all* recognise the concepts/feelings/situations which these words map to, and can see quite well why these should/could be given a special word.
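
A toy sketch of that labelling point (hypothetical node structure, purely illustrative): whichever vocabulary you expose, the concept node has to exist, so you may as well key it by the real English word.

# Same toy concept node, two labelling schemes.
restricted = {"concept_4711": {"gloss": "high-pitched grating cry", "isa": "sound"}}
full_words = {"screech":      {"gloss": "high-pitched grating cry", "isa": "sound"}}

# Keying by the real word saves the extra word -> concept_4711 mapping table
# that a restricted-vocabulary front end would need anyway.
print(full_words["screech"]["isa"])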
 
Jean-Paul Van Belle


On 4/29/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 The idea that human beings should constrain themselves to a simplified, artificial kind of speech in order to make life easier for an AI, is one of those Big Excuses that AI developers have made, over the years, to cover up the fact that they don't really know how to build a true AI.

 It is a temptation to be resisted.


Re: [agi] rule-based NL system

2007-04-29 Thread Jean-Paul Van Belle
@ Mike: remember that she wasn't blind/deaf from birth - read her autobiographical account (available on Project Gutenberg - which is an excellent corpus source btw - also available on DVD :) for how she finally hooked up the concept of words as tokens for real-world concepts when linking the word water with her memory of water from when she wasn't blind/deaf yet. It's a nice read for those who are interested in how 'grounding of concepts' happens (tho the nuggets are few and far between). See below two extracts.

@Matt: yes, as far as I remember typical human neurons have a few 1,000 to 10,000 synapses. But note that there are several other types of neurons - especially the ones linking different brain areas together, as well as those relatively very rare (much less than 10) 'emotional state/feeling' neurons that hook up/traverse many different areas of the brain.


 Mike Tintner [EMAIL PROTECTED] 04/29/07 2:04 AM 
Helen Keller must have had a tough time existing without words.
According to you she didn't know the shape of the chairs she sat on. She
had no words.

From Keller's autobiography: 
There was, however, one word the meaning
of which I still remembered, WATER. I pronounced it wa-wa. Even
this became less and less intelligible until the time when Miss
Sullivan began to teach me. I stopped using it only after I had
learned to spell the word on my fingers.
(and much later)
We walked down the path to the well-house, attracted by the
fragrance of the honeysuckle with which it was covered. Some one
was drawing water and my teacher placed my hand under the spout.
As the cool stream gushed over one hand she spelled into the
other the word water, first slowly, then rapidly. I stood still,
my whole attention fixed upon the motions of her fingers.
Suddenly I felt a misty consciousness as of something
forgotten--a thrill of returning thought; and somehow the mystery
of language was revealed to me. I knew then that w-a-t-e-r
meant the wonderful cool something that was flowing over my hand.
That living word awakened my soul, gave it light, hope, joy, set
it free! There were barriers still, it is true, but barriers that
could in time be swept away.



Re: [agi] Low I.Q. AGI

2007-04-16 Thread Jean-Paul Van Belle
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

Since I voiced my concern with the 'AGI = Singularity' automatic assumption here earlier (give me 1000 times more time than Einstein to think up Relativity Theory and I still couldn't; give me 1000 times more data and I'll be seeing less, not more, forest), let me add some corollaries to/musings about Jef's argument:
(1) if (by force) we confine a super-AGI to a single problem situation or even our own limited environment for long enough (ignore the ethical slavery aspect for a moment), won't it go crazy - just like many geniuses go crazy, or at the very least turn very eccentric, after a relatively short life of intensive intellectual creativity?
(2) will we recognise the difference between AGI genius and AGI craziness even at the early stage in its life? We hardly recognize it in human geniuses (and remember that the parameters in a normal human only need to be slightly off before (s)he is considered crazy - it'll be hard enough to get the parameters right for our human-level AGI).
(3) once/if it goes off into its own super-intelligence space (likely to be in intellectual domains such as maths), I doubt that we will ever be able to recognize what it does (try reading an advanced maths, physics or theology/philosophy book).
Jean-Paul Van Belle
 
Jef Allbright [EMAIL PROTECTED] 2007/04/15 21:40:06 
While such a machine intelligence will quickly far exceed human capabilities, from its own perspective it will rapidly hit a wall due to having exhausted all opportunities for effective interaction with its environment.  It could then explore an open-ended possibility space à la Schmidhuber, but such increasingly detached exploration will be increasingly detached from intelligence in an effective sense.
On 4/15/07, Pei Wang [EMAIL PROTECTED] wrote:
 However, to me Singularity is a stronger claim than superhuman
 intelligence. It implies that the intelligence of AI will increase
 exponentially, to a point that is shorter than what we can perceive or
 understand. That is what I'm not convinced.


Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x language factor, with fudge factor = 2 to 4 and language factor = 1 for e.g. Python; 5 for e.g. C++.
I.e. a minimum of 50 klocs (Python), which is what I wishfully think; realistically probably closer to 5000 klocs of C++.
That's of course for the prototype, which may or may not bootstrap. However, the devil's in the data (you're on the money there, Mark) and, more importantly, the architecture and algorithms.
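
Spelled out, the arithmetic behind that guess (a back-of-the-envelope sketch only):

modules  = (50, 100)
locs     = (500, 2500)
fudge    = (2, 4)
language = {"Python": 1, "C++": 5}

low  = modules[0] * locs[0] * fudge[0] * language["Python"]   # 50,000 lines ~ 50 klocs of Python
high = modules[1] * locs[1] * fudge[1] * language["C++"]      # 5,000,000 lines ~ 5,000 klocs of C++
print(low, high)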

Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

| YKY (Yan King Yin) [EMAIL PROTECTED] 2007/03/29 14:42:53 
| What are other people's estimates?



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
Re number of modules - ask any neuroscientist how many modules there are in the brain... and see which ones you think you can do without. My approach was to list the important brain modules, delete those that I thought I could do without, and add a very few that they haven't located or that seem needed. Some modules end up being split into smaller ones as you start delving into implementation issues.
 
Re PYTHON - hey, I thought we just *had* the language debate. FWIW, in a previous life I've coded in Fortran and various flavours of Basic. Python gives a fast learning curve, high productivity, high readability (important if you have gaps between programming time); it *is* OO but also procedural/functional - I like that mix - and it offers self-modification, the efficient data structures which I need, and lots of community support, e.g. MontyLingua gives you a natural language parser for free. Low performance is an issue, but one could always inline C. So Python it is for my first prototype. I don't recommend people change their current language tho if they're happy with it. Still early days for me.

 YKY (Yan King Yin) [EMAIL PROTECTED] 2007/03/29 15:58:45 
On 3/29/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:
 I guess (50 to 100 modules) x (500 to 2500 locs) x fudge factor x language factor
 with fudge factor = 2 to 4 and language factor = 1 for eg Python; 5 for eg C++

50-100 modules?  Sounds like you have a very unconventional architecture.

From what you say, Python sounds like a pretty good *procedural* language -- would you say it's the easiest way to build an AGI prototype?



Re: [agi] small code small hardware

2007-03-29 Thread Jean-Paul Van Belle
IMHO
IF you can provide a learning environment similar in complexity to our world
THEN maximum code size (zipped using Matt Mahoney's algorithm) < portion of non-redundant DNA devoted to the brain
/IMHO
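
To put a (very rough) number on that bound - note the 2%-of-genome share is a placeholder assumption of mine, not a figure from the post:

genome_bp      = 3.2e9      # approximate human genome size in base pairs
bits_per_bp    = 2          # four possible bases
brain_fraction = 0.02       # hypothetical share of non-redundant DNA 'for the brain'

bound_bytes = genome_bp * bits_per_bp * brain_fraction / 8
print(f"{bound_bytes / 1e6:.0f} MB")   # ~16 MB of (compressed) code under these assumptions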
 
Some random thoughts:
Any RAM location can link to any other RAM location, so there are more interconnects.
The structure of RAM can be described very succinctly.
A CPU has 800 million transistors - a much more generous instruction set than our brain.
 
Most likely we're *all* way off the mark ;-)

 kevin.osborne [EMAIL PROTECTED] 2007/03/29 16:24:20 
say 2.5^10e9 interconnects, which is a number too big for even a



Re: [agi] small code small hardware

2007-03-28 Thread Jean-Paul Van Belle
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21
 
 kevin.osborne [EMAIL PROTECTED] 2007/03/28 15:57 
 as a techie: scepticism. I think the 'small code' and 'small
hardware'
 people are kidding themselves. 
Kevin, you're most probably right there. But remember that we small-code people *have* to hold this belief in order to justify ourselves working as individuals / tiny teams, often during spare time and snatched moments. As a small-code person I think the chance of a small-code project achieving AGI is probably 1% (still probably an optimistic estimate) of that of a larger, coordinated, well-funded and focussed research group. But some of us are loners, like it that way, and keep dreaming and thinking away. Some of us have also seen how some really innovative ideas tend to get lost in larger groups due to normalisation/group pressure. And we take heart in the fact that many of the big advances in history (i.e. the big ideas) were typically produced by single individuals or tiny teams. Not so sure about the small hardware bit: Singularity software will require massive distributed hardware IMHO, but prototypes should run fine on tomorrow's PCs. When I get technical enough, I'll plan my nebulous design/architecture around ~2012 hardware, i.e. a couple of 64-processor 256GB-RAM machines - that gives me a realistic time horizon and something concrete.
 
 as a person: nihilism & the human condition. crime, drugs, debauchery.
self-destructive and life-endangering behaviour; rejection of social norms. the world as I know it is a rather petty, woeful place [...]
 
hey, I liked that bit ;-) Most of the time I think the world is a great place tho. But that's probably because I'm living in paradise^H^H^H^H^H^H^H^H^HCape Town ;-)
 
Jean-Paul



Re: [agi] AGI interests

2007-03-26 Thread Jean-Paul Van Belle
Derek, great idea. Here's my story/interest
 
Ever since I was a kid (playing with the very first computers) I have been fascinated & intrigued by the concept of AGI (tho not necessarily by that name). I've had a number of different career paths and many different hobbies, but my largely philosophical interest in AI has remained probably the only constant over the last 30 years. I've pretty much kept up to date with AI and the cognitive sciences, incl. computational linguistics etc., although always as an informed layman^H^H^Hperson. Like you (?) I believe that most current efforts are attempting to hard-code too much intelligence. In an earlier posting I also reiterated my belief that we need to reconceptualize intelligence if we want something that will evolve into a singularity - biologically-inspired designs are (IMHO of course) limited in intrinsic IQ scalability (human intelligence is a Bell curve - twice the brain size or speed does not increase the intelligence level one iota) - so I am still working on my architecture, which is highly modular (looking at Brooks's success, and also neuroscience-inspired :). I believe intelligence can be grounded in an information-only world rather than trying to build a virtual world or going the robot-embodiment route. Lots of interesting stuff going on in this list; I was actually quite interested in the language debate, having changed languages several times. My three big questions at the moment are:
- to validate/refine an AGI architecture (especially a highly modular one), are you better off using a simulation environment (say GoldSim) or building a prototype using a RAD language (say Python ;-)?
- what is the REAL reason highly talented AGI research groups keep pushing their deadlines back? E.g. Ben has announced imminent breakthroughs several times... the one fact he mentioned a few years back that made sense is the huge parameter space/degrees of freedom (you have at least 5 to 10 tunable parameters per module - see the little sketch after this list), but I wonder about the other reasons he hasn't mentioned (barring the excuses), and even more so for other projects - newcomers might learn from concentrating their thinking on AGI aspects where current projects are weak.
- how can I ever get this listserv to move to digest mode? I must have tried 20 times using both IE and FF to no avail (the Singularity one works fine) ;-)
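
The little sketch promised above - why even a coarse grid over those tunable parameters explodes (the three-settings-per-parameter grid is a made-up assumption):

modules            = 50     # lower end of the earlier module estimate
params_per_module  = 5      # 'at least 5 to 10 tunable parameters per module'
settings_per_param = 3      # hypothetical: just low / medium / high

search_space = settings_per_param ** (modules * params_per_module)
print(f"{search_space:.1e}")   # ~1.9e+119 configurations for this coarse grid alone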
Ok that was me. Others?
Jean-Paul Van Belle
 
 
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 DEREK ZAHN [EMAIL PROTECTED] 2007/03/26 19:07:25 
What about the rest of you, what are your interests?



Re: [agi] Why evolution? Why not neuroscience?

2007-03-23 Thread Jean-Paul Van Belle
I like the metaphor. The other good reason NOT to go for neuroscience (i.e. against Ray Kurzweil's upload-the-human-brain argument) is that it may *not* be scalable. Nature may well have pushed close to the limit of biological intelligence (the argument in favour = superior intelligence is a strong survival trait). Double a neural network's size and it won't work twice as well.
However, evolutionary computing also has a problem: the space of possible algorithms/computer programs becomes gigantic - see the busy beaver function. So the more complex algorithms necessary for A(G)I may never be found.
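
A trivial sketch of why that space explodes, counting raw bit strings of a given length and ignoring which ones are even valid programs:

from math import log10

# Number of distinct bit strings (candidate programs) of length n is 2**n.
for n in (100, 1_000, 10_000):
    print(f"{n:>6} bits -> ~10^{n * log10(2):.0f} candidate programs")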
My money (and, of course, personal AGI project ;-) bets on reverse-engineering AI.

 Eugen Leitl [EMAIL PROTECTED] 2007/03/23 13:15:23 
 Why evolution? Why not neuroscience?
 I was reading about Numenta's NuPIC platform today, and it occurred to me that there are really two big promising directions in machine learning/AI today: evolutionary computation and brain reverse-engineering. Some readers might be curious as to why I'm working on evolutionary computation and not neuroscience-based approaches.
