My two cents. FWIW: Anyone who seriously doubts whether AGI is possible will
never contribute anything of value to those who wish to build an AGI. Anyone
wishing to build an AGI should stop wasting time reading such literature,
including postings (let alone replying to them). This is not
I assume that you have checked out Hofstadter's architecture mixing randomness
and relevance (Fluid Analogies Research Group)?
Jean-Paul Van Belle
Associate Professor
Head: Postgraduate Section, Department of Information Systems
Research Associate: Centre for IT and National Development in Africa
Hi Ben
Hereby my proposed additional topics/references for your wiki - aimed
at the more computer-science-y/mathematically challenged (like me).
Sorry, I don't have the time to add them directly to the wiki.
AGI ARCHITECTURES (EXPANDS on the COGNITIVE ARCHITECTURES section)
Questions about any Would-Be
any relations to Amazon :)
I haven't tested them out (yet) but their main development centre is right
around the corner from me.
IMHO more important than working towards contributing clean code would be to
*publish the (required) interfaces for the modules as well as give standards
for/details on the knowledge representation format*. I am sure that you have
those spread over various internal and published documents.
Sounds like the worst-case scenario: computations that need between, say, 20 and
100 PCs. Too big to run on a very souped-up server (4-way quad processor with
128 GB RAM), but scaling up to a 100-PC Beowulf cluster typically means a factor-10
slow-down due to communications (unless it's a
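The factor-10 figure is easy to reproduce with a toy cost model. This is my own illustration, not anything from the thread: the per-node communication overhead of 0.001 is an assumed number chosen to make the effect visible.

```python
# Toy speedup model (illustrative): compute time shrinks with the node
# count while per-node communication overhead grows with it.
# The 0.001 overhead figure is an assumption, not a measurement.
def speedup(n_nodes, comm_overhead=0.001):
    compute = 1.0 / n_nodes          # perfectly parallel work
    comm = comm_overhead * n_nodes   # communication cost grows with cluster size
    return 1.0 / (compute + comm)

# 100 nodes yield roughly a 9x speedup rather than 100x -
# about a factor-10 loss to communications.
print(round(speedup(100), 2))  # -> 9.09
```

Under this model there is even a sweet spot: past some cluster size, adding nodes makes the job slower, which is exactly the worst-case scenario described above.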
Hi Matt, Wonderful idea - now it will even show the typical human trait of
lying... When I ask it 'do you still love me?' most answers in its database will
have Yes as an answer, but when I ask it 'what's my name?' it'll call me John?
However, your approach is actually already being implemented to
Interesting - after drafting three replies I have come to realize that it is
possible to hold two contradictory views and live or even run with it. Looking
at their writings, both Ben and Richard know damn well what complexity means and
entails for AGI.
Intuitively, I side with Richard's stance
By coincidence, whilst the debate was raging last night (local time :), I was
busy reading 'Studying Those Who Study Us: An Anthropologist in the World of
Artificial Intelligence' (Stanford University Press, 2001), which is a
posthumous collection of academic essays by Diana Forsythe. She roamed
When commenting on a lot of different items in a posting, in-line responses
make more sense, and using ALL-CAPS is one accepted way of doing it in an email
client/platform neutral manner. I for one do it often when responding to
individual emails so I don't mind at all. I do *not* associate it
All interesting (and complex!) phenomena happen at the edges/fringe. Boundary
conditions seem to be a prerequisite for complexity. Life originated on a planet
(~10^-10 of space), on its surface (~10^-10 of its volume). 99.99+% of a
fractal curve's area is boring; it's just the edges of a very small
Well-said Samantha :-)
On a different note: something YKY and Mark may want to read about a
possible approach to running a new AGI consortium: eXtreme Research. A
software methodology for applied research: 'eXtreme Researching' by
Olivier Chirouze, David Cleary and George G. Mitchell (Software.
Hi Matt
Re Halting/non-halting programs:
This try-out works fine for small values of {program length}. For large values
the problem is essentially unsolvable, though I admit that you could get a fair
feeling for the distribution by simulating a large number of randomly generated
programs. The
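The sampling idea can be sketched with a toy machine. This is entirely my own construction - a hypothetical 3-instruction language, not anything from Matt's setup: generate random programs, run each under a step budget, and count how many halt within it (programs that exhaust the budget are treated as non-halting, which is of course only an approximation).

```python
import random

# Toy 3-instruction machine (hypothetical, for illustration only):
# INC/DEC modify a counter; JNZ jumps back if the counter is nonzero.
def run(program, max_steps=1000):
    counter, pc, steps = 0, 0, 0
    while pc < len(program):
        if steps >= max_steps:
            return False          # budget exhausted: treated as non-halting
        op, arg = program[pc]
        if op == "INC":
            counter += 1
        elif op == "DEC":
            counter = max(0, counter - 1)
        elif op == "JNZ" and counter != 0:
            pc = arg
            steps += 1
            continue
        pc += 1
        steps += 1
    return True                   # fell off the end: halted

def sample_halting_fraction(length=5, trials=2000, seed=0):
    rng = random.Random(seed)
    halted = 0
    for _ in range(trials):
        prog = [(rng.choice(["INC", "DEC", "JNZ"]), rng.randrange(length))
                for _ in range(length)]
        halted += run(prog)
    return halted / trials
```

For example, `run([("INC", 0), ("JNZ", 0)])` loops forever and is cut off by the budget, while a program of all DECs falls off the end and halts. The estimated fraction is only valid relative to the chosen step budget - exactly the caveat about the problem being unsolvable in general.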
Hey but it makes for an excellent quote. Facts don't have to be true if they're
beautiful or funny! ;-)
Sorry Eliezer, but the more famous you become, the more these types of
apocryphal facts will surface... most not even vaguely true... You should be
proud and happy! To quote Mr Bean 'Well, I
Ok, Panu, I agree with *your statement* below.
[Meta: Now how much credit do I get for operationalizing your idea?]
Panu Horsmalahti [EMAIL PROTECTED] 06/04/07 10:42 PM
Now, all we need to do is find 2 AGI designers who agree on something.
Except that Ogden only included a very few verbs [be, have, come - go, put -
take, give - get, make, keep, let, do, say, see, send; 'cause' and
'because' are occasionally used as operators; 'seem' was later added.] So in
practice people use about 60 of the nouns as verbs, diminishing the
Hi Mike
Just Google 'Ogden' and/or Basic English - there's lots of info.
And if you doubt that only a few verbs are sufficient, then obviously you need
to do some reading: anyone interested in building AGI should be familiar with
Schank's (1975) conceptual dependency theory, which deals with
of programming (in any conventional sense) a mind to handle
them.
- Original Message -
From: Jean-Paul Van Belle
To: agi@v2.listbox.com
Sent: Tuesday, June 05, 2007 5:44 PM
Subject: Re: [agi] Minimally ambiguous languages
Hi Mike
Just Google 'Ogden' and/or Basic English
Sorry, yes, you're right - I should and would not call Schank's approach
discredited (though he does have his critics). FWIW I think he got much closer
than most of the GOFAIers, i.e. he's one of my old-school AI heroes :) I thought
for a long time his approach was one of the quickest ways to AGI
Synergy or win-win between my work and the project i.e. if the project
dovetails with what I am doing (or has a better approach). This would require
some overlap between the project's architecture and mine. This would also
require a clear vision and explicit 'clues' about deliverables/modules
to ask
people on the list quickly to indicate their interest and/or willingness to
participate in your scheme (by emailing either of you directly rather than the
list)?
Just my thoughts...
=Jean-Paul Van Belle
PS @Mike/J Stors - yes I remember the Hilbert spaces posting as well but
skipped
Thx for your response, Ben (and for the many other contributions on the
list!)
Re Hebbian neural nets - I assume you could compute an eigendecomposition or
some other heuristic approximation (to matrix**n) to speed up
calculations. However, the matrix changes dynamically each time your AGI
learns.
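For a fixed weight matrix the speed-up is real. A minimal NumPy sketch (my own, assuming the matrix is diagonalizable - the example matrix and names are illustrative, not from the thread):

```python
import numpy as np

# Approximate A**n via eigendecomposition: A = V diag(w) V^-1,
# hence A**n = V diag(w**n) V^-1. Assumes A is diagonalizable.
def matrix_power_eig(A, n):
    w, V = np.linalg.eig(A)
    return (V @ np.diag(w ** n) @ np.linalg.inv(V)).real

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # toy Hebbian-style weight matrix

assert np.allclose(matrix_power_eig(A, 10), np.linalg.matrix_power(A, 10))
```

The catch is exactly the one noted above: once learning updates the matrix, the decomposition is stale and must be recomputed, so the saving only pays off between updates.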
The provable *social* AI
was indeed a very sexy sheila
but she became too emotional
and her brain too irrational
so her creator killda
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21
Derek Zahn
Check bigrams (or, more interestingly, trigrams) in computational
linguistics.
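For anyone who wants to play with this, n-gram counting takes only a few lines (a generic sketch, not tied to any particular corpus or toolkit; the sample sentence is made up):

```python
from collections import Counter

# Generic n-gram extraction: slide a window of n tokens across the list.
def ngrams(tokens, n):
    return list(zip(*(tokens[i:] for i in range(n))))

tokens = "the cat sat on the cat mat".split()
bigrams = Counter(ngrams(tokens, 2))
trigrams = Counter(ngrams(tokens, 3))

print(bigrams[("the", "cat")])       # -> 2
print(trigrams[("the", "cat", "sat")])  # -> 1
```

The bigram counts are exactly the co-occurrence statistics computational linguists build language models from; trigrams just widen the window.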
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21
Eric Baum [EMAIL PROTECTED] 2007/05/23 15:36:20
One way to parametrize
Universal compassion and tolerance are the ultimate consequences of
enlightenment
which one Matt on the list equated IMHO erroneously to high-orbit
intelligence
methinx subtle humour is a much better proxy for intelligence
Jean-Paul
member of the 'let Murray stay' advocacy group aka 'the write
Interesting question you raise there, Matt (vs :) YKY
How many of us would be prepared to work FULL-TIME on AGI:
(0) If a department of defense/military organisation paid you to develop a
secret AGI for national defense/intelligence purposes?
(1) If a Microsoft, Google, Sun or IBM came along and
to find a word in a big list you should really use a dictionary / hash
table instead of binary search... ;-)
(ok i know that wasnt the point you were trying to make :)
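The point generalises: for membership tests, a hash-based structure is O(1) on average versus O(log n) for binary search over a sorted list. A tiny sketch with made-up example words:

```python
import bisect

words = sorted(["apple", "banana", "cherry", "mango", "pear"])

# Binary search over a sorted list: O(log n) per lookup.
def in_sorted(word):
    i = bisect.bisect_left(words, word)
    return i < len(words) and words[i] == word

# Hash-based lookup: O(1) on average.
word_set = set(words)

assert in_sorted("cherry") and "cherry" in word_set
assert not in_sorted("kiwi") and "kiwi" not in word_set
```

Binary search still wins when you also need ordered operations (ranges, prefixes); for plain "is this word in the list?" the hash table is the right tool.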
Jean-Paul
PS: [META] - people, pls cut off long message includes - some of us
don't enjoy always-on high bandwidth :(
You're mostly correct about the word symbols (barring onomatopoeic words
such as bang, hum, clip-clop, boom, hiss, howl, screech, fizz, murmur, clang,
buzz, whine, tinkle, sizzle and twitter, as well as prefixes, suffixes and
derived wordforms, which all allow one to derive some meaning).
However you are NOT correct
silly) words (a la the meaning
of liff etc.) we *all* recognise the concepts/feelings/situations which
these words map to and can see quite well why these should/could be
given a special word.
Jean-Paul Van Belle
On 4/29/07, Richard Loosemore [EMAIL PROTECTED] wrote:
The idea that human beings
@ Mike: remember that she wasn't blind/deaf from birth - read her
autobiographical account (available on Project Gutenberg - which is an
excellent corpus source btw - also available on DVD :) for how she
finally hooked up the concept of words as tokens for real-world
concepts when linking the word
space (likely to be in
intellectual domains such as maths) I doubt that we will ever be able to
recognize what it does (try reading an advanced maths, physics or
theology/philosophy book)
Jean-Paul Van Belle
Jef Allbright [EMAIL PROTECTED] 2007/04/15 21:40:06
While such a machine
I guess (50 to 100 modules) x (500 to 2500 LOCs) x fudge factor x
language factor,
with fudge factor = 2 to 4 and language factor = 1 for e.g. Python, 5 for
e.g. C++,
i.e. minimum 50 kLOCs (Python), which is what I wishfully think;
realistically probably closer to 5000 kLOCs C++
that's of course for the
could always inline C. So Python it is
for my first prototype. I don't recommend people change their current
language tho if they're happy with it. Still early days for me.
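The estimate above multiplies out as follows (just the post's own numbers, checked; nothing added):

```python
# Bounds implied by the estimate:
# modules x LOC/module x fudge factor x language factor.
low  =  50 *  500 * 2 * 1   # optimistic bound, Python
high = 100 * 2500 * 4 * 5   # pessimistic bound, C++

print(low)   # -> 50000    (~50 kLOCs Python)
print(high)  # -> 5000000  (~5000 kLOCs C++)
```

So the two headline figures, 50 kLOCs and 5000 kLOCs, really are the extreme corners of that product.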
YKY (Yan King Yin) [EMAIL PROTECTED] 2007/03/29
15:58:45
On 3/29/07, Jean-Paul Van Belle [EMAIL PROTECTED] wrote:
I guess
IMHO
IF you can provide a learning environment similar in complexity as our
world
THEN maximum code size (zipped using Matt Mahoney's algorithm) < portion
of non-redundant DNA devoted to the brain
/IMHO
Some random thoughts.
Any RAM location can link to any other RAM location so there are more
kevin.osborne [EMAIL PROTECTED] 2007/03/28 15:57
as a techie: scepticism. I think the 'small code' and 'small
hardware'
people are kidding themselves.
from concentrating their thinking on
AGI aspects where current projects are weak.
- how can I ever get this listserv to move to digest mode? I must have
tried 20 times using both IE and FF to no avail (the singularity one
works fine) ;-)
Ok that was me. Others?
Jean-Paul Van Belle
Department
I like the metaphor. The other good reason NOT to go for neuroscience
(i.e. against Ray Kurzweil's uploading the human brain argument) is that
it may *not* be scalable. Nature may well have pushed close to the limit
of biological intelligence (argument in favour = superior intelligence
is a strong