Richard,
Unfortunately I cannot bring myself to believe this will help anyone new
to the area.
The main reason is that this is only a miscellaneous list of topics,
with nothing to indicate a comprehensive theory or a unifying structure.
I do not ask for a complete unified theory, of
On Tue, Mar 25, 2008 at 9:39 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
Unfortunately I cannot bring myself to believe this will help anyone new
to the area.
The main reason is that this is only a miscellaneous list of topics,
with nothing to indicate a comprehensive
I'll try to find the time to provide my list --- at this moment, it
will be more like a reading list than a textbook TOC.
That would be great -- however I may integrate your reading
list into my TOC ... as I really think there is value in a structured
and categorized reading list rather than
the time right now - but
it is a worthwhile endeavor, and I'm happy to do it.
~Aki
On Tue, Mar 25, 2008 at 6:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi all,
A lot of students email me asking me what to read to get up to speed on
AGI.
So I started a wiki
I actually recently purchased Artificial Intelligence: A Modern
Approach - but only because I did not know where else to start.
It's a very good book ... if you view it as providing insight into various
component technologies of potential use for AGI ... rather than as saying
very much
On Tue, Mar 25, 2008 at 11:07 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
Thanks Ben. That is really exciting stuff / news. I'm looking forward to
OpenCog.
BTW - is OpenCog mainly in C++ (like Novamente) ? Or is it translations (to
Java, or other languages) of concepts so that others can code
want to propose. We shouldn't
try to merge them into one wiki page, but rather keep several.
Pei
On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi all,
A lot of students email me asking me what to read to get up to speed on
AGI.
So I started a wiki
http://www.codeplex.com/singularity
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
).
Some proposal ideas are found here
http://opencog.org/wiki/Ideas
but we're quite open to other suggestions as well, in the freewheeling spirit
of GSOC...
Thanks
Ben
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller
... or whatever the set of objects in the toy
world may be...
This is the danger of toy test environments, be they in virtual worlds or
physical robotics...
ben g
On Thu, Mar 13, 2008 at 12:35 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Unless the details of that modified Turing Test are somehow
An attractor is a set of states that are repeated given enough time. If
agents are killed and not replaced, you can't return to the current state.
False. There are certainly attractors that disappear; this was first
seen by Ruelle and Takens (1971), and it's called a blue sky catastrophe.
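The disappearing-attractor point can be illustrated numerically. The toy map below (my own illustration, not anything from the thread or from Ruelle/Takens) integrates the flow x' = r + x^2: for r < 0 there is a stable fixed point at -sqrt(-r), and when r crosses zero that attractor vanishes in a saddle-node bifurcation, the simplest relative of the blue sky catastrophe mentioned above.

```python
# Euler integration of x' = r + x**2; a hypothetical minimal sketch of an
# attractor that exists for r < 0 and disappears for r > 0.

def simulate(r, x0=0.0, dt=0.01, steps=20000):
    x = x0
    for _ in range(steps):
        x = x + dt * (r + x * x)
        if x > 1e6:          # trajectory escaped: no attractor was reached
            return None
    return x

print(simulate(-1.0))  # converges near -1.0: the attractor exists
print(simulate(+1.0))  # None: the attractor has disappeared
```

So the quoted claim ("if you can't return to the current state there's no attractor") fails exactly at the bifurcation: the states near -1 are revisited for r < 0, and simply cease to be revisitable once r > 0.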
The three most common of these assumptions are:
1) That it will have the same motivations as humans, but with a
tendency toward the worst that we show.
2) That it will have some kind of Gotta Optimize My Utility
Function motivation.
3) That it will have an intrinsic urge to
http://www.memphisdailynews.com/Editorial/StoryLead.aspx?id=101671
Sure, AGI needs to handle NL in an open-ended way. But the question is
whether the internal knowledge representation of the AGI needs to allow
ambiguities, or whether we should use an ambiguity-free representation. It seems
that the latter choice is better. Otherwise, the knowledge stored in
Using informal words, how would you describe the metaphysics or
biases currently encoded into the Novamente system?
/Robert Wensman
This is a good question, and unfortunately I don't have a
systematic answer. Biases are encoded in many different
aspects of the design, e.g.
-- the knowledge
model of intelligence IMHO.
My thinking is that a more-universal theoretical prior would be a prior over
logically definable models, some of which will be incomputable.
Any thoughts?
This is a general theorem about *strings* in this formal system, but
no such string denoting an uncomputable real number can ever be written, so
saying that it's a theorem about uncomputable real numbers is an empty
set theory (it's a true statement, but it's true in a trivial
falsehood,
Hi,
I think Ben's text mining approach has one big flaw: it can only reason
about existing knowledge, but cannot generate new ideas using words /
concepts.
Text mining is not an AGI approach, it's merely a possible way of getting
knowledge into an AGI.
Whether the AGI can generate new ideas
I'm not talking about inference control here -- I assume that inference
control is done in a proper way, and there will still be a problem. You
seem to assume that all knowledge = what is explicitly stated in online
texts. So you deny that there is a large body of implicit knowledge other
d) you keep repeating the illusion that evolution did NOT achieve the
airplane and other machines - oh yes, it did - your central illusion here is
that machines are independent species. They're not. They are EXTENSIONS of
human beings, and don't work without human beings attached.
Well, what I and embodied cognitive science are trying to formulate
properly, both philosophically and scientifically, is why:
a) common sense consciousness is the brain-AND-body thinking on several
levels simultaneously about any given subject...
I don't buy that my body plays a
It could be done with a simple chain of word associations mined from a text
corpus: alert - coffee - caffeine - theobromine - chocolate.
That approach yields way, way, way too much noise. Try it.
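The association-chain idea above can be sketched in a few lines over a toy corpus (the corpus and the code are my own illustration, not from any actual text-mining system): count which words co-occur in a sentence, then search for a path of associations between two words.

```python
# Hypothetical sketch of mining a word-association chain from co-occurrence,
# as in the alert -> coffee -> caffeine -> theobromine -> chocolate example.
from collections import defaultdict
from itertools import combinations

corpus = [
    "coffee keeps you alert",
    "coffee contains caffeine",
    "caffeine resembles theobromine",
    "chocolate contains theobromine",
]

cooc = defaultdict(set)          # word -> set of co-occurring words
for sentence in corpus:
    for a, b in combinations(sentence.split(), 2):
        cooc[a].add(b)
        cooc[b].add(a)

def chain(start, goal, seen=None):
    """Depth-first search for any association path start -> goal."""
    seen = seen or {start}
    if goal in cooc[start]:
        return [start, goal]
    for nxt in cooc[start]:
        if nxt not in seen:
            path = chain(nxt, goal, seen | {nxt})
            if path:
                return [start] + path
    return None

print(chain("alert", "chocolate"))
```

Note that the search is just as happy to route a chain through function words like "contains" as through content words, which is exactly the noise problem raised above.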
But that is not the problem. The problem is that the reasoning would be
faulty, even with
yet I still feel you dismiss the text-mining approach too glibly...
No, but text mining requires a language model that learns while mining. You
can't mine the text first.
Agreed ... and this gets into subtle points. Which aspects of the
language model
need to be adapted while mining,
--
Ben
Knowing how to carry out inference can itself be procedural knowledge,
in which case no explicit distinction between the two is required.
--
Vladimir Nesov
Representationally, the same formalisms can of course be used for both
procedural and declarative knowledge.
The slightly subtler
SAT/SMT in step 3 ... but, using these
techniques within NM/OpenCog is also a possibility down the road, I've
been studying the possibility...
-- Ben
On Tue, Feb 26, 2008 at 6:56 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On 2/25/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi
)
[EMAIL PROTECTED] wrote:
On 2/26/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Obviously, extracting knowledge from the Web using a simplistic SAT
approach is infeasible
However, I don't think it follows from this that extracting rich
knowledge from the Web is infeasible
It would
Your piano example is a good one.
What it illustrates, I suggest, is:
your knowledge of, and thinking about, how to play the piano, and perform
the many movements involved, is overwhelmingly imaginative and body
knowledge/thinking (contained in images and the motor parts of the brain
No one in AGI is aiming for common sense consciousness, are they?
The OpenCog and NM architectures are in principle supportive of this kind
of multisensory integrative consciousness, but not a lot of thought has gone
into exactly how to support it ...
In one approach, one would want to have
You guys seem to think this - true common sense consciousness - can all be
cracked in a year or two. I think there's probably a lot of good reasons -
and therefore major creative problems - why it took a billion years of
evolution to achieve.
I'm not trying to emulate the brain.
short inferences paths pass, or something like
that.
Ed Porter
-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 20, 2008 5:54 PM
To: agi@v2.listbox.com
Subject: Re: [agi] would anyone want to use a commonsense KB?
And I seriously
we'd like...
ben
C is not very viable as of now. The physics in Second Life is simply not
*rich* enough. SL is mainly a space for humans to socialize, so the physics
will not get much richer in the near future -- is anyone interested in
emulating cigarette smoke in SL?
Second Life will soon be integrating
.
The question is: do enough heuristics make an autogenous AI, or is there
something more fundamental to its structure?
On Wednesday 20 February 2008 12:27:59 pm, Ben Goertzel wrote:
The trick to understanding "once in a blue moon" is to either
-- look at the moon
or
-- ask someone
To me, the moon varies from a deep orange to brilliant white depending on
atmospheric conditions and time of night... none of which would help me
understand the text references.
On Wednesday 20 February 2008 02:02:52 pm, Ben Goertzel wrote:
On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD
in the right direction...
Ben
at 3:45 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Ben Goertzel [EMAIL PROTECTED] wrote:
I note also that a web-surfing AGI could resolve the color of the moon
quite easily by analyzing online pictures -- though this isn't pure
text mining, it's in the same spirit...
Not really. You
On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
OK, imagine a lifetime's experience is a billion symbol-occurences. Imagine
you have a heuristic that takes the problem down from NP-complete (which it
almost certainly is) to a linear system, so there is an N^3
:
A PROBABILISTIC logic network is a lot more like a numerical problem than a
SAT problem.
On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote:
On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
OK, imagine a lifetime's experience is a billion symbol
To get back to Ben's statement: Is the computer chip industry happy with
contemporary SAT solvers
Well they are using them, but of course there is loads of room for improvement!!
or would a general solver that is capable of
beating n^4 time be of some use to them? If it would be useful, then
And I seriously doubt that a general SMT solver +
prob. theory is going to beat a custom probabilistic logic solver.
My feeling is that an SMT solver plus appropriate subsets of prob theory
can be a very powerful component of a general probabilistic inference
framework...
I can back this up
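As a toy illustration of the "resolve complex queries via satisfiability" idea discussed in this exchange: to check that a knowledge base entails a query, one shows that the KB plus the negation of the query is unsatisfiable. The brute-force checker below is a hypothetical pure-Python stand-in for a real SAT/SMT solver (MiniSat, Z3, etc.), and the rain/wet/slippery KB is my own example.

```python
# Brute-force CNF satisfiability; literals are signed ints (+v true, -v false).
from itertools import product

def satisfiable(clauses, variables):
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

# KB: 1 -> 2, 2 -> 3, and fact 1   (vars: 1=rain, 2=wet, 3=slippery)
kb = [[-1, 2], [-2, 3], [1]]
query = 3
# KB entails the query iff KB plus the query's negation is unsatisfiable.
entailed = not satisfiable(kb + [[-query]], [1, 2, 3])
print(entailed)  # True
```

A real solver replaces the exponential model enumeration with clever search, and an SMT solver additionally handles theories (arithmetic, arrays, ...) rather than plain propositions; the shape of the entailment query is the same.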
Yes, I'd like to hear others' opinion on Cyc. Personally I don't think it's
the perceptual grounding issue -- grounding can be added incrementally
later. I think Cyc (the KB) is on the right track, but it doesn't have
enough rules.
I do think it's possible a Cyc approach could work if one
I agree, that might be a viable approach. But the key phrase is
Encode some simple knowledge, instruct the system in how to ground it
in its sensorimotor experience - i.e. you're _not_ spending a decade
writing a million assertions and _then_ looking for the first time at
the grounding
In other words, maybe what you think needs to be gotten from grounding
in a nonlinguistic domain, could somehow be gotten indirectly via grounding
in masses of text?
I am not confident this is feasible, nor that it isn't ... and it's
not the approach
I'm following ... but I'm
an
AGI -
in symbols-AND- graphics/schemas-AND detailed images - simultaneously,
interdependently - that we are the greatest movie on earth with
words/symbols-AND-pictures.
Perhaps it will start to give you a sense that words and indeed all symbols
provide an extremely limited *inventory of the world* and all its infinite
parts and behaviours.
I welcome any impressionistic responses here, including confused questions.
I agree with the above, but I think one
Hi Mike,
P.S. I also came across this lesson in why AGI forecasting must stop (I used
to make similar mistakes elsewhere):
We've been at it since mid-1998, and we estimate that within 1-3 years from
the time I'm writing this (March 2001), we will complete the creation of a
program that
8. Generative Programming, Methods, Tools, and Applications (2000) -
Krzysztof Czarnecki, Ulrich W. Eisenecker
The above is a very good book, IMO ... not directly AGI-related, but
damn insightful re generative software design...
-
This list is sponsored by AGIRI:
in various emails...
-- Ben G
p.s. for those who don't know what opencog is, see
http://opencog.org/wiki/Main_Page
IMO language is integral to strong AI in the same way that logic is
integral to mathematics.
The counterargument is that no one has yet made an AI virtual chimp ...
and nearly all of the human brain is the same as that of a chimp ...
I think that language-centric approaches are viable, but I
Hi,
I'd be interested in what you see as the path from SLAM to AGI.
To me, language generation seems obvious: 1. Make a language and
algorithms for generating stuff in that language. 2. Implement pattern
recognition and abstraction (imo not _that_ hard if you've designed
your language well)
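Step 1 above, make a language plus an algorithm for generating stuff in it, can be sketched with a toy context-free grammar. The grammar and vocabulary below are invented for illustration; a real system would of course need a far richer language.

```python
# Minimal sketch of generation from a hand-made micro-language:
# expand nonterminals recursively, picking a random production each time.
import random

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["ball"]],
    "V":  [["sees"], ["pushes"]],
}

def generate(symbol="S", rng=random):
    if symbol not in grammar:          # terminal word: emit it
        return [symbol]
    expansion = rng.choice(grammar[symbol])
    return [w for part in expansion for w in generate(part, rng)]

print(" ".join(generate()))  # e.g. "the robot sees the ball"
```

Step 2 (pattern recognition and abstraction over such utterances) is the genuinely open part, which is presumably why the poster hedges with "imo not _that_ hard".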
Google
already knows more than any human,
This is only true, of course, for specific interpretations of the word
"know" ... and NOT for the standard ones...
and can retrieve the information faster,
but it can't launch a singularity.
Because, among other reasons, it is not an intelligence, but
In other words, flies seem to possess the same kind of internal
spontaneity-generation that we possess, and that we associate with our
subjectively-experienced feeling of free will.
-- Ben G
To clarify further:
Suppose you are told to sit still for a while, and then move your hand
suddenly
The question vis-a-vis the fly - or any animal - is whether the *whole*
course of action of the fly in that experiment can be accounted for by one -
or a set of - programmed routines or programs, period. My impression -
without having studied the experiment in detail - is that it weighs against
Possible major misunderstanding : I am not in any shape or form a vitalist.
My argument is solely about whether a thinking machine (brain or computer)
has to be instructed to think rigidly or freely, with or without prior
rules - and whether, with the class of problems that come under AGI,
As far as I know there is little or no work done yet to integrate
probabilistic
reasoning with these solvers and it will probably not be easy to do it and
keep things efficient.
I don't think it will be easy, but what's intriguing is that it seems
like it might
be feasible-though-difficult
reduced it to linear programming
somehow.
Thanks
Ben Goertzel
List Owner
On Jan 20, 2008 1:51 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Jim,
I'm sure most people here don't have any difficulty understanding what
you are talking about. You seem to lack solid understanding of these
basic
On Jan 20, 2008 2:34 PM, Jim Bromer [EMAIL PROTECTED] wrote:
I am disappointed because the question of how a polynomial time solution of
logical satisfiability might affect agi is very important to me.
Well, feel free to start a new thread on that topic, then ;-)
In fact, I will do so: I will
How effective are current SMT solvers, then?
If they are effective, then SMT could prove an interesting tool within an AGI
inference engine... a way of relatively rapidly resolving complex queries...
-- Ben
... as has been pointed out on this list so many times ... and as
Eric Baum argues quite elegantly (among other points) in What Is Thought?
ben
We are on the edge of change comparable
http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
... I think that it is at best peripheral to any
really serious AGI approach.
However, some serious AGI thinkers, such as Doug Lenat,
believe otherwise.
And, this list is about AGI in general, not about any specific
approaches to AGI.
So, the thread can stay...
-- Ben Goertzel, list owner
Bob
hypothesize is used
in the brain...
thx
ben
On 3/25/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
Does anyone know if the term glocal (meaning global/local) has
previously been used in the context of
AI knowledge representation?
While not recognized as a formal term of knowledge representation
The project was founded officially in 2001 but for much of the time
between 2001 and 2004 there was NOBODY working on it full time. All
of us founders had day jobs, either actual jobs or AI consulting
jobs, to pay the bills.
For the last couple years there were 2-3 people working
Hi all,
As there has been a lot of discussion of the Novamente AI system on this
list,
it seems apropos to announce here that Novamente LLC has decided upon a
significant shift in business direction/approach.
If you're curious a pertinent company blog entry is here:
Hi,
- what is the REAL reason highly talented AGI research groups keep
pushing their deadlines back. E.g. Ben's announced imminent breakthroughs
several times ... the one fact he mentioned a few years back that made
sense is the huge parameter space/degrees of freedom (you have at
least 5 to 10
Hi,
Does anyone know if the term glocal (meaning global/local) has
previously been used in the context of
AI knowledge representation?
thx
Ben G
If we're talking language for AGI _content_ (as opposed to framework,
for which Ben Goertzel has made a fair case even for C++), then more
like removal of features. Because for AGI content, it's not what you
can do in principle, it's what you can be _casual_ with.
Correct
Is the research on AI full of Math because there are many Math
professors that publish in the field, or is the problem really Math
related? Many PhDs in computer science are Math-oriented exactly
because the professors that deem their work worth a PhD are either
The fact that C, C++, and (I would presume) C# have pointers precludes any of
these from my list up front. There can be no boundary checks at either
compile or execution time, so this feature alone is incompatible with a
higher-level language IMO.
FYI, C# has no pointers generically, but you
As for all the other talk on this list, recently, about programming
languages and the need for math, etc., I find myself amused by the
irrelevance of most of it: when someone gets a clue about what they
are trying to build, and why, the question of what language (or
environment) they
Richard Loosemore wrote:
Ben Goertzel wrote:
Richard Loosemore wrote:
As for all the other talk on this list, recently, about programming
languages and the need for math, etc., I find myself amused by the
irrelevance of most of it: when someone gets a clue about what they
are trying
rooftop8000 wrote:
one chooses a
decent option and gets on with it.
-- Ben
That's exactly the problem... everyone just builds their
own ideas and doesn't consider how their ideas and code could
(later) be used by other people
I'm not at all sure something like AGI is well-suited to
Mark Waser wrote:
IMO, creating an AGI isn't really a programming problem. The hard
part is knowing exactly what to program.
Which is why it turns into a programming problem . . . . I
started out as a biochemist studying enzyme kinetics. The only
reasonable way to get a reasonable
Samantha Atknis wrote:
Ben Goertzel wrote:
Regarding languages, I personally am a big fan of both Ruby and
Haskell. But, for Novamente we use C++ for reasons of scalability.
I am curious as to how C++ helps scalability. What sorts of
scalability? Along what dimensions? There are ways
BTW I think I have answered that question at least 5 times on this list
or on the SL4 list. I'm almost motivated to make a Novamente FAQ to
avoid this sort of repetition!!!
ben
Ben Goertzel wrote:
Samantha Atknis wrote:
Ben Goertzel wrote:
Regarding languages, I personally am a big
David Clark wrote:
I appreciate the amount of effort you made in replying to my email.
Most of your questions would be answered if you read the documentation
on my site. The last time I looked, LISP had no built-in database.
Allegro Lisp has a very nice (easy to use, scalable, damn fast)
Chuck Esterbrook wrote:
On 3/20/07, Ben Goertzel [EMAIL PROTECTED] wrote:
I would certainly expect that a mature Novamente system would be able to
easily solve this kind of
invariant recognition problem. However, just because a human toddler
can solve this sort of problem easily, doesn't
mean
!
Thanks
Ben Goertzel
(list owner/ moderator)
David Clark wrote:
I put up with 1 person out of all the thousands of emails I get who insisted
on sending standard text messages as an attachment. Because of virus
infections, I had normally set all emails with attachments to automatically
get put
Richard
But where you (I believe) start to confuse the picture is by selecting
an example of an 'emergent' system that is a special case. Hopfield
nets are barely complex enough to have any emergent properties: in
fact, they were pretty much engineered so that they could be analysed
Eric Baum wrote:
Hayek doesn't directly scale from a random start
to an AGI architecture, inasmuch as the
learning is too slow. But the same is true of any other means of
EC or learning that doesn't start with some huge head start.
It seems entirely reasonable to merge a Hayek like architecture
Hi,
P.S. About Daniel Amit:
I haven't read the book, but are you saying he demonstrates coherent,
*meaningful* symbol processing as the transition of the dynamics
through the lobes of an ultracomplex set of attractor lobes? Like,
reasoning with the symbols, or something?
And that he
For people who might be interested in influencing some of the
features of this system, I would appreciate them looking at my
documentation at www.rccconsulting.com/hal.htm
http://www.rccconsulting.com/hal.htm Although my system isn't quite
ready for alpha distribution yet, I expect that it
FreeBase should be a really wonderful resource for early-stage AGIs
to learn from...
-- Ben
I think Danny Hillis became consumed with FreeBasing. ;-)
See http://www.edge.org/documents/archive/edge205.html for a recent
report on his newly announced open database project.
- Jef
David Clark wrote:
If you were introducing a radically new programming paradigm for AGI, I
would be more interested Not that I think this is necessary to
achieve AGI, but I would find it more intellectually stimulating ;-)
If you care to detail what kind of problem or structure you
Shane Legg wrote:
Ben, I didn't know you were a Ruby fan...
Cassio has gotten me into Ruby ... but in Novamente it's used only
for prototyping, the real system is C++
For some non-AGI consulting projects we have also used Ruby.
Ruby runs slowly, but, other than that, it's a great language.
KIF would be a highly practical lingua franca
Lojban would work fine too
I agree that using English to interface btw modules of an AGI system
seems suboptimal...
I am glad that the different components of my brain don't need to
communicate using English ;-_)
Ben
Jey Kottalam wrote:
On
Kevin Cramer wrote:
I tested this and it is very very poor at invariant recognition. I am
surprised they released this given how bad it actually is. As an example I
drew a small A in the bottom left corner of their draw area. The program
returns the top 5 guesses on what you drew. The letter
rooftop8000 wrote:
Hi, I've been thinking for a bit about how a big collaboration AI project
could work. I browsed the archives and I see you guys have similar
ideas
I'd love to see someone build a system that is capable of adding any
kind of AI algorithm/idea to. It should unite the power
G
Russell Wallace wrote:
On 3/19/07, *Ben Goertzel* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
Minsky is not big on emergence
This is an interesting point.
I'm not big on emergence, not in artificial systems anyway. It
produced us, sure, but that's one planet with intelligence out
J. Storrs Hall, PhD. wrote:
On Monday 19 March 2007 17:30, Ben Goertzel wrote:
...
My own view these days is that a wild combination of agents is
probably not the right approach, in terms of building AGI.
Novamente consists of a set of agents that have been very carefully
sculpted to work
Russell Wallace wrote:
On 3/19/07, *Ben Goertzel* [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
According to the above definition, it is quite possible to engineer
systems with emergent properties, and to prove things about the
constraints on emergent system properties as well